| LLM Name | PRYMMAL 6B Slerp |
|---|---|
| Repository 🤗 | https://huggingface.co/LilRg/PRYMMAL-6B-slerp |
| Base Model(s) | |
| Merged Model | Yes | 
| Model Size | 6b | 
| Required VRAM | 6.4 GB | 
| Updated | 2025-09-17 | 
| Maintainer | LilRg | 
| Model Type | llama | 
| Model Files | |
| Model Architecture | LlamaForCausalLM | 
| License | apache-2.0 | 
| Context Length | 4096 | 
| Model Max Length | 4096 | 
| Transformers Version | 4.44.2 | 
| Tokenizer Class | LlamaTokenizer | 
| Padding Token | <unk> | 
| Vocabulary Size | 64000 | 
| Torch Data Type | bfloat16 | 
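The settings above map directly onto a `transformers` loading call. A minimal sketch, assuming the standard `AutoModel` API; the repository ID, `bfloat16` dtype, and 4096-token context come from the card, while the `device` choice is an illustrative assumption:

```python
MODEL_ID = "LilRg/PRYMMAL-6B-slerp"  # Repository listed in the card
CONTEXT_LENGTH = 4096                # Context Length from the card

def load(device: str = "cuda"):
    # Imports kept local so the constants above are usable even
    # without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # Torch Data Type from the card
    ).to(device)
    return tokenizer, model
```

Prompts longer than `CONTEXT_LENGTH` tokens would need truncation before generation, since both the context length and the model max length are 4096.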
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| Yi 6B 200K | 195K / 12.1 GB | 7312 | 172 | 
| Yi 6B 200K AEZAKMI V2 | 195K / 12.1 GB | 883 | 1 | 
| Yi 6B 200K DPO | 195K / 12.1 GB | 794 | 0 | 
| Wukong Yi 6B 200K | 195K / 12.1 GB | 6 | 1 | 
| Barcenas 6B 200K | 195K / 12.1 GB | 1848 | 2 | 
| Yi 6B 200K Llamafied | 195K / 12.1 GB | 7 | 11 | 
| Yi 6B 200K Llama | 195K / 12.1 GB | 1 | 5 | 
| Llama 3.2 6B AlgoCode | 128K / 12.7 GB | 6 | 1 | 
| Chatglm2 6B Port Llama | 32K / 12.5 GB | 2 | 4 | 
| Miqu 6B Truthy | 31K / 11.3 GB | 5 | 1 | 
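The alternatives table is easier to compare programmatically. A small sketch, assuming "K" in the context column denotes multiples of 1024, that holds the rows above as tuples and ranks them by downloads:

```python
# Rows copied from the "Best Alternatives" table:
# (name, context, RAM in GB, downloads, likes).
ROWS = [
    ("Yi 6B 200K",             "195K", 12.1, 7312, 172),
    ("Yi 6B 200K AEZAKMI V2",  "195K", 12.1,  883,   1),
    ("Yi 6B 200K DPO",         "195K", 12.1,  794,   0),
    ("Wukong Yi 6B 200K",      "195K", 12.1,    6,   1),
    ("Barcenas 6B 200K",       "195K", 12.1, 1848,   2),
    ("Yi 6B 200K Llamafied",   "195K", 12.1,    7,  11),
    ("Yi 6B 200K Llama",       "195K", 12.1,    1,   5),
    ("Llama 3.2 6B AlgoCode",  "128K", 12.7,    6,   1),
    ("Chatglm2 6B Port Llama",  "32K", 12.5,    2,   4),
    ("Miqu 6B Truthy",          "31K", 11.3,    5,   1),
]

def parse_context(ctx: str) -> int:
    """Convert a context label like '195K' into a token count
    (assumes K = 1024)."""
    return int(float(ctx.rstrip("K")) * 1024)

# Rank alternatives by download count, descending.
top = sorted(ROWS, key=lambda row: row[3], reverse=True)
```

By downloads, `top[0]` is Yi 6B 200K (7312 downloads); swapping the sort key to `row[4]` would rank by likes instead.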