LLM Name | Openbuddy Mistral 10B V17.1 32K 8.0bpw H8 EXL2 |
---|---|
Repository 🤗 | https://huggingface.co/LoneStriker/openbuddy-mistral-10b-v17.1-32k-8.0bpw-h8-exl2 |
Model Size | 10b |
Required VRAM | 10.9 GB |
Updated | 2025-07-31 |
Maintainer | LoneStriker |
Model Type | mistral |
Model Files | |
Quantization Type | exl2 |
Model Architecture | MistralForCausalLM |
License | apache-2.0 |
Context Length | 32768 |
Model Max Length | 32768 |
Transformers Version | 4.38.0.dev0 |
Tokenizer Class | LlamaTokenizer |
Beginning of Sentence Token | <s> |
End of Sentence Token | </s> |
Unk Token | <unk> |
Vocabulary Size | 36608 |
Torch Data Type | bfloat16 |
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Contrail 200M 64K | 64K / 0.4 GB | 8 | 2 |
Contrail 200M 64K | 64K / 0.4 GB | 0 | 0 |
NarutoDolphin 10B | 32K / 21.5 GB | 921 | 2 |
Sirius 10B | 32K / 21.5 GB | 908 | 1 |
Mistral Passthrough 8L 10B | 32K / 14.5 GB | 1017 | 0 |
Occiglot10b DPO | 32K / 19.7 GB | 4 | 1 |
Openbuddy Mistral 10B V17.1 32K | 32K / 21.5 GB | 6 | 5
Voldemort 10B DPO | 8K / 21.4 GB | 917 | 0 |
Voldemort 10B | 8K / 21.5 GB | 914 | 0 |