| Property | Value |
|---|---|
| LLM Name | Mixtral 8x7B Instruct V0.1 DPO |
| Repository 🤗 | https://huggingface.co/cloudyu/Mixtral-8x7B-Instruct-v0.1-DPO |
| Model Size | 46.7B |
| Required VRAM | 93.6 GB |
| Updated | 2025-09-23 |
| Maintainer | cloudyu |
| Model Type | mixtral |
| Instruction-Based | Yes |
| Model Files | |
| Supported Languages | fr, it, de, es, en |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.37.0 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | </s> |
| Vocabulary Size | 32000 |
| Torch Data Type | bfloat16 |
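
The table above implies a straightforward loading path: a `MixtralForCausalLM` checkpoint in bfloat16, usable via the standard `transformers` auto classes. The following is a minimal sketch, assuming transformers ≥ 4.37.0 and roughly 93.6 GB of total GPU memory for the unquantized weights; the prompt content is illustrative, not from the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/Mixtral-8x7B-Instruct-v0.1-DPO"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer per the table
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches "Torch Data Type" above
    device_map="auto",           # shard across available GPUs
)

# Instruction-tuned Mixtral checkpoints use the [INST] chat format;
# apply_chat_template builds it from a list of messages.
messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```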
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Mixtral 8x7B Instruct V0.1 | 32K / 93.6 GB | 295678 | 4562 |
| ...xtral 8x7B Yes Instruct LimaRP | 32K / 93.5 GB | 8 | 0 |
| Mixtral 8x7B Instruct V0.1 FP8 | 32K / 47.1 GB | 746 | 0 |
| ...rkrautLM Mixtral 8x7B Instruct | 32K / 93.6 GB | 2940 | 22 |
| Mixtral 8x7B Instruct V0.1 FP8 | 32K / 47.1 GB | 374 | 0 |
| Merge Mixtral Prometheus 8x7B | 32K / 91.9 GB | 7 | 2 |
| Sage Ft Mixtral 8x7b | 32K / 90 GB | 8 | 24 |
| Dolphin 2.5 Mixtral 8x7b | 32K / 93.6 GB | 4125 | 1235 |
| Notux 8x7b V1 | 32K / 93.6 GB | 10 | 163 |
| Dolphin 2.5 Mixtral 8x7b | 32K / 93.6 GB | 2845 | 1237 |
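
As the FP8 rows above suggest, reduced-precision variants cut the memory footprint roughly in half (93.6 GB → 47.1 GB). For hardware below even that budget, on-the-fly 4-bit quantization via bitsandbytes is one common option. This is a sketch of that general technique, not a property of the listed checkpoints; quality and speed trade-offs apply.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize weights to 4-bit at load time; compute still runs in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "cloudyu/Mixtral-8x7B-Instruct-v0.1-DPO",
    quantization_config=bnb_config,
    device_map="auto",
)
```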
Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!