4bit Quant TomGrc FusionNet 34Bx2 MoE V0.1 DPO is an open-source mixture-of-experts (MoE) language model published by cloudyu. Key figures: 61.7B parameters, 35.6 GB required VRAM, 195K context window, 4-bit quantized, Apache-2.0 license. Benchmark scores: HF Score 77, LLM Explorer Score 0.12, ARC 73.2, HellaSwag 86.1, MMLU 75.4, TruthfulQA 72.8, WinoGrande 83, GSM8K 71.2.
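Because the repository ships pre-quantized 4-bit weights for a MixtralForCausalLM model, it should load directly through the Hugging Face `transformers` API. The following is a minimal, hedged loading sketch rather than an official usage snippet from the maintainer: it assumes `transformers`, `accelerate`, and `bitsandbytes` are installed (the latter is typically needed for 4-bit checkpoints), and that roughly the 35.6 GB of VRAM listed below is available.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO"

# Tokenizer is a LlamaTokenizer with a 64K vocabulary (per the details table).
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The checkpoint is already 4-bit quantized, so no extra quantization config
# should be needed; device_map="auto" places layers across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Briefly explain what a mixture-of-experts language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```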
| Detail | Value |
|---|---|
| LLM Name | 4bit Quant TomGrc FusionNet 34Bx2 MoE V0.1 DPO |
| Repository 🤗 | https://huggingface.co/cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO |
| Model Size | 61.7B |
| Required VRAM | 35.6 GB |
| Updated | 2026-03-30 |
| Maintainer | cloudyu |
| Model Type | mixtral |
| Quantization Type | 4bit |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 200000 |
| Model Max Length | 200000 |
| Transformers Version | 4.37.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<s>` |
| Vocabulary Size | 64000 |
| Torch Data Type | bfloat16 |
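To confirm the figures above (model type, context length, vocabulary size) without downloading the full 4-bit weights, the model's configuration can be fetched on its own. A minimal sketch; the expected values in the comments are taken from the details table, not independently verified against the repository:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO"
)

print(config.model_type)               # expected: "mixtral" (per the table)
print(config.max_position_embeddings)  # expected: 200000
print(config.vocab_size)               # expected: 64000
print(config.torch_dtype)              # expected: torch.bfloat16
```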
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| 60B MoE Coder V2 | 195K / 35.6 GB | 86 | 1 |