| LLM Name | FusionNet 34Bx2 MoE |
|---|---|
| Repository 🤗 | https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE |
| Model Size | 60.8B |
| Required VRAM | 121.2 GB |
| Updated | 2025-09-23 |
| Maintainer | TomGrc |
| Model Type | mixtral |
| Model Files |  |
| Supported Languages | en |
| Model Architecture | MixtralForCausalLM |
| License | mit |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.36.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | <s> |
| Vocabulary Size | 64000 |
| Torch Data Type | bfloat16 |
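Given the metadata above (MixtralForCausalLM architecture, LlamaTokenizer, bfloat16 weights, 32768-token context), a minimal loading sketch with the `transformers` library might look like the following. The prompt, generation settings, and hardware strategy (`device_map="auto"` to shard the roughly 121 GB of bf16 weights) are illustrative assumptions, not part of the model card.

```python
# Minimal sketch of loading FusionNet 34Bx2 MoE with Hugging Face transformers.
# Settings are inferred from the metadata table above; adjust for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TomGrc/FusionNet_34Bx2_MoE"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the reported Torch Data Type
    device_map="auto",           # shard/offload the ~121 GB of weights automatically
)

prompt = "Explain what a mixture-of-experts model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```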
Quantized versions of FusionNet 34Bx2 MoE:

| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| FusionNet 34Bx2 MoE GGUF | 5 | 421 | 22 GB | 
| FusionNet 34Bx2 MoE AWQ | 5 | 10 | 32 GB | 
| FusionNet 34Bx2 MoE GPTQ | 2 | 9 | 31 GB | 
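For the GGUF quantization listed above (which fits in roughly 22 GB of VRAM), a hedged sketch with `llama-cpp-python` is shown below. The repository, file name, and quantization level are placeholders; check the quantized model's own card for the actual file names and recommended settings.

```python
# Hypothetical sketch of running a GGUF build of FusionNet 34Bx2 MoE
# with llama-cpp-python. The model_path is a placeholder filename.
from llama_cpp import Llama

llm = Llama(
    model_path="fusionnet_34bx2_moe.Q2_K.gguf",  # assumed file name / quant level
    n_ctx=32768,      # context length reported for the base model
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
)

out = llm("Q: What is a mixture-of-experts model?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```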
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| Mixtral 34Bx2 MoE 60B | 195K / 121.9 GB | 9915 | 111 | 
| Yi 34Bx2 MoE 60B DPO | 195K / 121.8 GB | 9734 | 3 | 
| Bagel Hermes 2x34B | 195K / 121.9 GB | 67 | 16 | 
| Yi 34Bx2 MoE 200K | 195K / 121.9 GB | 9768 | 2 | 
| Yi 34Bx2 MoE 60B | 195K / 121.9 GB | 9792 | 65 | 
| ...34Bx2 MoE V0.1 Full Linear DPO | 195K / 121.8 GB | 5 | 2 | 
| FusionNet 34Bx2 MoE V0.1 | 195K / 121.2 GB | 7 | 8 | 
| ... Cloudyu Mixtral 34Bx2 MoE 60B | 195K / 121.8 GB | 8 | 0 | 
| ...DPO TomGrc FusionNet 34Bx2 MoE | 32K / 121.8 GB | 5 | 4 | 
| Nous Hermes 2 MoE 2x34B | 4K / 121.9 GB | 1750 | 0 | 