FusionNet 34Bx2 MoE AWQ by TheBloke


FusionNet 34Bx2 MoE AWQ is an open-source language model published by TheBloke, an AWQ quantization of TomGrc's FusionNet 34Bx2 MoE. Features: 60.8B parameters, 32.8 GB VRAM required, 32K context, MIT license, Mixture of Experts (MoE), quantized, LLM Explorer Score: 0.12.

Tags: 4-bit, AWQ, base model (quantized): TomGrc/FusionNet_34Bx2_MoE, conversational, en, mixtral, MoE, quantized, region: us, safetensors, sharded, TensorFlow

FusionNet 34Bx2 MoE AWQ Benchmarks

Benchmark scores (shown as nn.n%) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
FusionNet 34Bx2 MoE AWQ (TheBloke/FusionNet_34Bx2_MoE-AWQ)

FusionNet 34Bx2 MoE AWQ Parameters and Internals

Model Type: mixtral
Use Cases:
  Areas: text generation
Additional Notes: This model is tuned using the MoE method, which significantly boosts performance. For AutoAWQ inference, install AutoAWQ 0.1.8 or later for compatibility (see the loading sketch below).
Supported Languages: en (fine-tuned)
Training Details:
  Data Sources: VMware Open Instruct
  Methodology: fine-tuned using the MoE method
  Context Length: 8192
  Model Architecture: FusionNet with 60.8B parameters, utilizing the Mixture of Experts (MoE) method
Input/Output:
  Input Format: [INST] <<SYS>> {system_message} <</SYS>> {prompt} [/INST]
  Accepted Modalities: text
  Output Format: text generation
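The note above calls for AutoAWQ 0.1.8 or later. Below is a minimal sketch of loading the checkpoint with AutoAWQ and applying the prompt template from the Input Format field; the system message, sampling settings, and the fuse_layers choice are illustrative assumptions, not values from the model card.

```python
# Sketch: load the AWQ checkpoint with AutoAWQ (>= 0.1.8) and apply the
# [INST] <<SYS>> ... <</SYS>> ... [/INST] template described above.
# System message and sampling settings are illustrative assumptions.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "TheBloke/FusionNet_34Bx2_MoE-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_quantized(
    model_id,
    fuse_layers=False,   # fused layers for MoE models may not be supported in older AutoAWQ releases
    safetensors=True,
)

system_message = "You are a helpful assistant."  # placeholder
prompt = "Explain the Mixture of Experts approach in one paragraph."

text = f"[INST] <<SYS>>\n{system_message}\n<</SYS>>\n{prompt} [/INST]"
input_ids = tokenizer(text, return_tensors="pt").input_ids.to("cuda")

output_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.7,
    max_new_tokens=256,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Per the table below, the quantized weights still require roughly 32.8 GB of VRAM.
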
LLM Name: FusionNet 34Bx2 MoE AWQ
Repository (Hugging Face): https://huggingface.co/TheBloke/FusionNet_34Bx2_MoE-AWQ
Model Name: FusionNet 34Bx2 MoE
Model Creator: Suqin Zhang
Base Model(s): FusionNet 34Bx2 MoE (TomGrc/FusionNet_34Bx2_MoE)
Model Size: 60.8B
Required VRAM: 32.8 GB
Updated: 2026-03-29
Maintainer: TheBloke
Model Type: mixtral
Model Files: 9.9 GB (1 of 4), 9.9 GB (2 of 4), 9.9 GB (3 of 4), 3.1 GB (4 of 4)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MixtralForCausalLM
License: mit
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 64000
Torch Data Type: float16
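
Because the checkpoint is AWQ-quantized with a 32768-token context, it can also typically be served with an AWQ-aware engine such as vLLM. The snippet below is a sketch under that assumption; vLLM support is not stated on this page, and the GPU count and sampling values are placeholders.

```python
# Sketch: serve the AWQ checkpoint with vLLM, reusing the quantization type,
# dtype, and context length listed above. GPU count and sampling values are
# illustrative assumptions, not values from the model card.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/FusionNet_34Bx2_MoE-AWQ",
    quantization="awq",       # Quantization Type: awq
    dtype="float16",          # Torch Data Type: float16
    max_model_len=32768,      # Context Length / Model Max Length: 32768
    tensor_parallel_size=1,   # assumption: one GPU with >= 32.8 GB of VRAM
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)
prompts = [
    "[INST] <<SYS>>\nYou are a helpful assistant.\n<</SYS>>\nSummarize what AWQ quantization does. [/INST]"
]
outputs = llm.generate(prompts, sampling)
print(outputs[0].outputs[0].text)
```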

Best Alternatives to FusionNet 34Bx2 MoE AWQ

Best Alternatives                      | Context / RAM   | Downloads | Likes
FusionNet 34Bx2 MoE V0.1 DPO F16       | 195K / 121.8 GB | 8509      | 15
60B MoE Coder V3                       | 195K / 121.8 GB | 80        | 3
Mixtral 34Bx2 MoE 60B                  | 195K / 121.9 GB | 8683      | 111
Yi 34Bx2 MoE 60B DPO                   | 195K / 121.8 GB | 8225      | 3
Bagel Hermes 2x34B                     | 195K / 121.9 GB | 105       | 16
Yi 34Bx2 MoE 200K                      | 195K / 121.9 GB | 8234      | 2
Yi 34Bx2 MoE 60B                       | 195K / 121.9 GB | 8155      | 64
...34Bx2 MoE V0.1 Full Linear DPO      | 195K / 121.8 GB | 106       | 2
FusionNet 34Bx2 MoE V0.1               | 195K / 121.2 GB | 59        | 8
... Cloudyu Mixtral 34Bx2 MoE 60B      | 195K / 121.8 GB | 84        | 0
Note: green Score (e.g. "73.2") means that the model is better than TheBloke/FusionNet_34Bx2_MoE-AWQ.


Original data from HuggingFace, OpenCompass and various public git repos.