| Property | Value |
|---|---|
| LLM Name | WizardLM 2 8x22B Beige 2.4bpw H6 EXL2 |
| Repository 🤗 | https://huggingface.co/LoneStriker/WizardLM-2-8x22B-Beige-2.4bpw-h6-exl2 |
| Base Model(s) | |
| Merged Model | Yes |
| Required VRAM | 42.7 GB |
| Updated | 2025-09-21 |
| Maintainer | LoneStriker |
| Model Type | mixtral |
| Instruction-Based | Yes |
| Model Files | |
| Quantization Type | exl2 |
| Model Architecture | MixtralForCausalLM |
| Context Length | 65536 |
| Model Max Length | 65536 |
| Transformers Version | 4.41.2 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<unk>` |
| Vocabulary Size | 32000 |
| Torch Data Type | bfloat16 |
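
Because this is an EXL2 quantization, it is loaded with the exllamav2 runtime rather than plain `transformers`. The snippet below is a minimal sketch, assuming the `exllamav2` and `huggingface_hub` Python packages are installed and that enough VRAM is available (roughly 42.7 GB per the table above); the prompt and sampling settings are illustrative placeholders, not recommendations.

```python
# Minimal sketch: download and run this EXL2 quant with exllamav2.
# Assumes `pip install exllamav2 huggingface_hub`.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Fetch the quantized weights locally (~42.7 GB of VRAM needed to load).
model_dir = snapshot_download("LoneStriker/WizardLM-2-8x22B-Beige-2.4bpw-h6-exl2")

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # lazy cache enables autosplit loading
model.load_autosplit(cache)                # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7                 # placeholder sampling value

print(generator.generate_simple(
    "Explain mixture-of-experts briefly.",  # placeholder prompt
    settings,
    num_tokens=128,
))
```
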
| Best Alternatives | Context / VRAM | Downloads | Likes |
|---|---|---|---|
| ...M 2 8x22B Beige 3.0bpw H6 EXL2 | 64K / 53.2 GB | 6 | 0 | 
| ...M 2 8x22B Beige 5.0bpw H6 EXL2 | 64K / 88.5 GB | 6 | 0 | 
| ...M 2 8x22B Beige 4.0bpw H6 EXL2 | 64K / 70.8 GB | 5 | 0 | 
| ...B Instruct V0.1 8.0bpw H8 EXL2 | 64K / 120.2 GB | 10 | 1 | 
| ...8x22b Instruct Oh EXL2 2.25bpw | 64K / 40.1 GB | 5 | 1 | 
| ...eryTour V2 8x7B 4.5bpw H6 EXL2 | 32K / 26.5 GB | 7 | 2 | 
| ...it MoE 2bitgs8 Metaoffload HQQ | 32K / 24.1 GB | 15 | 19 | 
| ... 4bit MoE 3bit Metaoffload HQQ | 32K / 22.4 GB | 11 | 13 | 
| ...hin 2.7 Mixtral 8x7b 8bpw EXL2 | 32K / 46.8 GB | 7 | 2 | 
| ... 4bit MoE 2bit Metaoffload HQQ | 32K / 18.3 GB | 11 | 16 | 