| Field | Value |
|---|---|
| LLM Name | WestLakeMultiverse 12B MoE |
| Repository 🤗 | https://huggingface.co/allknowingroger/WestLakeMultiverse-12B-MoE |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 25.8 GB |
| Updated | 2025-09-23 |
| Maintainer | allknowingroger |
| Model Type | mixtral |
| Model Files | |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.39.3 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<s>` |
| Vocabulary Size | 32000 |
| Torch Data Type | bfloat16 |
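
Given the metadata above (MixtralForCausalLM architecture, bfloat16 weights, 32768-token context), the model should load through the standard `transformers` API. The snippet below is a minimal sketch, assuming a CUDA GPU with roughly 26 GB of free VRAM, the `accelerate` package installed, and a `transformers` version compatible with the listed 4.39.3; it is not an official usage example from the maintainer.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "allknowingroger/WestLakeMultiverse-12B-MoE"

# Tokenizer class is LlamaTokenizer (vocab size 32000); AutoTokenizer resolves it.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Load in bfloat16, matching the Torch Data Type above (~25.8 GB of VRAM required).
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate`; places layers on available devices
)

prompt = "Explain what a mixture-of-experts model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Context length and model max length are both 32768, so long prompts fit.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```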
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| Multimaster 7B V6 | 32K / 142.5 GB | 9715 | 1 | 
| Multilingual Mistral | 32K / 93.5 GB | 1735 | 2 | 
| MultiverseBuddy 15B MoE | 32K / 25.8 GB | 9 | 0 | 
| Mini Mixtral V0.2 | 32K / 25.8 GB | 6 | 4 | 
| Lumina 2 | 32K / 37.1 GB | 5 | 0 | 
| OpenMistral MoE | 32K / 48.3 GB | 1218 | 0 | 
| Laserxtral | 32K / 48.3 GB | 810 | 78 | 
| Mixtral 7B 8expert | 32K / 93.6 GB | 1059 | 264 | 
| Merged Model MoE | 32K / 53.3 GB | 4 | 1 | 
| RogerWizard 12B MoE | 32K / 25.8 GB | 4 | 1 | 
Have you tried this model? Rate its performance. Your feedback greatly assists the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!