Vlsi MoE Ffn Merged is an open-source language model by vxkyyy. Key specs: 32.8B parameters, 65.5 GB required VRAM, 32K context, Mixture-of-Experts (MoE) architecture, LLM Explorer score 0.29.
| LLM Name | Vlsi MoE Ffn Merged |
|---|---|
| Repository 🤗 | https://huggingface.co/vxkyyy/vlsi-moe-ffn-merged |
| Model Size | 32.8b |
| Required VRAM | 65.5 GB |
| Updated | 2026-05-07 |
| Maintainer | vxkyyy |
| Model Type | qwen2 |
| Model Architecture | Qwen2ForCausalLM |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 5.8.0 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <|endoftext|> |
| Vocabulary Size | 152064 |
| Errors Handling (tokenizer) | replace |
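Given the `Qwen2ForCausalLM` architecture and `Qwen2Tokenizer` listed above, the checkpoint should load through the standard Hugging Face `transformers` auto classes. A minimal, hypothetical loading sketch follows; the `torch_dtype` and `device_map` choices are assumptions (bf16 weights sharded across available GPUs), not taken from the model card:

```python
# Hypothetical loading sketch for vxkyyy/vlsi-moe-ffn-merged.
# Repository name is from the model card; everything else is an assumption.
REPO = "vxkyyy/vlsi-moe-ffn-merged"


def load_model(repo: str = REPO):
    """Load the merged MoE checkpoint with Hugging Face transformers.

    Needs roughly 65.5 GB of accelerator memory in bf16, per the card;
    device_map="auto" lets accelerate shard the weights across devices.
    Imports live inside the function so the sketch stays importable
    without torch/transformers installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(
        repo,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    return tokenizer, model
```

Note that the full 32K context (`model_max_length` 32768) raises the KV-cache footprint well beyond the weight memory alone.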
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| AM Thinking V1 | 128K / 65.7 GB | 203 | 205 |
| Magnum Hamanasu QwQ V2 R1 | 128K / 65.8 GB | 12 | 0 |
| Magnum Hamanasu QwQ V2 R1 | 128K / 65.8 GB | 5 | 0 |
| INTELLECT 2 | 40K / 131.5 GB | 43 | 205 |
| FluentlyLM Prinum | 32K / 65.8 GB | 1436 | 31 |
| PathFinderAi3.0 | 32K / 65.8 GB | 4 | 1 |
| ...py Roleplay 1739875661 0711603 | 32K / 65.8 GB | 54 | 0 |
| ...ppy Roleplay 1739875662 172876 | 32K / 65.8 GB | 5 | 0 |
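The RAM figures in the table above track parameter count times bytes per weight. A quick back-of-the-envelope estimate (assuming weights dominate and 2 bytes per parameter for bf16/fp16; activations and KV cache add more on top):

```python
# Rough VRAM estimate: parameter count (billions) x bytes per weight,
# reported in decimal GB. Assumes weight memory dominates.
def vram_estimate_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Estimate weight memory in GB (bf16/fp16 = 2 bytes per parameter)."""
    return params_billion * bytes_per_param


# 32.8B parameters in bf16 -> ~65.6 GB, consistent with the listed 65.5 GB.
print(round(vram_estimate_gb(32.8), 1))
```

The same arithmetic explains INTELLECT 2's 131.5 GB entry if its weights are stored at 4 bytes per parameter rather than 2.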