Qwen3 Next 80B A3B Thinking is an open-source language model from Qwen, released under the Apache-2.0 license. Key specs: 80B parameters, 256K context, 162.7 GB VRAM at full precision, LLM Explorer Score 0.36.
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| ...Next 80B A3B Thinking AWQ 4bit | 23 | 125375 | 49 GB |
| ...Next 80B A3B Thinking AWQ 4bit | 11 | 36149 | 47 GB |
| ...en3 Next 80B A3B Thinking 8bit | 2 | 10453 | 83 GB |
| ...en3 Next 80B A3B Thinking 4bit | 2 | 1512 | 44 GB |
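The VRAM figures in the table track quantization bit-width roughly linearly. A minimal back-of-envelope sketch of that relationship (weight memory only; the listed figures run a few GB higher, likely due to quantization scales, KV cache, and layers kept at higher precision):

```python
def estimate_vram_gb(n_params: float, bits_per_param: float) -> float:
    """Estimate weight memory in decimal GB: parameters x bytes per parameter."""
    return n_params * bits_per_param / 8 / 1e9

# 80B parameters at common precisions
for label, bits in [("BF16", 16), ("FP8", 8), ("4bit", 4)]:
    print(f"{label}: ~{estimate_vram_gb(80e9, bits):.0f} GB")
# BF16: ~160 GB (listed: 162.7 GB)
# FP8:  ~80 GB  (listed: 81.8 GB)
# 4bit: ~40 GB  (listed: 44-49 GB)
```

The small gap between the estimate and the listed numbers is expected overhead; the estimate is still a useful first filter when matching a quantized variant to available GPU memory.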
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen3 Next 80B A3B Instruct | 256K / 162.7 GB | 1398721 | 855 |
| Qwen3 Next 80B A3B Instruct | 256K / 162.7 GB | 3126 | 73 |
| Qwen3 Next 80B A3B Thinking | 256K / 162.7 GB | 36 | 7 |
| ...n3 Next 80B A3B Thinking NVFP4 | 256K / 50.7 GB | 111049 | 53 |
| ...wen3 Next 80B A3B Instruct FP8 | 256K / 81.8 GB | 190600 | 85 |
| ...n3 Next 80B A3B Instruct NVFP4 | 256K / 50.7 GB | 20206 | 37 |
| ...wen3 Next 80B A3B Thinking FP8 | 256K / 81.8 GB | 24473 | 52 |
| Qwen3 Next MoE | 256K / 0 GB | 15643 | 4 |
| ... Instruct Int4 Mixed AutoRound | 256K / 43.1 GB | 129 | 23 |
| ...ext 80B A3B Instruct Mxfp4 Mlx | 256K / 42 GB | 228 | 8 |