Qwen3 Next 80B A3B Thinking 4bit by mlx-community


Tags: 4-bit, 4bit, Base model: quantized:qwen/qwen..., Base model: qwen/qwen3-next-80b..., Conversational, Mlx, Quantized, Qwen3 next, Region:us, Safetensors, Sharded, Tensorflow

Qwen3 Next 80B A3B Thinking 4bit Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen3 Next 80B A3B Thinking 4bit (mlx-community/Qwen3-Next-80B-A3B-Thinking-4bit)

Qwen3 Next 80B A3B Thinking 4bit Parameters and Internals

LLM Name: Qwen3 Next 80B A3B Thinking 4bit
Repository 🤗: https://huggingface.co/mlx-community/Qwen3-Next-80B-A3B-Thinking-4bit
Base Model(s): Qwen3 Next 80B A3B Thinking (Qwen/Qwen3-Next-80B-A3B-Thinking)
Model Size: 80b
Required VRAM: 44.9 GB
Updated: 2025-09-19
Maintainer: mlx-community
Model Type: qwen3_next
Model Files: 5.1 GB (1-of-9), 5.3 GB (2-of-9), 5.2 GB (3-of-9), 5.3 GB (4-of-9), 5.3 GB (5-of-9), 5.2 GB (6-of-9), 5.3 GB (7-of-9), 5.3 GB (8-of-9), 2.9 GB (9-of-9)
Quantization Type: 4bit
Model Architecture: Qwen3NextForCausalLM
License: apache-2.0
Context Length: 262144
Model Max Length: 262144
Transformers Version: 4.57.0.dev0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
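
These sharded 4-bit weights are in MLX format, so they can be loaded straight from the repository with the mlx-lm package on Apple silicon. The following is a minimal sketch, assuming a recent mlx-lm release with qwen3_next support (`pip install -U mlx-lm`) and roughly the Required VRAM listed above available as unified memory:

```python
# Minimal sketch: load the 4-bit MLX weights and run a short generation.
# Assumes a recent mlx-lm build with qwen3_next support on an Apple-silicon Mac
# with ~45 GB of unified memory free.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-Next-80B-A3B-Thinking-4bit")

# Format the prompt through the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain what 4-bit quantization trades off."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Thinking variants emit a reasoning trace before the answer, so allow plenty of tokens.
text = generate(model, tokenizer, prompt=prompt, max_tokens=1024, verbose=True)
print(text)
```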

Best Alternatives to Qwen3 Next 80B A3B Thinking 4bit

Best Alternatives                    Context / RAM      Downloads   Likes
...Next 80B A3B Instruct Bnb 4bit    256K / 42.1 GB          5936      12
...en3 Next 80B A3B Instruct 4bit    256K / 44.9 GB          5154      13
...3 Next 80B A3B Instruct Q2 Mlx    256K / 24.9 GB          2781       5
...en3 Next 80B A3B Instruct 8bit    256K / 83.9 GB          1228       2
Qwen3 Next 80B A3B Instruct          256K / 162.7 GB       407791     663
Qwen3 Next 80B A3B Thinking          256K / 162.7 GB       247977     369
Qwen3 Next 80B A3B Instruct          256K / 162.7 GB        28247       2
...Next 80B A3B Instruct AWQ 4bit    256K / 47.5 GB         45654      23
...Next 80B A3B Thinking AWQ 4bit    256K / 47.5 GB         30622      11
... Thinking Int4 Mixed AutoRound    256K / 43.1 GB         16922       5
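Download and like counts such as those above drift over time; they can be refreshed directly from the Hugging Face Hub. Here is a minimal sketch, assuming the huggingface_hub package is installed; the two Qwen repo ids are included only as examples of alternatives from the table:

```python
# Minimal sketch: pull live download/like counts and on-disk sizes for this model
# and a couple of alternatives from the Hugging Face Hub.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import HfApi

api = HfApi()
repo_ids = [
    "mlx-community/Qwen3-Next-80B-A3B-Thinking-4bit",
    "Qwen/Qwen3-Next-80B-A3B-Thinking",   # example alternative
    "Qwen/Qwen3-Next-80B-A3B-Instruct",   # example alternative
]

for repo_id in repo_ids:
    info = api.model_info(repo_id, files_metadata=True)
    total_gb = sum(f.size or 0 for f in info.siblings) / 1e9
    print(f"{repo_id}: {info.downloads} downloads, {info.likes} likes, ~{total_gb:.1f} GB on disk")
```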

Rank the Qwen3 Next 80B A3B Thinking 4bit Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124