Qwen3 Next 80B A3B Instruct Mxfp4 Mlx by nightmedia


Tags: 4-bit · Base model (quantized): qwen/qwen... · Base model: qwen/qwen3-next-80b... · Conversational · Instruct · Mlx · Qwen3 next · Region:us · Safetensors · Sharded · Tensorflow

Qwen3 Next 80B A3B Instruct Mxfp4 Mlx Benchmarks

Scores are reported as a percentage (nn.n%) indicating how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Qwen3 Next 80B A3B Instruct Mxfp4 Mlx Parameters and Internals

LLM Name: Qwen3 Next 80B A3B Instruct Mxfp4 Mlx
Repository: https://huggingface.co/nightmedia/Qwen3-Next-80B-A3B-Instruct-mxfp4-mlx
Base Model(s): Qwen3 Next 80B A3B Instruct (Qwen/Qwen3-Next-80B-A3B-Instruct)
Model Size: 80b
Required VRAM: 42 GB
Updated: 2025-10-23
Maintainer: nightmedia
Model Type: qwen3_next
Instruction-Based: Yes
Model Files: 5.1 GB (1-of-9), 5.2 GB (2-of-9), 5.2 GB (3-of-9), 5.2 GB (4-of-9), 5.2 GB (5-of-9), 5.2 GB (6-of-9), 5.2 GB (7-of-9), 5.2 GB (8-of-9), 0.5 GB (9-of-9)
Model Architecture: Qwen3NextForCausalLM
License: apache-2.0
Context Length: 262144
Model Max Length: 262144
Transformers Version: 4.57.0.dev0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
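
Since this is an MLX-format repository, the usual way to run it is with the mlx-lm package on Apple Silicon. The snippet below is a minimal sketch, assuming mlx-lm is installed (pip install mlx-lm), the release is recent enough to support the qwen3_next architecture, and the machine has enough unified memory for the roughly 42 GB of mxfp4 weights; the prompt text and max_tokens value are illustrative and not part of the model card.

    # Minimal usage sketch (assumptions noted above).
    from mlx_lm import load, generate

    # Download and load the quantized weights and tokenizer from the Hub.
    model, tokenizer = load("nightmedia/Qwen3-Next-80B-A3B-Instruct-mxfp4-mlx")

    # Format a chat turn with the model's own template (Qwen2Tokenizer).
    messages = [{"role": "user", "content": "Summarize the trade-offs of MXFP4 quantization."}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    # Generate a short reply; max_tokens is an arbitrary example value.
    text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
    print(text)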

Best Alternatives to Qwen3 Next 80B A3B Instruct Mxfp4 Mlx

Best Alternatives | Context / RAM | Downloads | Likes
Qwen3 Next 80B A3B Instruct | 256K / 162.7 GB | 2582156 | 841
Qwen3 Next 80B A3B Instruct | 256K / 162.7 GB | 31267 | 3
...t 80B A3B Instruct FP8 Dynamic | 256K / 80.5 GB | 23692 | 4
... Instruct Int4 Mixed AutoRound | 256K / 43.1 GB | 7563 | 14
...0B A3B Instruct Int4 AutoRound | 256K / 42.3 GB | 1126 | 7
...t 80B A3B Instruct Qx86 Hi Mlx | 256K / 73.5 GB | 127 | 2
Qwen3 Next MoE | 256K / 0 GB | 55 | 2
...Next 80B A3B Instruct Bnb 4bit | 256K / 42.1 GB | 17459 | 12
...en3 Next 80B A3B Instruct 4bit | 256K / 44.9 GB | 6236 | 17
...3 Next 80B A3B Instruct Q2 Mlx | 256K / 24.9 GB | 3026 | 5
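
As a rough guide to how the Context / RAM column maps onto hardware, the sketch below picks the largest listed variant that fits a given memory budget. The variant labels and the pick_variant helper are illustrative shorthand, not from the table itself; the GB figures are the on-disk weight sizes listed above, and real memory use will be higher once the KV cache and activations are counted.

    # Illustrative only: choose the largest listed variant that fits a memory budget.
    # Sizes are the on-disk weight sizes from the table above (GB).
    VARIANTS_GB = {
        "original bf16": 162.7,
        "FP8 Dynamic": 80.5,
        "qx86-hi MLX": 73.5,
        "4-bit MLX": 44.9,
        "int4 mixed AutoRound": 43.1,
        "int4 AutoRound": 42.3,
        "bnb 4-bit": 42.1,
        "mxfp4 MLX (this model)": 42.0,
        "Q2 MLX": 24.9,
    }

    def pick_variant(budget_gb: float) -> str | None:
        """Return the largest variant whose weights fit within budget_gb, else None."""
        fitting = {name: gb for name, gb in VARIANTS_GB.items() if gb <= budget_gb}
        return max(fitting, key=fitting.get) if fitting else None

    for budget in (32, 48, 96, 192):
        print(f"{budget} GB budget -> {pick_variant(budget)}")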

Rank the Qwen3 Next 80B A3B Instruct Mxfp4 Mlx Capabilities

Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124