NVIDIA Nemotron 3 Super 120B A12B 5bit by mlx-community


NVIDIA Nemotron 3 Super 120B A12B 5bit is an open-source language model published by mlx-community. Features: 120B-parameter LLM, required VRAM: 83.1 GB, context length: 256K, license: other, quantized (5-bit), LLM Explorer Score: 0.31.

Tags: 5-bit, base model: nvidia/nvidia-nemot..., base model (quantized): nvidia/nv..., conversational, custom code, dataset: nvidia/nemotron-post-t..., dataset: nvidia/nemotron-pre-tr..., de, en, es, fr, it, ja, latent-moe, mlx, mtp, nemotron-3, nemotron h, nvidia, pytorch, quantized, region:us, safetensors, sharded, tensorflow, zh

NVIDIA Nemotron 3 Super 120B A12B 5bit Benchmarks

nn.n% — how the model scores against the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

NVIDIA Nemotron 3 Super 120B A12B 5bit Parameters and Internals

LLM Name: NVIDIA Nemotron 3 Super 120B A12B 5bit
Repository: 🤗 https://huggingface.co/mlx-community/NVIDIA-Nemotron-3-Super-120B-A12B-5bit
Base Model(s): ...emotron 3 Super 120B A12B BF16 (nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16)
Model Size: 120b
Required VRAM: 83.1 GB
Updated: 2026-05-07
Maintainer: mlx-community
Model Type: nemotron_h
Model Files: 17 shards, 83.1 GB total (shard 1: 4.5 GB; shards 2, 4, 6, 8, 10, 12, 14, 16: 5.1 GB each; shards 3, 5, 7, 9, 11, 13, 15: 5.2 GB each; shard 17: 1.4 GB)
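As a quick cross-check, the shard sizes listed above add up exactly to the stated 83.1 GB VRAM requirement. A minimal sketch:

```python
# Sanity check (illustrative): the 17 model-file shards listed above
# should sum to the stated 83.1 GB VRAM requirement.
# Shard 1 is 4.5 GB, shards 2-16 alternate 5.1 / 5.2 GB, shard 17 is 1.4 GB.
shards_gb = [4.5] + [5.1, 5.2] * 7 + [5.1, 1.4]
total = round(sum(shards_gb), 1)
print(f"{len(shards_gb)} shards, {total} GB total")  # 17 shards, 83.1 GB total
```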
Supported Languages: en, fr, es, it, de, ja, zh
Quantization Type: 5bit
Model Architecture: NemotronHForCausalLM
License: other
Context Length: 262144
Model Max Length: 262144
Transformers Version: 4.57.6
Tokenizer Class: TokenizersBackend
Padding Token: <|im_end|>
Vocabulary Size: 131072
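Since this is an MLX-format quantization, it is typically run with the mlx-lm package. The sketch below is an assumption-laden example, not an official recipe: it assumes `pip install mlx-lm` on an Apple-silicon Mac with enough unified memory for the ~83.1 GB of weights, and the prompt text is purely illustrative.

```python
# Sketch: running this 5-bit MLX quantization with mlx-lm (assumes an
# Apple-silicon Mac with sufficient unified memory and `pip install mlx-lm`).
REPO = "mlx-community/NVIDIA-Nemotron-3-Super-120B-A12B-5bit"

def run_demo(prompt_text="Explain 5-bit quantization in one paragraph."):
    # Imported lazily: mlx-lm requires Apple silicon, so keep the heavy
    # dependency out of module import time.
    from mlx_lm import load, generate
    model, tokenizer = load(REPO)  # downloads all 17 shards on first use
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt_text}],
        add_generation_prompt=True,
    )
    return generate(model, tokenizer, prompt=prompt, max_tokens=256)
```

Calling `run_demo()` triggers the download and generation; the 256K context length (262144 tokens) is available without extra configuration.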

Best Alternatives to NVIDIA Nemotron 3 Super 120B A12B 5bit

| Best Alternatives | Context | RAM | Downloads / Likes |
| --- | --- | --- | --- |
| ...on 3 Super 120B A12B Base BF16 | 1024K | 209.5 GB | 2267830 |
| ...emotron 3 Super 120B A12B BF16 | 256K | 194.6 GB | 729273344 |
| ...motron 3 Super 120B A12B NVFP4 | 256K | 80.3 GB | 894238290 |
| ...Nemotron 3 Super 120B A12B FP8 | 256K | 128.4 GB | 370440244 |
| ...Nemotron 3 Super 120B A12B FP8 | 256K | 128.4 GB | 14869 |
| ...DIA Nemotron 3 Super 120B A12B | 256K | 214.5 GB | 17233 |
| ...motron 3 Super 120B A12B NVFP4 | 256K | 80.3 GB | 4608222 |
| ... Super 64B A12B Math REAP BF16 | 256K | 128.6 GB | 6571 |
| ...uper 120B A12B BF16 Heretic V2 | 256K | 241.4 GB | 21903 |
| ...20B A12B BF16 REAP 50pct Draft | 256K | 128.5 GB | 2046 |
Note: a green Score (e.g. "73.2") means the model outperforms mlx-community/NVIDIA-Nemotron-3-Super-120B-A12B-5bit.

Rank the NVIDIA Nemotron 3 Super 120B A12B 5bit Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.