NVIDIA Nemotron 3 Super 120B A12B BF16 REAP 50pct Draft by 0xSero


NVIDIA Nemotron 3 Super 120B A12B BF16 REAP 50pct Draft is an open-source language model published by 0xSero. Key figures: 120B-class LLM, required VRAM 128.5 GB, context length 256K, license: other, LLM Explorer Score: 0.31.
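The "Draft" in the name and the Draft tag indicate the intended role as a draft model for speculative decoding: a cheaper model proposes a few tokens, and the full-size target model verifies them, accepting the agreeing prefix. A minimal greedy sketch of that loop, using toy stand-in models rather than the real Nemotron weights (all names below are illustrative, not an actual inference API):

```python
# Hedged sketch of greedy speculative decoding. `target_next` and
# `draft_next` stand in for the large target model and the pruned
# draft model; each maps a token prefix to its next token.

def speculative_step(target_next, draft_next, prefix, k=4):
    """Propose k tokens with the draft model, keep the prefix the
    target model agrees with, then append one target token."""
    # Draft phase: propose k tokens autoregressively (cheap).
    proposed = []
    ctx = list(prefix)
    for _ in range(k):
        tok = draft_next(ctx)
        proposed.append(tok)
        ctx.append(tok)
    # Verify phase: the target checks each proposal in order.
    accepted = []
    ctx = list(prefix)
    for tok in proposed:
        if target_next(ctx) == tok:  # target agrees: keep the token
            accepted.append(tok)
            ctx.append(tok)
        else:
            break                    # first disagreement stops acceptance
    # The target always contributes one token past the accepted prefix,
    # so each step yields at least one guaranteed-correct token.
    accepted.append(target_next(ctx))
    return accepted

# Toy deterministic "models": both continue a +1 sequence, but the
# draft goes wrong once tokens exceed 5, so acceptance stops there.
target = lambda ctx: ctx[-1] + 1
draft = lambda ctx: ctx[-1] + 1 if ctx[-1] < 5 else 0

print(speculative_step(target, draft, [1, 2, 3], k=4))  # -> [4, 5, 6]
```

A heavily pruned variant like this 50% REAP checkpoint is attractive as a draft precisely because acceptance only needs the draft to match the target often, not always.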

Tags: Arxiv:2510.13999 · Base model:finetune:nvidia/nvi... · Base model:nvidia/nvidia-nemot... · Conversational · Custom code · Draft · Endpoints compatible · Latent-moe · Nemotron h · Pruned · Reap · Region:us · Safetensors · Sharded · Sparse-moe · Tensorflow

NVIDIA Nemotron 3 Super 120B A12B BF16 REAP 50pct Draft Benchmarks

NVIDIA Nemotron 3 Super 120B A12B BF16 REAP 50pct Draft (0xSero/NVIDIA-Nemotron-3-Super-120B-A12B-BF16-REAP-50pct-draft)

NVIDIA Nemotron 3 Super 120B A12B BF16 REAP 50pct Draft Parameters and Internals

LLM Name: NVIDIA Nemotron 3 Super 120B A12B BF16 REAP 50pct Draft
Repository: https://huggingface.co/0xSero/NVIDIA-Nemotron-3-Super-120B-A12B-BF16-REAP-50pct-draft
Base Model(s): nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16
Model Size: 120b
Required VRAM: 128.5 GB
Updated: 2026-03-28
Maintainer: 0xSero
Model Type: nemotron_h
Model Files: 50.0 GB (1-of-3), 50.0 GB (2-of-3), 28.5 GB (3-of-3)
Model Architecture: NemotronHForCausalLM
License: other
Context Length: 262144
Model Max Length: 262144
Transformers Version: 5.2.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|im_end|>
Vocabulary Size: 131072
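The listed figures are internally consistent: the three safetensors shards sum to the required VRAM, and at BF16 (2 bytes per weight) 128.5 GB corresponds to roughly 64B parameters, in line with pruning about half of the 120B base. A quick sanity check, with the GB figures copied from the listing above:

```python
# Sanity-check the listing's memory figures (values from the table above).
shards_gb = [50.0, 50.0, 28.5]      # the three safetensors shards
required_vram_gb = sum(shards_gb)   # should match "Required VRAM: 128.5 GB"

bytes_per_param = 2                 # BF16 = 16 bits = 2 bytes per weight
# GB divided by bytes-per-param gives billions of parameters (1 GB ~ 1e9 B).
params_billion = required_vram_gb / bytes_per_param
# ~64B weights, consistent with ~50% REAP pruning of a 120B-parameter MoE.

print(required_vram_gb, params_billion)  # -> 128.5 64.25
```

This is a back-of-the-envelope check only; actual runtime VRAM use is higher once the KV cache and activation buffers are included.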

Best Alternatives to NVIDIA Nemotron 3 Super 120B A12B BF16 REAP 50pct Draft

Best Alternatives | Context / RAM | Downloads | Likes
...on 3 Super 120B A12B Base BF16 | 1024K / 209.5 GB | 10442 | 25
...motron 3 Super 120B A12B NVFP4 | 256K / 80.3 GB | 1058108 | 219
...Nemotron 3 Super 120B A12B FP8 | 256K / 128.4 GB | 873745 | 202
...emotron 3 Super 120B A12B BF16 | 256K / 194.6 GB | 163028 | 308
...motron 3 Super 120B A12B NVFP4 | 256K / 80.3 GB | 46082 | 22
...Nemotron 3 Super 120B A12B FP8 | 256K / 128.4 GB | 2514 | 9
...uper 120B A12B BF16 Heretic V2 | 256K / 241.4 GB | 248 | 0
Note: green Score (e.g. "73.2") means that the model is better than 0xSero/NVIDIA-Nemotron-3-Super-120B-A12B-BF16-REAP-50pct-draft.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a