UltraLong Thinking by mergekit-community


UltraLong Thinking is an open-source language model by mergekit-community: an 8B-parameter, instruction-based Llama merge with a 4192K-token context window, requiring about 16.1 GB of VRAM. LLM Explorer Score: 0.19.

Tags: Merged Model · Base model: mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1 · Base model: nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct · Conversational · Endpoints compatible · Instruct · Llama · Region: us · Safetensors · Sharded · Tensorflow
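
Because this is a mergekit merge of the two base models tagged above, a broadly similar model can be reproduced with a mergekit YAML config. The sketch below is illustrative only: the merge method (SLERP), layer ranges, and interpolation weight are assumptions, not the published recipe, which lives on the model card.

```python
# Hypothetical reconstruction of the merge. Everything below (method,
# layer ranges, interpolation weight) is assumed for illustration; the
# actual recipe is documented on the UltraLong-Thinking model card.
import subprocess
from pathlib import Path

MERGE_CONFIG = """\
slices:
  - sources:
      - model: nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct
        layer_range: [0, 32]  # Llama-3.1-8B has 32 decoder layers
      - model: mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1
        layer_range: [0, 32]
merge_method: slerp           # assumed; mergekit also offers linear, ties, dare_ties, ...
base_model: nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct
parameters:
  t: 0.5                      # assumed interpolation weight
dtype: bfloat16
"""

Path("merge-config.yml").write_text(MERGE_CONFIG)

# mergekit (pip install mergekit) ships the mergekit-yaml CLI entry point.
subprocess.run(["mergekit-yaml", "merge-config.yml", "./UltraLong-Thinking"], check=True)
```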

UltraLong Thinking Benchmarks

Benchmark scores on this page are percentages showing how the model compares to reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"). No benchmark scores are currently listed for UltraLong Thinking.

UltraLong Thinking Parameters and Internals

LLM Name: UltraLong Thinking
Repository: https://huggingface.co/mergekit-community/UltraLong-Thinking
Base Model(s): mobiuslabsgmbh/DeepSeek-R1-ReDistill-Llama3-8B-v1.1, nvidia/Llama-3.1-Nemotron-8B-UltraLong-4M-Instruct
Merged Model: Yes
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2026-04-11
Maintainer: mergekit-community
Model Type: llama
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Model Architecture: LlamaForCausalLM
Context Length: 4,292,608 tokens (4192K)
Model Max Length: 4,292,608 tokens
Transformers Version: 4.51.1
Tokenizer Class: PreTrainedTokenizer
Vocabulary Size: 129,024
Torch Data Type: bfloat16
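
These parameters map directly onto a standard transformers load. A minimal sketch follows; the device placement and the sample prompt are illustrative, and it assumes the tokenizer ships a chat template, as is typical for instruct models:

```python
# Minimal sketch of loading the merged model with Hugging Face transformers.
# Weights are sharded safetensors (4 files); from_pretrained fetches and
# assembles them automatically.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mergekit-community/UltraLong-Thinking"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # place layers on available GPU(s), offload the rest
)

print(model.config.max_position_embeddings)  # expected: 4292608

# Illustrative prompt; assumes a chat template is present in the tokenizer.
messages = [{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The 16.1 GB VRAM figure covers the bfloat16 weights alone (8B parameters × 2 bytes ≈ 16 GB); using anything near the 4M-token context additionally requires a very large KV cache, so long-context runs call for multi-GPU setups or offloading.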

Best Alternatives to UltraLong Thinking

Best Alternatives | Context / RAM | Downloads | Likes
...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 891 | 19
...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 176 | 24
...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 875 | 9
...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 911 | 7
Cthulhu 8B V1.4 | 1048K / 16.1 GB | 12 | 9
...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 3440 | 55
...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 1387 | 29
Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 7 | 1
...dger Nu Llama 3.1 8B UltraLong | 1048K / 16.2 GB | 10 | 3
....1 1million Ctx Dark Planet 8B | 1048K / 32.3 GB | 8 | 3

Original data from HuggingFace, OpenCompass and various public git repos.