Danube2 Upscale 1.7 is an open-source language model by Lambent: a 2.2B-parameter merged LLM that requires about 4.5 GB of VRAM, supports an 8K context window, and is released under the apache-2.0 license. HF Score: 45.1; LLM Explorer Score: 0.16.
Merged Model · arXiv:2203.05482 · Mistral · Safetensors · Sharded · TensorFlow · Endpoints compatible · Region: US
Base models: Lambent/danube2-upscale-1.53lisa, Lambent/danube2-upscale-1.51galore, Lambent/danube2-upscale-1.531qlora, Lambent/danube2-upscale-1.51qlora
Datasets: HuggingFaceTB/cosmopedia-100k, nampdn-ai/tiny-bridgedict, Severian/Internal-Knowledge-Map, Severian/Internal-Knowledge-Map-StoryWriter-RolePlaying, sordonia/redpajama-sample_from_valid_all, teknium/GPTeacher-General-Instruct, Vezora/Tested-22k-Python-Alpaca
Danube2 Upscale 1.7 Benchmarks
ARC: 43.3
HellaSwag: 69
MMLU: 40.4
TruthfulQA: 42.2
WinoGrande: 64.3
GSM8K: 11.3
Danube2 Upscale 1.7 Parameters and Internals
Model Type: pre-trained language model
Additional Notes: The training methodology is experimental and not easy to replicate.
Training Details
Data Sources: HuggingFaceTB/cosmopedia-100k, Vezora/Tested-22k-Python-Alpaca, sordonia/redpajama-sample_from_valid_all, nampdn-ai/tiny-bridgedict, teknium/GPTeacher-General-Instruct, Severian/Internal-Knowledge-Map, Severian/Internal-Knowledge-Map-StoryWriter-RolePlaying
Methodology: Linear merge of model weights (arXiv:2203.05482), duplicating layers 16-21 and then applying various repair methods to the upscaled stack; candidate merges were evaluated with the EQ-Bench benchmark. A sketch of the two core operations follows.
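The exact repair recipes are not published (the base-model names suggest LISA, GaLore, and QLoRA variants), so the following is only a minimal sketch of the two operations the methodology names: key-wise linear merging of state dicts and depth upscaling by layer duplication. The toy `nn.Linear` blocks stand in for real Mistral decoder layers, and the inclusive 16-21 index range is an assumption.

```python
import copy
import torch.nn as nn

def linear_merge(state_dicts, weights=None):
    """Key-wise weighted average of several same-architecture state dicts (arXiv:2203.05482)."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    return {
        key: sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
        for key in state_dicts[0]
    }

def duplicate_layers(layers, start, end):
    """Depth-upscale a decoder stack by repeating layers start..end (inclusive) after themselves."""
    repeated = [copy.deepcopy(layers[i]) for i in range(start, end + 1)]
    return nn.ModuleList(list(layers[: end + 1]) + repeated + list(layers[end + 1 :]))

# Toy demonstration with stand-in blocks instead of Mistral decoder layers.
stack = nn.ModuleList(nn.Linear(8, 8) for _ in range(24))
upscaled = duplicate_layers(stack, 16, 21)
print(len(upscaled))  # 30: layers 16-21 now appear twice

# Merging two checkpoints of the same architecture:
a, b = nn.Linear(8, 8).state_dict(), nn.Linear(8, 8).state_dict()
merged = linear_merge([a, b])
```

In practice the duplication would be applied to `model.model.layers` of a `MistralForCausalLM`, followed by the repair/fine-tuning passes the notes describe as hard to replicate.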
LLM Name: Danube2 Upscale 1.7
Repository: https://huggingface.co/Lambent/danube2-upscale-1.7
Base Model(s): Lambent/danube2-upscale-1.53lisa, Lambent/danube2-upscale-1.51galore, Lambent/danube2-upscale-1.531qlora, Lambent/danube2-upscale-1.51qlora
Merged Model: Yes
Model Size: 2.2b
Required VRAM: 4.5 GB
Updated: 2026-04-05
Maintainer: Lambent
Model Type: mistral
Model Files: 4.5 GB (1-of-1)
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
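Given the listed architecture (MistralForCausalLM, float16, 8192-token context), the checkpoint should load through the standard transformers API. A minimal sketch, not verified against this exact repository; the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lambent/danube2-upscale-1.7"  # repository from the listing above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the listed torch data type; ~4.5 GB of VRAM
)
model.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "Briefly explain what a merged language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```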