Danube2 Upscale 1.7 by Lambent


Danube2 Upscale 1.7 is an open-source language model by Lambent. Features: 2.2b LLM, VRAM: 4.5GB, Context: 8K, License: apache-2.0, Merged, HF Score: 45.1, LLM Explorer Score: 0.16, Arc: 43.3, HellaSwag: 69, MMLU: 40.4, TruthfulQA: 42.2, WinoGrande: 64.3, GSM8K: 11.3.
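The HF Score above is consistent with the unweighted mean of the six Open LLM Leaderboard benchmarks listed. A quick sanity check (values taken from the line above):

```python
# Open LLM Leaderboard results listed for Danube2 Upscale 1.7
scores = {
    "ARC": 43.3,
    "HellaSwag": 69.0,
    "MMLU": 40.4,
    "TruthfulQA": 42.2,
    "WinoGrande": 64.3,
    "GSM8K": 11.3,
}

# The HF Score is the simple average of the six benchmarks
hf_score = sum(scores.values()) / len(scores)
print(round(hf_score, 1))  # → 45.1
```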

Merged Model · Arxiv:2203.05482 · Base model:lambent/danube2-ups... · Base model:lambent/danube2-ups... · Base model:lambent/danube2-ups... · Base model:lambent/danube2-ups... · Dataset:huggingfacetb/cosmoped... · Dataset:nampdn-ai/tiny-bridged... · Dataset:severian/internal-know... · Dataset:severian/internal-know... · Dataset:sordonia/redpajama-sam... · Dataset:teknium/gpteacher-gene... · Dataset:vezora/tested-22k-pyth... · Endpoints compatible · Mistral · Region:us · Safetensors · Sharded · Tensorflow

Danube2 Upscale 1.7 Benchmarks

Danube2 Upscale 1.7 (Lambent/danube2-upscale-1.7)

Danube2 Upscale 1.7 Parameters and Internals

Model Type 
pre-trained, language model
Additional Notes 
The training methodology is experimental and not easy to replicate.
Training Details 
Data Sources:
HuggingFaceTB/cosmopedia-100k, Vezora/Tested-22k-Python-Alpaca, sordonia/redpajama-sample_from_valid_all, nampdn-ai/tiny-bridgedict, teknium/GPTeacher-General-Instruct, Severian/Internal-Knowledge-Map, Severian/Internal-Knowledge-Map-StoryWriter-RolePlaying
Methodology:
Linear merge of layers, duplicating layers 16-21, with various repair methods attempted. Merges were evaluated using the EQ-Bench benchmark.
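The depth-upscaling described above can be sketched as a layer-index map. This is an illustrative reconstruction, not the author's actual merge recipe: it assumes a 24-layer base model and one common passthrough layout (the original stack up to the end of the duplicated span, the span repeated, then the remaining layers):

```python
def upscaled_layer_map(n_layers=24, dup_start=16, dup_stop=22):
    """Source-layer index for each layer of the depth-upscaled model.

    Layers dup_start..dup_stop-1 (i.e. 16-21) appear twice: once in
    the original stack and once as a duplicated block appended right
    after layer dup_stop-1.
    """
    prefix = list(range(dup_stop))                 # layers 0..21
    duplicated = list(range(dup_start, dup_stop))  # layers 16..21 again
    suffix = list(range(dup_stop, n_layers))       # layers 22..23
    return prefix + duplicated + suffix

layer_map = upscaled_layer_map()
print(len(layer_map))  # 30 layers after duplication
```

A linear merge (the method of arXiv:2203.05482 tagged above) would then average the weights of such upscaled variants element-wise.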
LLM Name: Danube2 Upscale 1.7
Repository: 🤗 https://huggingface.co/Lambent/danube2-upscale-1.7
Base Model(s): Lambent/danube2-upscale-1.53lisa, Lambent/danube2-upscale-1.51galore, Lambent/danube2-upscale-1.531qlora, Lambent/danube2-upscale-1.51qlora
Merged Model: Yes
Model Size: 2.2b
Required VRAM: 4.5 GB
Updated: 2026-04-05
Maintainer: Lambent
Model Type: mistral
Model Files: 4.5 GB (1-of-1)
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
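The 4.5 GB figure in the table is roughly what the weights alone occupy at float16 (2 bytes per parameter); activations and the KV cache at the full 8192-token context add to this at inference time. A back-of-the-envelope check:

```python
params = 2.2e9       # 2.2b parameters (from the table above)
bytes_per_param = 2  # float16 storage
weight_gb = params * bytes_per_param / 1e9
print(f"{weight_gb:.1f} GB")  # → 4.4 GB, matching the listed 4.5 GB file size
```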

Best Alternatives to Danube2 Upscale 1.7

Best Alternatives | Context / RAM | Downloads | Likes
... Phi 3 Medium AWQ 4bit Smashed | 4K / 7.8 GB | 9 | 0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a