StableLM 2 1.6B by stabilityai


StableLM 2 1.6B is an open-source language model by Stability AI. Features: 1.6B-parameter LLM, VRAM: 3.3 GB, Context: 4K, License: other, HF Score: 45.3, LLM Explorer Score: 0.19, ARC: 43.3, HellaSwag: 70.5, MMLU: 39.0, TruthfulQA: 36.8, WinoGrande: 64.6, GSM8K: 17.4.

arXiv: 1607.06450, 1910.02054, 1910.07467, 2101.00027, 2104.09864, 2204.06745, 2206.11147, 2305.06161, 2305.14201, 2307.09288, 2309.09400, 2309.16609, 2402.17834
Datasets: bigcode/starcoderdata, carperai/pilev2-dev, dataprovenanceinitiati..., tiiuae/falcon-refinedw..., togethercomputer/redpa..., uonlp/culturax
Languages: de, en, es, fr, it, nl, pt
Other tags: Endpoints compatible, Region:us, Safetensors, Stablelm

StableLM 2 1.6B Benchmarks

StableLM 2 1.6B (stabilityai/stablelm-2-1_6b)

StableLM 2 1.6B Parameters and Internals

Model Type: causal-lm

Use Cases
Areas: research, commercial applications
Applications: foundational base model for application-specific fine-tuning
Limitations: may exhibit unreliable, unsafe, or otherwise undesirable behaviors that must be corrected before deployment; pre-training data may have contained offensive or inappropriate content

Supported Languages: en, de, es, fr, it, nl, pt (all at intermediate proficiency)

Training Details
Data Sources: tiiuae/falcon-refinedweb, togethercomputer/RedPajama-Data-1T, uonlp/CulturaX, CarperAI/pilev2-dev, bigcode/starcoderdata, DataProvenanceInitiative/Commercially-Verified-Licenses
Data Volume: 2 trillion tokens
Methodology: pre-trained on diverse multilingual and code datasets for two epochs
Context Length: 4096
Hardware Used: 512 NVIDIA A100 40GB GPUs (AWS P4d instances)
Model Architecture: decoder-only transformer similar to the LLaMA architecture, with modifications

Input/Output
Input Format: prompts in tokenized form
Accepted Modalities: text
Output Format: generated text
Performance Tips: fine-tuning the base model is recommended for downstream tasks
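Given the spec sheet on this page (Transformers 4.38.0, GPT2TokenizerFast, float16 weights), a minimal generation sketch with the Hugging Face `transformers` library might look like the following. This is an illustrative sketch, not an official snippet from Stability AI; the prompt and sampling settings are arbitrary examples.

```python
# Minimal generation sketch for stabilityai/stablelm-2-1_6b.
# Assumes transformers >= 4.38.0 (the version listed for this model)
# and roughly 3.3 GB of VRAM in float16, or CPU fallback.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "stabilityai/stablelm-2-1_6b"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the base model and continue the given prompt."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # matches the listed Torch Data Type
        device_map="auto",          # place weights on GPU if available
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Note that this is the raw base model with no chat template; calling `generate(...)` downloads ~3.3 GB of weights on first use, and per the Performance Tips above, fine-tuning is recommended before using it for downstream tasks.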
LLM Name: Stablelm 2 1 6b
Repository 🤗: https://huggingface.co/stabilityai/stablelm-2-1_6b
Model Size: 1.6b
Required VRAM: 3.3 GB
Updated: 2026-01-13
Maintainer: stabilityai
Model Type: stablelm
Model Files: 3.3 GB
Supported Languages: en, de, es, fr, it, nl, pt
Model Architecture: StableLmForCausalLM
License: other
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.38.0
Tokenizer Class: GPT2TokenizerFast
Vocabulary Size: 100352
Torch Data Type: float16
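The listed 3.3 GB VRAM requirement is consistent with storing the weights in float16: each parameter takes 2 bytes, and the model has roughly 1.64 billion parameters (the exact count is not given on this page; 1.64B is an assumption inferred from the "1_6b" repo name). A quick sanity check:

```python
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory for raw weights in decimal gigabytes.

    Covers weights only -- activations and KV cache need extra memory.
    Default of 2 bytes/param corresponds to float16, the listed dtype.
    """
    return n_params * bytes_per_param / 1e9

# ~1.64e9 params (assumed) at 2 bytes each:
print(round(weight_memory_gb(1.64e9), 2))  # -> 3.28
```

The result (~3.28 GB) lines up with the 3.3 GB model files and VRAM figures in the table above.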

Best Alternatives to StableLM 2 1.6B

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Stablelm 2 1 6b Chat | 4K / 6.6 GB | 1818 | 34 |
| Stablelm 2 1 6b Sft Full | 4K / 3.3 GB | 7 | 0 |
| Cot 5k | 4K / 3.3 GB | 79 | 0 |
| StableGPT4 Micro 1.6B | 4K / 6.6 GB | 81 | 1 |
| StableLM FineTune GPT4 | 4K / 6.6 GB | 5 | 1 |
| Parrot 1 6B | 4K / 3.3 GB | 4 | 1 |
| Stablelm 2 1 6b | 4K / GB | 3 | 2 |
| Stablelm 2 Zephyr 1 6b | 4K / GB | 7 | 1 |
| Stablelm 2 Zephyr 1 6b | 4K / 3.3 GB | 12 | 1 |
| Stablelm 2 Zephyr 1 6b Q4 | 4K / GB | 5 | 1 |
Note: green Score (e.g. "73.2") means that the model is better than stabilityai/stablelm-2-1_6b.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a