DeciLM 7B by Deci


Tags: Autotrain compatible, Conversational, Custom code, Deci, En, Region: us, Safetensors, Sharded, Tensorflow
Model Card on HF 🤗: https://huggingface.co/Deci/DeciLM-7B

DeciLM 7B Benchmarks

DeciLM 7B (Deci/DeciLM-7B)

DeciLM 7B Parameters and Internals

Model Type 
decoder-only, text generation
Use Cases 
Areas:
commercial, research
Limitations:
The model has been tested primarily in English and may not work well in other languages. There is a risk of generating inaccurate, biased, or objectionable content.
Considerations:
Thorough safety testing and tuning are recommended before deployment.
Supported Languages 
en (high)
Training Details 
Methodology:
The model's architecture was generated using Deci's proprietary Neural Architecture Search technology, AutoNAC.
Context Length:
8192
Model Architecture:
Optimized transformer decoder architecture with variable Grouped-Query Attention.
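To illustrate what grouped-query attention buys here: several query heads share a single key/value head, so the KV cache shrinks by the query-to-KV head ratio. The sketch below uses hypothetical head counts for illustration only; DeciLM 7B's actual KV-head counts are chosen per layer by AutoNAC and vary across the network.

```python
import torch

# Hypothetical head counts for illustration only; these are NOT DeciLM 7B's
# real per-layer configuration (its KV-head counts vary by layer).
n_query_heads, n_kv_heads, head_dim, seq_len = 32, 4, 128, 16

q = torch.randn(1, n_query_heads, seq_len, head_dim)
k = torch.randn(1, n_kv_heads, seq_len, head_dim)  # only n_kv_heads K/V heads are cached
v = torch.randn(1, n_kv_heads, seq_len, head_dim)

# Each group of query heads attends to the same K/V head, so the KV cache is
# n_query_heads / n_kv_heads times smaller than in full multi-head attention.
group_size = n_query_heads // n_kv_heads
k = k.repeat_interleave(group_size, dim=1)
v = v.repeat_interleave(group_size, dim=1)

scores = (q @ k.transpose(-2, -1)) / head_dim ** 0.5
out = torch.softmax(scores, dim=-1) @ v
print(out.shape)  # torch.Size([1, 32, 16, 128])
```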
Responsible AI Considerations 
Transparency:
The outputs of DeciLM-7B are unpredictable and may be inaccurate, biased, or objectionable.
LLM Name: DeciLM 7B
Repository 🤗: https://huggingface.co/Deci/DeciLM-7B
Model Size: 7b
Required VRAM: 14.1 GB
Updated: 2025-08-20
Maintainer: Deci
Model Type: deci
Model Files: 5.0 GB (1-of-3), 4.9 GB (2-of-3), 4.2 GB (3-of-3)
Supported Languages: en
Model Architecture: DeciLMForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: bfloat16
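
Given the fields above (custom modeling code, DeciLMForCausalLM, bfloat16 weights, 8K context, ~14.1 GB VRAM), a minimal loading sketch with 🤗 Transformers could look like the following. It assumes transformers >= 4.35.2 plus the accelerate package, and a GPU with enough free memory for the bf16 weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Deci/DeciLM-7B"

tokenizer = AutoTokenizer.from_pretrained(repo)  # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
    device_map="auto",           # requires the accelerate package
    trust_remote_code=True,      # DeciLMForCausalLM ships as custom code in the repo
)

prompt = "DeciLM 7B is a decoder-only language model that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```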

Quantized Models of DeciLM 7B

Model | Likes | Downloads | VRAM
DeciLM 7B GPTQ | 1 | 5 | 4 GB
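
If the ~14 GB bf16 footprint is too large, a GPTQ export brings it down to roughly 4 GB, as the table suggests. A hedged sketch, assuming a community checkpoint such as TheBloke/DeciLM-7B-GPTQ (repo id assumed, not taken from this page) and the optimum + auto-gptq packages installed alongside transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed community GPTQ checkpoint; substitute the quantized repo you actually use.
repo = "TheBloke/DeciLM-7B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",        # requires accelerate
    trust_remote_code=True,   # DeciLM's custom modeling code is still needed
)  # 4-bit GPTQ weights load in roughly 4 GB of VRAM instead of ~14 GB

inputs = tokenizer("The main advantage of quantization is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```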

Best Alternatives to DeciLM 7B

Best Alternatives | Context / RAM | Downloads | Likes
DeciLM 7B Instruct 128K | 128K / 14.1 GB | 4 | 3
DeciLM 7B Instruct 32K | 32K / 14.1 GB | 8 | 3
DeciLM 7B Instruct | 8K / 14.1 GB | 2553 | 95
DeciLM Base ChatTuned Blogv0.2 | 8K / 14.1 GB | 6 | 1
M12 | 8K / 14.1 GB | 7 | 0
M11 | 8K / 14.1 GB | 5 | 0
M10 | 8K / 14.1 GB | 7 | 0
M9 | 8K / 14.1 GB | 7 | 0
M8 | 8K / 14.1 GB | 5 | 0
M7 | 8K / 14.1 GB | 6 | 0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124