Dolly V2 12B by databricks


Tags: Autotrain compatible, Dataset: databricks/databricks-..., En, Gpt neox, Pytorch, Region: us
Model Card on HF 🤗: https://huggingface.co/databricks/dolly-v2-12b

Dolly V2 12B Benchmarks

Dolly V2 12B (databricks/dolly-v2-12b)

Dolly V2 12B Parameters and Internals

Model Type: causal language model

Use Cases
Primary Use Cases: instruction following
Limitations: syntactically complex prompts, programming problems, mathematical operations, factual errors, dates and times, open-ended question answering, hallucination, enumerating lists of specific length, stylistic mimicry, humor, well-formatted letter writing

Training Details
Data Sources: EleutherAI Pythia, databricks-dolly-15k
Methodology: instruction fine-tuning (a sketch of the prompt format is shown below)
Model Architecture: derived from EleutherAI's Pythia-12b
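To make "instruction fine-tuning" concrete, the sketch below builds an instruction-style prompt of the kind used by Dolly-style models. The intro blurb and the `### Instruction:` / `### Response:` markers are an assumption modeled on the pipeline published in the databricks/dolly repository; they are not stated in this listing, so treat the template as illustrative only.

```python
# Hypothetical prompt template (assumed, based on the databricks/dolly repo's
# published instruction pipeline; not taken from this page).
INTRO = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
)

def build_prompt(instruction: str) -> str:
    # The model is expected to continue the text after "### Response:".
    return f"{INTRO}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt("Explain the difference between nuclear fission and fusion."))
```

When loaded with `trust_remote_code=True` (see the next example), the model's bundled pipeline applies a template like this for you, so you normally pass only the raw instruction text.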
Input Output
Accepted Modalities: text
Performance Tips: pass `torch_dtype=torch.bfloat16` to reduce memory usage if your hardware supports it, and load the pipeline with `trust_remote_code=True` (see the example below).
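A minimal loading sketch that puts both tips together, assuming the Hugging Face `transformers` pipeline API and `accelerate`-style device placement; adjust the device settings to your hardware.

```python
import torch
from transformers import pipeline

# Load Dolly V2 12B with bfloat16 to reduce memory use (if supported) and
# trust_remote_code so the model's custom instruction-following pipeline loads.
generate_text = pipeline(
    model="databricks/dolly-v2-12b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",  # assumes `accelerate` is installed; otherwise set a device explicitly
)

print(generate_text("Explain to me the difference between nuclear fission and fusion."))
```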
LLM Name: Dolly V2 12B
Repository 🤗: https://huggingface.co/databricks/dolly-v2-12b
Model Size: 12b
Required VRAM: 23.8 GB
Updated: 2025-09-23
Maintainer: databricks
Model Type: gpt_neox
Model Files: 23.8 GB
Supported Languages: en
Model Architecture: GPTNeoXForCausalLM
License: mit
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.25.1
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50280
Torch Data Type: bfloat16
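If you want to double-check the architecture numbers above (model type, context length, vocabulary size, dtype) without downloading the 23.8 GB of weights, a small sketch using `transformers.AutoConfig` is shown below; the expected values in the comments simply restate this listing.

```python
from transformers import AutoConfig

# Fetch only the model configuration; this does not download the weights.
config = AutoConfig.from_pretrained("databricks/dolly-v2-12b")

print(config.model_type)               # expected: gpt_neox
print(config.max_position_embeddings)  # expected: 2048 (context length)
print(config.vocab_size)               # expected: 50280
print(config.torch_dtype)              # expected: bfloat16
```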

Best Alternatives to Dolly V2 12B

Best Alternatives | Context / RAM | Downloads | Likes
...sst Sft 4 Pythia 12B Epoch 3.5 | 2K / 23.8 GB | 2874 | 370
Pythia 12B | 2K / 23.8 GB | 7784 | 141
Oasst Sft 1 Pythia 12B | 2K / 23.8 GB | 1981 | 277
Pythia 12B Deduped | 2K / 23.8 GB | 6814 | 52
H2ogpt Gm Oasst1 En 1024 12B | 2K / 23.8 GB | 1919 | 5
...ythia 12B Sft V8 Rlhf 2K Steps | 2K / 23.8 GB | 1836 | 0
Pythia 12B Pre V8.12.5K Steps | 2K / 23.8 GB | 1829 | 6
H2ogpt Oasst1 512 12B | 2K / 23.9 GB | 1778 | 29
Pythia 12B Sft V8.2.5K Steps | 2K / 23.8 GB | 1651 | 0
Instruct 12B | 2K / 47.6 GB | 1848 | 16

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124