Dolly V2 7B Sharded by ethzanalytics


Tags: Autotrain compatible · Dataset: databricks/databricks-... · Dolly · Dolly-v2 · En · Endpoints compatible · Gpt neox · Instruct · Pytorch · Region: us · Sharded

Dolly V2 7B Sharded Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Dolly V2 7B Sharded (ethzanalytics/dolly-v2-7b-sharded)

Dolly V2 7B Sharded Parameters and Internals

Model Type: text generation
Additional Notes: The sharded checkpoint enables low-RAM loading, suitable for constrained environments such as Colab.
Supported Languages: en (English)
Input/Output (Accepted Modalities): text
LLM Name: Dolly V2 7B Sharded
Repository 🤗: https://huggingface.co/ethzanalytics/dolly-v2-7b-sharded
Model Size: 7b
Required VRAM: 27.7 GB
Updated: 2025-09-23
Maintainer: ethzanalytics
Model Type: gpt_neox
Model Files: 3.8 GB (1-of-8), 3.8 GB (2-of-8), 4.0 GB (3-of-8), 3.9 GB (4-of-8), 3.8 GB (5-of-8), 3.8 GB (6-of-8), 3.8 GB (7-of-8), 0.8 GB (8-of-8)
Supported Languages: en
Model Architecture: GPTNeoXForCausalLM
License: mit
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.28.1
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50280
Torch Data Type: float32
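As a sanity check, the 27.7 GB "Required VRAM" figure is simply the sum of the shard files, and follows from the parameter count at float32 precision. A minimal back-of-envelope sketch in Python — the ~6.9B parameter count is an assumption based on the Pythia-6.9B base commonly used by the dolly-v2-7b family, and is not stated on this page:

```python
# Shard sizes in GB, as listed under "Model Files" above.
SHARDS_GB = [3.8, 3.8, 4.0, 3.9, 3.8, 3.8, 3.8, 0.8]

# Assumption: ~6.9e9 parameters (Pythia-6.9B base; not stated on this page).
PARAMS = 6.9e9
BYTES_PER_PARAM = {"float32": 4, "float16": 2, "int8": 1}

def weight_memory_gb(dtype: str, params: float = PARAMS) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return params * BYTES_PER_PARAM[dtype] / 1e9

total = sum(SHARDS_GB)  # should land near the 27.7 GB "Required VRAM" row
for dtype in BYTES_PER_PARAM:
    print(f"{dtype}: ~{weight_memory_gb(dtype):.1f} GB")
print(f"shard total: {total:.1f} GB")
```

The float32 estimate (~27.6 GB) lines up with the 27.7 GB shard total, and the int8 estimate (~6.9 GB) is consistent with the roughly 7 GB footprint one would expect from an 8-bit quantization.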

Quantized Models of the Dolly V2 7B Sharded

Model | Likes | Downloads | VRAM
Dolly V2 7B Sharded 8bit | 11 | 1 | 7 GB
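To load the checkpoint in 8-bit (quantizing on the fly), the sharded repository can be used with `transformers` plus `bitsandbytes`. This is a minimal sketch, not a definitive recipe: the flags shown (`device_map`, `BitsAndBytesConfig`) assume a reasonably recent transformers release, newer than the 4.28.1 this checkpoint was saved with.

```python
def load_dolly_8bit(repo_id: str = "ethzanalytics/dolly-v2-7b-sharded"):
    """Illustrative sketch: load the sharded checkpoint in 8-bit.

    Requires a GPU plus the `accelerate` and `bitsandbytes` packages.
    """
    # Imports kept inside the function so merely defining it has no deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        quantization_config=BitsAndBytesConfig(load_in_8bit=True),
        device_map="auto",  # places shards on GPU/CPU as they stream in
    )
    return tokenizer, model
```

Because the weights ship in eight small shards, peak host RAM during loading stays near the largest shard (~4 GB) rather than the full 27.7 GB, which is the point of the sharded upload.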

Best Alternatives to Dolly V2 7B Sharded

Best Alternatives | Context / RAM | Downloads | Likes
Literature 7B 16384 | 16K / 36 GB | 91 | 5
RedPajama 7B 16384 | 16K / 36 GB | 7 | 4
Stablelm Tuned Alpha 7B | 4K / 31.9 GB | 2359 | 360
Stablelm Base Alpha 7B | 4K / 31.9 GB | 2125 | 209
Stablelm 7B Sft V7 Epoch 3 | 4K / 32.4 GB | 1820 | 67
StableLManticore 7B | 4K / 16 GB | 6 | 1
Pythia 6.9B Deduped 4K | 4K / 27.2 GB | 81 | 0
Stablelm 7B | 4K / 31.9 GB | 6 | 2
Open Calm 7B | 2K / 13.9 GB | 7000 | 205
Sarashina1 7B | 2K / 13.9 GB | 1290 | 0
Note: a green score (e.g. "73.2") means the model outperforms ethzanalytics/dolly-v2-7b-sharded.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124