Tulu 30B by allenai


Arxiv:2301.13688 · Arxiv:2302.13971 · Arxiv:2304.03277 · Arxiv:2304.07327 · Arxiv:2306.04751 · Autotrain compatible · Dataset:databricks/databricks-dolly-15k · Dataset:openassistant/oasst1 · Dataset:sahil2801/codealpaca-20k · En · Endpoints compatible · Llama · Pytorch · Region:us · Sharded
Model Card on HF 🤗: https://huggingface.co/allenai/tulu-30b

Tulu 30B Benchmarks

Benchmark scores are shown as percentages indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Tulu 30B Parameters and Internals

Model Type: instruction tuning
Additional Notes: This is a model diff; to use it, the diff must be applied on top of the base LLaMA weights (a sketch of the general idea follows below).
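The card itself does not spell out the recovery steps (the authors' open-instruct repository does). Purely as an illustration of the general idea, here is a minimal sketch that assumes the diff was stored as tuned-minus-base and uses placeholder paths; it is not the official recovery script:

```python
# Minimal sketch (not the official open-instruct recovery script) of applying
# a weight diff on top of base LLaMA weights. Assumption: the diff was stored
# as tuned - base. All paths are placeholders.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/llama-30b", torch_dtype=torch.float32)
diff = AutoModelForCausalLM.from_pretrained("allenai/tulu-30b", torch_dtype=torch.float32)

base_sd = base.state_dict()
with torch.no_grad():
    for name, tensor in diff.state_dict().items():
        base_tensor = base_sd.get(name)
        if base_tensor is None:
            continue  # parameter only present in the tuned model
        if tensor.shape == base_tensor.shape:
            tensor.add_(base_tensor)
        else:
            # Tulu uses a 32001-token vocabulary (one extra pad token) vs 32000
            # in base LLaMA, so embedding/lm_head rows only partially overlap.
            rows = base_tensor.shape[0]
            tensor[:rows].add_(base_tensor)

diff.save_pretrained("path/to/tulu-30b-recovered")
```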
Supported Languages: en (proficient)
Training Details:
Data Sources: databricks/databricks-dolly-15k, OpenAssistant/oasst1, sahil2801/CodeAlpaca-20k, FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, ShareGPT
Methodology: finetuned on a mixture of instruction datasets
Input Output
Input Format (note the newlines):
<|user|>
Your message here!
<|assistant|>
Accepted Modalities: text
Output Format: generated text continues after the <|assistant|> tag
Performance Tips: include a newline after <|assistant|> for best generation quality (see the example below)
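To make the format concrete, here is a small sketch of building a single-turn prompt with the documented newline placement; the helper name is ours, not part of the model card:

```python
def build_tulu_prompt(user_message: str) -> str:
    # Single-turn prompt in the <|user|>/<|assistant|> layout; the trailing
    # newline after <|assistant|> follows the card's performance tip.
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = build_tulu_prompt("Write a haiku about the Arctic.")
# The model's reply is generated as a continuation of this string.
```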
LLM Name: Tulu 30B
Repository 🤗: https://huggingface.co/allenai/tulu-30b
Model Size: 30b
Required VRAM: 130.5 GB
Updated: 2025-08-19
Maintainer: allenai
Model Type: llama
Model Files: 9.9 GB (1-of-14), 9.7 GB (2-of-14), 9.8 GB (3-of-14), 10.0 GB (4-of-14), 9.8 GB (5-of-14), 9.9 GB (6-of-14), 9.9 GB (7-of-14), 9.8 GB (8-of-14), 10.0 GB (9-of-14), 9.8 GB (10-of-14), 9.9 GB (11-of-14), 9.9 GB (12-of-14), 9.8 GB (13-of-14), 2.3 GB (14-of-14)
Supported Languages: en
Model Architecture: LlamaForCausalLM
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.29.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32001
Torch Data Type: float32
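The checkpoint is stored in float32 (about 130.5 GB across 14 shards), so loading in half precision roughly halves the memory footprint. Below is a minimal sketch using standard transformers calls, assuming the weight diff has already been applied and the recovered checkpoint sits at a placeholder local path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/tulu-30b-recovered"  # placeholder: diff already applied

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # ~65 GB instead of ~130 GB in float32
    device_map="auto",          # shard across available GPUs (needs accelerate)
)

prompt = "<|user|>\nSummarize the LLaMA paper in two sentences.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```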

Quantized Models of the Tulu 30B

Model | Likes | Downloads | VRAM
Tulu 30B GGUF | 0 | 47 | 13 GB
Tulu 30B GPTQ | 10 | 15 | 16 GB
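For the GGUF quantization, one common way to run it locally is llama.cpp via its Python bindings; a rough sketch with a placeholder file name and the model's 2048-token context:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder GGUF file name; pick whichever quantization level fits your hardware.
llm = Llama(model_path="tulu-30b.Q3_K_M.gguf", n_ctx=2048)

prompt = "<|user|>\nExplain instruction tuning in one paragraph.\n<|assistant|>\n"
out = llm(prompt, max_tokens=256, stop=["<|user|>"])
print(out["choices"][0]["text"])
```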

Best Alternatives to Tulu 30B

Best Alternatives | Context / RAM | Downloads | Likes
Flash Llama 30M 20001 | 32K / 0.1 GB | 1769 | 0
Smaug Slerp 30B V0.1 | 32K / 60.4 GB | 5 | 0
Tenebra 30B Alpha01 | 16K / 65 GB | 14 | 12
Llama33b 16K | 16K / 65.2 GB | 14 | 1
Yayi2 30B Llama | 4K / 121.2 GB | 922 | 22
... Tokens By Perplexity Bottom K | 4K / 5.4 GB | 5 | 0
...via Sample With Temperature2.0 | 4K / 5.4 GB | 5 | 0
...lue Sample With Temperature2.0 | 4K / 5.4 GB | 5 | 0
... Tokens By Writing Style Top K | 4K / 5.4 GB | 5 | 0
Yayi2 30B Llama | 4K / 121.2 GB | 18 | 22

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124