OpenAssistant SFT 7 Llama 30B HF by TheBloke


Tags: Arxiv:2304.07327 · Autotrain compatible · Endpoints compatible · Llama · PyTorch · Region: us · Sharded

OpenAssistant SFT 7 Llama 30B HF Benchmarks

OpenAssistant SFT 7 Llama 30B HF (TheBloke/OpenAssistant-SFT-7-Llama-30B-HF)

OpenAssistant SFT 7 Llama 30B HF Parameters and Internals

Model Type: text generation
Use Cases:
  Areas: research, commercial applications
Additional Notes: This model is the result of merging the released XOR deltas from the repo with the original Llama 30B weights.
Supported Languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk
Training Details:
  Data Sources: oasst_export, vicuna, dolly15k, grade_school_math_instructions, code_alpaca
  Methodology: merged XORs with the original Llama 30B weights, using epoch 7 of OpenAssistant's training
  Context Length: 2048
  Hardware Used: unknown
  Model Architecture: llama
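Because the original Llama weights could not be redistributed, OpenAssistant published its fine-tune as XOR deltas: files containing the byte-wise XOR of the fine-tuned weights with the base weights, which anyone holding the base model can invert to recover the fine-tune. The sketch below illustrates that idea only; the actual release was produced with OpenAssistant's xor_codec script over the real weight shards, and the byte buffers here are stand-ins.

```python
# Minimal sketch of the XOR-delta scheme, NOT the actual xor_codec tooling.
# The published delta is (finetuned XOR base); "merging XORs" means
# recomputing finetuned = (delta XOR base) from the base weights you hold.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length buffers."""
    assert len(a) == len(b), "buffers must match byte-for-byte"
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-in byte strings; real shards are multi-gigabyte weight files.
base = b"\x01\x02\x03\x04"       # original Llama 30B bytes (held locally)
finetuned = b"\x11\x22\x33\x44"  # OpenAssistant SFT-7 bytes (not distributed)

delta = xor_bytes(finetuned, base)      # what the repo actually publishes
recovered = xor_bytes(delta, base)      # the merge step users perform
assert recovered == finetuned           # XOR is its own inverse
```

Note that XOR is self-inverse, so the same operation serves both publishing and merging; the delta alone reveals nothing usable without the base weights.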
LLM Name: OpenAssistant SFT 7 Llama 30B HF
Repository: https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-HF
Model Size: 30b
Required VRAM: 65.2 GB
Updated: 2025-08-19
Maintainer: TheBloke
Model Type: llama
Model Files: 9.8 GB (1-of-7), 10.0 GB (2-of-7), 9.9 GB (3-of-7), 9.9 GB (4-of-7), 9.9 GB (5-of-7), 10.0 GB (6-of-7), 5.7 GB (7-of-7)
Model Architecture: LlamaForCausalLM
License: other
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.29.0.dev0
Tokenizer Class: LlamaTokenizer
End of Sentence Token: </s>
Unk Token: </s>
Vocabulary Size: 32016
Torch Data Type: float16

Quantized Models of the OpenAssistant SFT 7 Llama 30B HF

Model | Likes | Downloads | VRAM
...Assistant SFT 7 Llama 30B GPTQ | 35 | 2029 | 16 GB

Best Alternatives to OpenAssistant SFT 7 Llama 30B HF

Best Alternatives | Context / RAM | Downloads | Likes
Flash Llama 30M 20001 | 32K / 0.1 GB | 1769 | 0
Smaug Slerp 30B V0.1 | 32K / 60.4 GB | 5 | 0
Tenebra 30B Alpha01 | 16K / 65 GB | 14 | 12
Llama33b 16K | 16K / 65.2 GB | 14 | 1
Yayi2 30B Llama | 4K / 121.2 GB | 922 | 22
... Tokens By Perplexity Bottom K | 4K / 5.4 GB | 5 | 0
...via Sample With Temperature2.0 | 4K / 5.4 GB | 5 | 0
...lue Sample With Temperature2.0 | 4K / 5.4 GB | 5 | 0
... Tokens By Writing Style Top K | 4K / 5.4 GB | 5 | 0
Yayi2 30B Llama | 4K / 121.2 GB | 18 | 22
Note: a green score (e.g. "73.2") means the model outperforms TheBloke/OpenAssistant-SFT-7-Llama-30B-HF.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124