Llama 30B by nonlinearshimada


Tags: Autotrain compatible · Endpoints compatible · Llama · PyTorch · Region: us · Sharded

Llama 30B Benchmarks

Benchmark scores are displayed as percentages (nn.n%) showing how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Llama 30B Parameters and Internals

Model Type:
Auto-regressive language model based on the transformer architecture.

Use Cases:
Areas: research, exploratory NLP tasks
Applications: question answering, reading comprehension, natural language understanding
Primary Use Cases: research on large language models; exploring potential applications
Limitations: has not been trained with human feedback and can therefore generate toxic or offensive content
Considerations: foundation model; should not be used in downstream applications without further risk evaluation and mitigation

Supported Languages:
Primary: English. Others: Spanish, French, German, Dutch, Italian, Portuguese, Russian, Chinese, etc.

Training Details:
Data Sources: CCNet, C4, GitHub, Wikipedia, Books, ArXiv, Stack Exchange
Data Volume: approximately 1T tokens for the smaller models, 1.4T tokens for the larger models
Model Architecture: transformer

Responsible AI Considerations:
Fairness: expected to reflect biases from its sources, since the training data comes from the internet; evaluated for various biases on RAI datasets.
Mitigation Strategies: web data was filtered for proximity to Wikipedia text using a Kneser-Ney language model and a fastText linear classifier (a sketch of the classifier step follows below).
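
The mitigation above follows the CCNet-style pipeline used for LLaMA's training data: candidate web pages are scored for similarity to Wikipedia text, and pages that score poorly are dropped. A minimal sketch of the classifier half using the fasttext package; the training file name, labels, and threshold are illustrative assumptions, not details from this card:

```python
import fasttext

# Train a linear text classifier on labeled documents.
# "train.txt" is a hypothetical file with one example per line:
#   __label__keep    <text of a Wikipedia-like page>
#   __label__discard <text of a low-quality page>
model = fasttext.train_supervised(input="train.txt", epoch=5, wordNgrams=2)

def keep_page(text: str, threshold: float = 0.5) -> bool:
    """Return True if the page is classified as Wikipedia-like."""
    # fastText predicts on single lines, so strip newlines first.
    labels, probs = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__keep" and probs[0] >= threshold

print(keep_page("The mitochondrion is an organelle found in most eukaryotic cells."))
```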
LLM Name: Llama 30B
Repository: https://huggingface.co/nonlinearshimada/llama-30b
Model Size: 30B
Required VRAM: 58.5 GB
Updated: 2025-06-25
Maintainer: nonlinearshimada
Model Type: llama
Model Files: sharded into 61 files of ~1.3 GB each (shards 0-of-61 through 44-of-61 listed, totaling 58.5 GB)
Model Architecture: LLaMAForCausalLM
License: other
Transformers Version: 4.27.0.dev0
Vocabulary Size: 32000
Torch Data Type: float16
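
The float16 data type also explains the VRAM figure: 30B parameters × 2 bytes ≈ 60 GB, close to the listed 58.5 GB. Assuming the repository follows the standard Hugging Face LLaMA layout, a minimal loading sketch with a recent transformers release (note the card lists the older LLaMAForCausalLM class name, so the checkpoint's config may need its architectures field updated to LlamaForCausalLM on modern versions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nonlinearshimada/llama-30b"  # repository listed above

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",          # needs `accelerate`; spreads the shards across devices
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```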

Best Alternatives to Llama 30B

Best Alternatives      | Context / RAM | Downloads | Likes
Llama 30B Int4         | 0K / 17 GB    | 18        | 2
Llama 30B Int4         | 0K / 17 GB    | 17        | 8
Llama 30B 3bit Gr128   | 0K / 14 GB    | 11        | 4
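
The alternatives above are pre-quantized repacks that shrink the footprint from 58.5 GB to 14–17 GB. A comparable result can be had by quantizing the full-precision checkpoint on the fly; a minimal sketch using transformers with bitsandbytes (requires a CUDA GPU, and assumes the checkpoint loads under a modern transformers version):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # do matmuls in fp16
)

model = AutoModelForCausalLM.from_pretrained(
    "nonlinearshimada/llama-30b",
    quantization_config=quant_config,
    device_map="auto",
)
```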


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124