Llama 65B Hf by Enoch


Tags: Autotrain compatible · Endpoints compatible · Llama · PyTorch · Region: us · Sharded
Model Card on HF 🤗: https://huggingface.co/Enoch/llama-65b-hf


Llama 65B Hf Parameters and Internals

Model Type:
Auto-regressive, transformer-based language model

Use Cases:
Primary use cases: research on large language models; exploring applications such as question answering and reading comprehension; evaluating and mitigating biases; determining the capabilities and limitations of models.
Limitations: the base model is not suitable for downstream applications without further risk evaluation and mitigation.

Supported Languages:
en (high proficiency); the training data covered 20 languages in total, but the model mainly supports English.

Training Details:
Data sources: CCNet, C4, GitHub, Wikipedia, Books, ArXiv, Stack Exchange
Data volume: roughly 1T tokens for the smaller LLaMA models and 1.4T tokens for the 33B/65B models
Model architecture: transformer

Responsible AI Considerations:
Fairness: the model reflects biases present in its web sources. Evaluated bias categories include gender, religion, race, sexual orientation, age, nationality, disability, physical appearance, and socioeconomic status.
Transparency: the model was trained on web-sourced data, which may contain biased and harmful content.
Accountability: use the GitHub repository to raise questions or comments.
Mitigation strategies: training data was filtered based on its proximity to Wikipedia text, using a Kneser-Ney language model and a fastText linear classifier.
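
As a loose illustration of that last step, the sketch below trains a fastText classifier to score how Wikipedia-like a document is. The file name, labels, and threshold are hypothetical, and this is not the actual LLaMA filtering pipeline:

    import fasttext

    # Train a supervised classifier on a hypothetical file where each line is
    # "__label__wiki <encyclopedic text>" or "__label__web <generic crawl text>".
    model = fasttext.train_supervised(input="quality_train.txt")

    def keep_document(text: str, threshold: float = 0.5) -> bool:
        # fastText's predict() expects a single line, so strip newlines first.
        labels, probs = model.predict(text.replace("\n", " "))
        return labels[0] == "__label__wiki" and probs[0] >= threshold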
LLM Name: Llama 65B Hf
Repository 🤗: https://huggingface.co/Enoch/llama-65b-hf
Model Size: 65B
Required VRAM: 72 GB
Updated: 2025-09-14
Maintainer: Enoch
Model Type: llama
Model Files: 81 shards of 1.6 GB each (shards 1-of-81 through 45-of-81 listed)
Model Architecture: LLaMAForCausalLM
License: other
Transformers Version: 4.28.0.dev0
Vocabulary Size: 32000
Torch Data Type: float16
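
As a quick sanity check on the sizes above: fp16 stores 2 bytes per parameter, so the full checkpoint weighs about 65e9 × 2 bytes ≈ 130 GB, which matches 81 shards × 1.6 GB; the 72 GB figure matches the 45 × 1.6 GB shards actually listed. A minimal calculation:

    # Back-of-the-envelope fp16 weight footprint for a 65B-parameter model.
    params = 65e9
    bytes_per_param = 2                      # float16
    print(params * bytes_per_param / 1e9)    # ~130 GB, i.e. 81 shards x 1.6 GB

And a minimal loading sketch with Hugging Face Transformers, assuming a recent transformers plus accelerate and enough combined GPU/CPU memory; device_map="auto" spreads the fp16 shards across whatever devices are available:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Enoch/llama-65b-hf"

    # Older LLaMA conversions sometimes declare the legacy "LLaMATokenizer"
    # class; if AutoTokenizer rejects it, try transformers.LlamaTokenizer.
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,   # matches the checkpoint's dtype
        device_map="auto",           # requires the accelerate package
    )

    inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))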

Best Alternatives to Llama 65B Hf

Best Alternatives      Context / RAM     Downloads   Likes
Llama 65B Hf           0K / 75.2 GB      6           0
Llama 65B              0K / 72 GB        21          0
Llama 65B Int4         0K / 33.5 GB      17          7
LLaMA 65B HF           0K / 73.6 GB      11          19
Deepshard 65B Raw      0K / 73.6 GB      6           1
Llama 65B 4bit         0K / 33.5 GB      10          6
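
The Int4/4bit alternatives above are roughly a quarter of the fp16 footprint (65e9 × ~0.5 bytes ≈ 33 GB, hence the ~33.5 GB entries). As a hedged sketch, a comparable 4-bit model can be produced at load time with bitsandbytes; this assumes the bitsandbytes and accelerate packages and is not necessarily how those specific checkpoints were quantized:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # Quantize the fp16 weights to 4-bit NF4 on the fly while loading.
    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "Enoch/llama-65b-hf",
        quantization_config=quant_config,
        device_map="auto",
    )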


Original data from HuggingFace, OpenCompass, and various public Git repositories.
Release v20241124