LLaMA 65B HF by boboto

Tags: Autotrain compatible · Endpoints compatible · Llama · PyTorch · Region: US · Sharded
Model Card on HF 🤗: https://huggingface.co/boboto/LLaMA-65B-HF

LLaMA 65B HF Parameters and Internals

Model Type:
auto-regressive language model

Use Cases:
Areas: research on large language models
Primary Use Cases: question answering, natural language understanding, reading comprehension
Limitations: generation of misinformation; generation of harmful, biased, or offensive content
Considerations: The model is intended for research and is not recommended for use in downstream applications without further evaluation.

Supported Languages:
bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk (proficiency not stated)

Training Details:
Data Sources: CCNet, C4, GitHub, Wikipedia, Books, ArXiv, Stack Exchange
Methodology: transformer architecture
Training Time: between Dec. 2022 and Feb. 2023

Responsible AI Considerations:
Fairness: Biases have been evaluated.
Mitigation Strategies: Web data was filtered by its proximity to Wikipedia, using a Kneser-Ney language model and a fastText linear classifier (see the sketch below).
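A minimal sketch of that filtering step, assuming the fasttext and kenlm Python packages. The file names and thresholds here are hypothetical; this illustrates the technique named on the card, not Meta's actual pipeline:

# Illustrative Wikipedia-proximity filter. Two signals, mirroring the card:
# a fastText linear classifier scoring how "Wikipedia-like" a page is, and
# perplexity under a Kneser-Ney language model trained on Wikipedia text.
import fasttext  # pip install fasttext
import kenlm     # pip install kenlm

# Hypothetical training file with lines like
# "__label__wiki <text>" / "__label__web <text>"
clf = fasttext.train_supervised(input="wiki_vs_web.train.txt", epoch=5)

# Hypothetical Kneser-Ney LM built from Wikipedia with kenlm's lmplz tool
lm = kenlm.Model("wikipedia.arpa")

def keep_page(text: str, ppl_cutoff: float = 1000.0, min_prob: float = 0.5) -> bool:
    """Keep a crawled page only if it looks close enough to Wikipedia."""
    # fastText predict() expects single-line input
    labels, probs = clf.predict(text.replace("\n", " "))
    wiki_like = labels[0] == "__label__wiki" and probs[0] >= min_prob
    fluent = lm.perplexity(text) <= ppl_cutoff  # lower perplexity = more fluent
    return wiki_like and fluent

Pages failing either test (not Wikipedia-like, or too high in perplexity) would be dropped from the training corpus.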
LLM Name: LLaMA 65B HF
Repository 🤗: https://huggingface.co/boboto/LLaMA-65B-HF
Model Size: 65b
Required VRAM: 73.6 GB
Updated: 2025-09-14
Maintainer: boboto
Model Type: llama
Model Files: 1.6 GB shards, numbered 0-of-81 through 45-of-81 in the source listing
Model Architecture: LLaMAForCausalLM
License: other
Transformers Version: 4.27.0.dev0
Vocabulary Size: 32000
Torch Data Type: float16
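
Given the specs above, a minimal loading sketch with Hugging Face transformers (assumes a recent transformers release plus accelerate; the card's own 4.27.0.dev0 predates the rename of LLaMAForCausalLM to LlamaForCausalLM, and the prompt below is illustrative):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "boboto/LLaMA-65B-HF"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    device_map="auto",          # spread the sharded weights across available GPUs
)

inputs = tokenizer("Large language models are", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))

At float16, 65B parameters take roughly 65e9 × 2 bytes ≈ 130 GB, consistent with 81 shards of ~1.6 GB each; the 73.6 GB figure above appears to sum only the 46 shards listed (46 × 1.6 GB = 73.6 GB).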

Best Alternatives to LLaMA 65B HF

Best Alternatives     Context / RAM    Downloads/Likes*
Llama 65B Hf          0K / 72 GB       8123
Llama 65B Hf          0K / 75.2 GB     60
Llama 65B             0K / 72 GB       210
Llama 65B Int4        0K / 33.5 GB     177
Deepshard 65B Raw     0K / 73.6 GB     61
Llama 65B 4bit        0K / 33.5 GB     106

*Download and like counts are run together in the source data and cannot be reliably separated.
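The Int4 and 4bit rows above come in around 33.5 GB, roughly what 65B parameters occupy at about 4 bits each. A hedged sketch of how such a footprint is commonly reached at load time with bitsandbytes quantization in transformers (an assumption about those repos, not their documented recipe):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # store weights in 4-bit, compute in fp16
)
model = AutoModelForCausalLM.from_pretrained(
    "boboto/LLaMA-65B-HF",
    quantization_config=bnb,
    device_map="auto",
)
# 65B params at ~0.5 bytes each ≈ 33 GB, matching the table's 33.5 GB entries.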

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124