Llama 2 70B GGUF by TheBloke


Tags: Arxiv:2307.09288 · Base model:meta-llama/llama-2-... · Base model:quantized:meta-llam... · En · Facebook · Gguf · Llama · Llama2 · Meta · Pytorch · Quantized · Region:us

Llama 2 70B GGUF Benchmarks

Llama 2 70B GGUF (TheBloke/Llama-2-70B-GGUF)

Llama 2 70B GGUF Parameters and Internals

Model Type 
text-generation
Use Cases 
Areas:
commercial use, research
Primary Use Cases:
assistant-like chat, natural language generation tasks
Considerations:
Specific formatting needed for expected features, including special tokens and tags.
Additional Notes 
Llama 2 models perform best with English datasets.
Supported Languages 
en (proficient)
Training Details 
Data Sources:
A new mix of publicly available online data
Data Volume:
2.0T tokens
Methodology:
auto-regressive language model
Context Length:
4096
Training Time:
January 2023 - July 2023
Hardware Used:
Meta's Research Super Cluster, production clusters, A100-80GB GPUs
Model Architecture:
optimized transformer architecture with Grouped-Query Attention (GQA) for improved inference scalability
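The GQA note above can be made concrete with a little arithmetic: by caching keys and values for only 8 KV heads instead of one per query head, the per-token KV cache shrinks by the grouping factor. The sketch below assumes the published Llama-2-70B configuration (80 layers, 64 query heads, 8 KV heads, head dimension 128, fp16 cache).

```python
# KV-cache size per token = 2 (K and V) * n_layers * n_kv_heads * head_dim * bytes_per_element.
# Config values below are the published Llama-2-70B settings (assumed here for illustration).
N_LAYERS = 80
N_HEADS = 64       # query heads; a plain MHA cache would store K/V for all of them
N_KV_HEADS = 8     # GQA shares one K/V head across each group of 8 query heads
HEAD_DIM = 128
BYTES = 2          # fp16

def kv_cache_bytes_per_token(n_kv_heads: int) -> int:
    return 2 * N_LAYERS * n_kv_heads * HEAD_DIM * BYTES

mha = kv_cache_bytes_per_token(N_HEADS)      # hypothetical full multi-head baseline
gqa = kv_cache_bytes_per_token(N_KV_HEADS)   # the GQA layout this model uses
print(gqa, mha // gqa)  # 327680 bytes/token, an 8x reduction vs. MHA
```

At a 4096-token context, that is roughly 1.3 GB of fp16 KV cache per sequence instead of ~10.7 GB, which is the "improved inference scalability" the architecture note refers to.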
Input Output 
Input Format:
Expected text format with '[INST]' and '<<SYS>>' tags, 'BOS' and 'EOS' tokens
Accepted Modalities:
text
Output Format:
text generation
Performance Tips:
Proper formatting required for intended outputs.
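As a sketch of the formatting requirement above, the single-turn Llama 2 chat prompt wraps the system message in `<<SYS>>` tags inside an `[INST]` block. The helper below is illustrative; the `<s>` BOS token is written out literally here, though most loaders add it automatically during tokenization.

```python
def build_llama2_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 2 chat prompt using the documented
    [INST] / <<SYS>> tags. Illustrative sketch, not the only valid layout."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "Explain GGUF in one sentence.",
)
print(prompt)
```

The model's reply is expected to follow the closing `[/INST]` and end with the EOS token.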
LLM Name: Llama 2 70B GGUF
Repository 🤗: https://huggingface.co/TheBloke/Llama-2-70B-GGUF
Model Name: Llama 2 70B
Model Creator: Meta Llama 2
Base Model(s): Llama 2 70B Hf (meta-llama/Llama-2-70b-hf)
Model Size: 70b
Required VRAM: 29.3 GB
Updated: 2025-09-13
Maintainer: TheBloke
Model Type: llama
Model Files: 29.3 GB, 36.1 GB, 33.2 GB, 29.9 GB, 38.9 GB, 41.4 GB, 39.1 GB, 47.5 GB, 48.8 GB, 47.5 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: llama2
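The file sizes listed above correspond to different GGUF quantization levels of the same 70B-parameter weights. A rough way to see which level a file represents is to convert its size to bits per weight; the calculation below assumes the nominal 70B parameter count (the exact figure is closer to 69B, so results are approximate).

```python
PARAMS = 70e9  # nominal parameter count, assumed for a rough estimate

def bits_per_weight(file_gb: float) -> float:
    """Approximate quantization level implied by a GGUF file size."""
    return file_gb * 1e9 * 8 / PARAMS

# Smallest and largest files listed on this card
print(round(bits_per_weight(29.3), 2))  # ~3.35, i.e. a 2-3 bit K-quant
print(round(bits_per_weight(48.8), 2))  # ~5.58, i.e. a 5-6 bit quant
```

This also explains the "Required VRAM: 29.3 GB" figure: it reflects the smallest quantized file, not the fp16 weights (~140 GB).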

Best Alternatives to Llama 2 70B GGUF

Best Alternatives | Context / RAM | Downloads / Likes
...us Qwen3 R1 Llama Distill GGUF | 0K / 0.8 GB | 11762
KafkaLM 70B German V0.1 GGUF | 0K / 25.5 GB | 302147
...gekit Passthrough Yqhuxcv GGUF | 0K / 16.9 GB | 90
CodeLlama 70B Instruct GGUF | 0K / 25.5 GB | 249859
Meta Llama 3 70B Instruct GGUF | 0K / 26.4 GB | 1884
CodeLlama 70B Python GGUF | 0K / 25.5 GB | 125944
DAD Model V2 70B Q4 | 0K / 42.5 GB | 100
CodeLlama 70B Hf GGUF | 0K / 25.5 GB | 56742
Llama 2 70B Guanaco QLoRA GGUF | 0K / 29.3 GB | 260
WizardMath 70B V1.0 GGUF | 0K / 29.3 GB | 120997


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124