Llama 2 13B Fp16 by TheBloke


Tags: Autotrain compatible, En, Facebook, Fp16, Llama, Llama2, Meta, Pytorch, Quantized, Region:us, Sharded

Llama 2 13B Fp16 Benchmarks

Llama 2 13B Fp16 (TheBloke/Llama-2-13B-fp16)

Llama 2 13B Fp16 Parameters and Internals

Model Type: text generation

Use Cases
Areas: Commercial, Research
Primary Use Cases: Dialogue and natural language generation tasks
Limitations: Use in languages other than English; use that violates applicable laws
Considerations: Specific formatting is required for the chat versions.

Supported Languages: English (primary)

Training Details
Data Sources: Publicly available online data
Data Volume: 2 trillion tokens
Methodology: Supervised fine-tuning and reinforcement learning with human feedback (RLHF)
Context Length: 4096
Training Time: Jan 2023 - July 2023
Hardware Used: Meta's Research Super Cluster, A100-80GB GPUs
Model Architecture: Auto-regressive, optimized transformer architecture

Responsible AI Considerations
Fairness: Testing has been conducted primarily in English and may not cover all scenarios, so outputs may be inaccurate or biased.
Transparency: Model outputs cannot be predicted in advance.
Accountability: Developers should perform safety testing before deploying applications.
Mitigation Strategies: Supervised fine-tuning and reinforcement learning with human feedback were used to align the model with human preferences.

Input Output
Input Format: text
Accepted Modalities: text
Output Format: text
Performance Tips: Follow the `[INST]` and `<<SYS>>` tags, the `BOS` and `EOS` tokens, and correct whitespace when prompting the chat variants.
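
The chat-format tips above refer to the Llama 2 chat prompt template. A minimal sketch of that layout is shown below; the system and user strings are placeholders, and the template only matters when this base fp16 checkpoint is swapped for one of the chat fine-tunes.

```python
# Llama 2 chat prompt layout (applies to the -chat variants; the base model
# here accepts free-form text). All strings below are illustrative.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

system_prompt = "You are a helpful assistant."    # placeholder system message
user_message = "Explain what fp16 weights are."   # placeholder user turn

# The tokenizer adds the BOS token (<s>) automatically; the model ends its
# reply with the EOS token (</s>).
prompt = f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_message} {E_INST}"
print(prompt)
```
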
LLM Name: Llama 2 13B Fp16
Repository: https://huggingface.co/TheBloke/Llama-2-13B-fp16
Base Model(s): Ligma L2 13B (kubernetes-bad/Ligma-L2-13b)
Model Size: 13b
Required VRAM: 26 GB
Updated: 2025-09-18
Maintainer: TheBloke
Model Type: llama
Model Files: 9.9 GB (1 of 3), 9.9 GB (2 of 3), 6.2 GB (3 of 3)
Supported Languages: en
Quantization Type: fp16
Model Architecture: LlamaForCausalLM
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.30.2
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
Torch Data Type: float16
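
Given the details above (fp16 weights in three shards totalling about 26 GB, `LlamaForCausalLM`, a 4096-token context), a minimal loading sketch with Hugging Face `transformers` could look like the following; the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Llama-2-13B-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizer, 32000-token vocab
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the fp16 checkpoint (~26 GB of weights)
    device_map="auto",          # spread across available GPUs, offload if needed
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```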

Best Alternatives to Llama 2 13B Fp16

Model | Context / RAM | Downloads / Likes
Llama13b 32K Illumeet Finetune | 32K / 26 GB | 90
...Maid V3 13B 32K 6.0bpw H6 EXL2 | 32K / 10 GB | 71
...Maid V3 13B 32K 8.0bpw H8 EXL2 | 32K / 13.2 GB | 71
WhiteRabbitNeo 13B V1 | 16K / 26 GB | 2625428
CodeLlama 13B Python Fp16 | 16K / 26 GB | 264625
CodeLlama 13B Instruct Fp16 | 16K / 26 GB | 266428
...Llama 13B Instruct Hf 4bit MLX | 16K / 7.8 GB | 12222
CodeLlama 13B Fp16 | 16K / 26 GB | 666
Airophin 13B Pntk 16K Fp16 | 16K / 26 GB | 16484
Codellama 13B Bnb 4bit | 16K / 7.2 GB | 445
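
Several of the alternatives above are quantized builds (EXL2, 4-bit MLX, bitsandbytes 4-bit) that trade some precision for a much smaller memory footprint. If the roughly 26 GB fp16 footprint is too large, the same checkpoint can also be loaded in 4-bit through bitsandbytes; the configuration below is a sketch under that assumption, not a setting taken from the table.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "TheBloke/Llama-2-13B-fp16"

# NF4 4-bit quantization with fp16 compute; weight memory drops to roughly 7-8 GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```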

Rank the Llama 2 13B Fp16 Capabilities

Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124