| Model Type | Text generation, multilingual |
|---|---|

| Use Cases | |
|---|---|
| Areas | |
| Applications | Assistant-like chat, knowledge retrieval, summarization, mobile AI-powered writing assistants |
| Limitations | Use in any manner violating applicable laws or regulations; use beyond explicitly referenced languages |

| Supported Languages | |
|---|---|
| Officially Supported | English, German, French, Italian, Portuguese, Hindi, Spanish, Thai |

| Training Details | |
|---|---|
| Data Sources | Publicly available online data |
| Data Volume | |
| Methodology | Supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF); see the SFT sketch below |
| Context Length | |
| Training Time | |
| Hardware Used | Meta's custom-built GPU cluster |
| Model Architecture | Auto-regressive language model using an optimized transformer architecture |

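As a point of reference only, the minimal sketch below shows what supervised fine-tuning of a causal language model looks like with the Hugging Face `Trainer`. The checkpoint name, the toy instruction/response pairs, and the hyperparameters are illustrative assumptions, not Meta's actual Llama 3.2 training recipe.

```python
# Minimal supervised fine-tuning (SFT) sketch for a causal LM with the Hugging
# Face Trainer. Checkpoint name, toy data, and hyperparameters are illustrative
# assumptions, not Meta's actual Llama 3.2 recipe.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "meta-llama/Llama-3.2-1B"  # assumed, gated base checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tiny in-memory instruction/response pairs standing in for a real SFT corpus.
pairs = [
    ("Summarize: The cat sat on the mat.", "A cat sat on a mat."),
    ("Translate to German: Good morning.", "Guten Morgen."),
]
dataset = Dataset.from_dict(
    {"text": [f"### Instruction:\n{p}\n### Response:\n{r}" for p, r in pairs]}
)


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)


tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama32-sft-demo",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        logging_steps=1,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
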
| Safety Evaluation | |
|---|---|
| Ethical Considerations | The model is not to be deployed in isolation, but as part of a broader AI system with safety guardrails; a simplified guardrail sketch follows below. |

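To make the "broader AI system with safety guardrails" point concrete, the sketch below wraps generation with input and output checks. The blocklist is a crude stand-in for a real safety classifier, and the checkpoint name is an assumption; this is a simplified illustration, not Meta's reference safety stack.

```python
# Simplified guardrail wrapper: screen the user's request before it reaches
# the model and screen the model's reply before it is returned. The blocklist
# stands in for a real safety classifier; the checkpoint name is an assumed,
# gated example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # assumed checkpoint
)

BLOCKED_PHRASES = ("build a weapon", "steal credit card")  # placeholder policy


def is_allowed(text: str) -> bool:
    """Very rough stand-in for an input/output safety classifier."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)


def guarded_chat(user_message: str) -> str:
    if not is_allowed(user_message):
        return "Request declined by the input guardrail."
    result = generator(
        [{"role": "user", "content": user_message}],
        max_new_tokens=128,
    )
    reply = result[0]["generated_text"][-1]["content"]
    return reply if is_allowed(reply) else "Response withheld by the output guardrail."


print(guarded_chat("Give me three tips for writing clear summaries."))
```
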
| Responsible AI Considerations | |
|---|---|
| Transparency | Designed to be safe and flexible for user-deployed systems. |
| Accountability | Developers are accountable for ensuring safety in their deployments. |

| Input / Output | |
|---|---|
| Accepted Modalities | |
| Output Format | |

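Since the model is a text-in, text-out generator, a minimal usage sketch with the Hugging Face `transformers` text-generation pipeline is shown below. The instruct checkpoint name is an assumption (Llama weights are gated on the Hub); any Llama 3.2 instruct variant with a chat template would be used the same way.

```python
# Minimal text-in / text-out generation sketch. The checkpoint name is an
# assumed example; substitute whichever Llama 3.2 instruct variant you have
# access to.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # assumed, gated checkpoint
)

messages = [
    {"role": "system", "content": "You are a concise multilingual assistant."},
    {"role": "user", "content": "Summarize the benefits of on-device AI in one sentence, in German."},
]

result = generator(messages, max_new_tokens=64)
# For chat-style input, generated_text holds the full conversation,
# with the model's reply as the final message.
print(result[0]["generated_text"][-1]["content"])
```
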
| Release Notes | |
|---|---|
| Version | |
| Date | |
| Notes | Release of the Llama 3.2 multilingual large language models. |