| Model Type | |
| --- | --- |

| Use Cases | |
| --- | --- |
| Areas | |
| Applications | Instruction tuned models for assistant-like chat |
| Primary Use Cases | Natural language generation, multilingual dialogue interactions |
| Limitations | Out-of-the-box use only in English; potential for inaccurate or biased responses |
| Considerations | Developers should fine-tune based on specific needs. |

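The instruction tuned variants consume a conversation rendered into a single prompt string. The sketch below assumes the published Llama 3 header-token convention (`<|begin_of_text|>`, `<|start_header_id|>`, `<|end_header_id|>`, `<|eot_id|>`); in practice the model tokenizer's own chat template is authoritative, so treat this as illustrative only.

```python
# Sketch of a Llama-3-style chat prompt builder. The special-token names
# follow the published Llama 3 convention; the authoritative template
# lives in the model's tokenizer, so this is illustrative only.

def build_prompt(messages):
    """Render a list of {"role", "content"} dicts into one prompt string."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        parts.append(msg["content"])
        parts.append("<|eot_id|>")
    # Leave the assistant header open so the model continues from there.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt.startswith("<|begin_of_text|>"))  # True
```
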
| Additional Notes | |
| --- | --- |
| | 100% carbon emissions offset by Meta's sustainability program. |

| Supported Languages | |
| --- | --- |

| Training Details | |
| --- | --- |
| Data Sources | Publicly available online data |
| Data Volume | |
| Methodology | Supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) |
| Context Length | |
| Hardware Used | H100-80GB GPUs, with a cumulative 7.7M GPU hours |
| Model Architecture | Auto-regressive transformer architecture |

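"Auto-regressive" means the model emits one token at a time, each step conditioning on everything generated so far. A minimal greedy-decoding sketch, with a toy scoring function standing in for the transformer (a real model would return logits over its vocabulary):

```python
# Greedy autoregressive decoding loop. `toy_next_token` is a stand-in
# for a real transformer forward pass returning vocabulary logits.

def toy_next_token(context):
    """Deterministic stub: picks the next word from a tiny vocabulary."""
    vocab = ["the", "cat", "sat", "<eos>"]
    return vocab[len(context) % len(vocab)]

def generate(prompt_tokens, max_new_tokens=10, eos="<eos>"):
    """Append tokens one at a time, feeding the full context back in each step."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = toy_next_token(tokens)  # conditions on the entire context so far
        tokens.append(nxt)
        if nxt == eos:
            break
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', '<eos>']
```

The same loop shape underlies real decoding; sampling strategies (top-p, temperature) only change how the next token is chosen from the logits.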
| Safety Evaluation | |
| --- | --- |
| Methodologies | Red teaming exercises, adversarial evaluations |
| Risk Categories | CBRNE, Cyber Security, Child Safety |
| Ethical Considerations | Leverages best practices for safety and responsible deployment. |

| Responsible AI Considerations | |
| --- | --- |
| Fairness | Inclusive and open approach, aiming to serve diverse user needs and perspectives. |
| Accountability | Developers are responsible for end-user safety evaluations. |
| Mitigation Strategies | Tools like Meta Llama Guard 2 and Code Shield for layering safety measures. |

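Layered mitigation tools such as Llama Guard 2 (prompt/response classification) and Code Shield (insecure-code filtering) typically wrap the model as pre- and post-generation filters. A minimal sketch of that layering, using hypothetical stub classifiers in place of the real tools, which have their own APIs and policies:

```python
# Sketch of layering safety filters around a generator. Both classifier
# functions are hypothetical stubs; a real deployment would call tools
# like Llama Guard 2 on the prompt/response and Code Shield on code.

def is_unsafe_prompt(text):
    # Stub: a real prompt classifier would score against a safety taxonomy.
    return "UNSAFE_PROMPT" in text

def is_unsafe_response(text):
    # Stub: a real output filter would scan generated text or code.
    return "UNSAFE_OUTPUT" in text

REFUSAL = "I can't help with that."

def guarded_generate(prompt, model):
    """Run the model only if the prompt passes, then vet the response."""
    if is_unsafe_prompt(prompt):
        return REFUSAL
    response = model(prompt)
    if is_unsafe_response(response):
        return REFUSAL
    return response

echo_model = lambda p: "Echo: " + p
print(guarded_generate("Hello!", echo_model))  # Echo: Hello!
```

Running filters both before and after generation is what "layering" buys: a prompt that slips past the input check can still be caught on the way out.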
| Input / Output | |
| --- | --- |
| Input Format | |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | Fine-tune with language-specific data where appropriate. |

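Since out-of-the-box use is English-only, the performance tip above amounts to curating a language-specific corpus before fine-tuning. A minimal sketch of that filtering step, assuming each record carries a `lang` field; a real pipeline would use a language-identification model rather than pre-existing tags.

```python
# Filter a fine-tuning corpus down to a single target language.
# The `lang` field on each record is an assumption of this sketch;
# untagged data would need a language-identification step first.

def select_language(records, lang):
    """Keep only records tagged with the target language code."""
    return [r for r in records if r.get("lang") == lang]

corpus = [
    {"lang": "de", "text": "Guten Tag"},
    {"lang": "en", "text": "Good morning"},
    {"lang": "de", "text": "Wie geht's?"},
]
print(len(select_language(corpus, "de")))  # 2
```
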
| Release Notes | |
| --- | --- |
| Version | |
| Date | |
| Notes | Initial release of pre-trained and instruction tuned variants. |