| Model Type | |
| --- | --- |

| Use Cases | |
| --- | --- |
| Areas | |
| Applications | |
| Primary Use Cases | Natural language generation tasks |
| Limitations | Use in languages other than English is not covered extensively. |
| Considerations | Developers should ensure responsible use of the models. |

| Additional Notes | |
| --- | --- |
| | Tuned models are optimized for dialogue. Pretraining incurred a high carbon footprint, which was offset by Meta's sustainability program. The model was trained to capture relationships between text sequences so that it can predict the next item in a sequence safely and effectively. |

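The next-token objective described above can be made concrete with a short, illustrative decoding loop. The sketch below assumes the Hugging Face `transformers` library and uses a placeholder checkpoint name (not taken from this card); it greedily picks the most likely next token at each step, which is exactly the prediction task the model is trained on.

```python
# Minimal greedy next-token decoding loop.
# Assumptions: Hugging Face transformers is installed and the checkpoint name
# below is a placeholder for whichever causal LM checkpoint is actually used.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits           # scores over the vocabulary
        next_id = logits[:, -1, :].argmax(dim=-1)  # most likely next token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(-1)], dim=-1)

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```
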
| Supported Languages | |
| --- | --- |
| | English (primary language for intended use) |

| Training Details | |
| --- | --- |
| Data Sources | Publicly available online data |
| Data Volume | |
| Methodology | Pretrained on a mix of publicly available online data; fine-tuned using Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). |
| Context Length | |
| Hardware Used | NVIDIA A100-80GB GPUs (TDP of 350-400 W) |
| Model Architecture | Optimized transformer architecture |

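To illustrate what the SFT stage optimizes, the sketch below shows the standard loss-masking pattern used when supervised fine-tuning a causal language model: prompt tokens are assigned the ignore label (-100) so that only the response tokens contribute to the next-token prediction loss. The checkpoint name and the prompt/response pair are placeholder assumptions, not details from this card, and the prompt/response token boundary is approximate because tokenization can merge characters across it.

```python
# Illustrative single SFT training step with prompt-token loss masking.
# Assumptions: Hugging Face transformers, a placeholder checkpoint name,
# and a toy prompt/response pair.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "Summarize: The quick brown fox jumps over the lazy dog.\n"
response = "A fox jumps over a dog."

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids

# Mask the prompt with -100 so the loss is computed only on response tokens.
labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

outputs = model(input_ids=full_ids, labels=labels)
outputs.loss.backward()  # a real training loop would follow with an optimizer step
```

The RLHF stage that follows SFT is more involved (reward modeling plus policy optimization) and is not sketched here.
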
| Responsible AI Considerations | |
| --- | --- |
| Fairness | The model may produce inaccurate, biased, or otherwise objectionable outputs. |
| Transparency | Transparency measures are in place for users. |
| Accountability | Developers should perform safety testing tailored to their specific applications. |
| Mitigation Strategies | Meta recommends safety testing and tuning before deployment. |

| Input / Output | |
| --- | --- |
| Input Format | |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | The chat versions require specific prompt formatting, including the `INST` and `<<SYS>>` tags, the `BOS` and `EOS` tokens, and appropriate whitespace management. |

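As a rough illustration of those formatting requirements, the sketch below assembles a single-turn chat prompt by hand. It assumes the tag layout commonly used for Meta's dialogue-tuned models (`[INST] ... [/INST]` around the user turn, `<<SYS>> ... <</SYS>>` around the system prompt inside the first turn, with the tokenizer supplying the `BOS` token and the model emitting `EOS` when its turn is complete); the checkpoint name and system prompt are placeholders.

```python
# Illustrative single-turn chat prompt assembly.
# Assumptions: Hugging Face transformers, a placeholder chat checkpoint,
# and the [INST] / <<SYS>> tag layout described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "meta-llama/Llama-2-7b-chat-hf"  # placeholder chat checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

system_prompt = "You are a helpful, respectful assistant."  # placeholder system prompt
user_message = "Explain what a context window is in one sentence."

# strip() guards against stray whitespace, which the chat formatting is sensitive to.
prompt = (
    f"[INST] <<SYS>>\n{system_prompt.strip()}\n<</SYS>>\n\n"
    f"{user_message.strip()} [/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt")  # the tokenizer prepends BOS
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

Recent versions of `transformers` also provide `tokenizer.apply_chat_template`, which builds this string automatically when the checkpoint ships with a chat template, so hand-assembly as above is only needed when that helper is unavailable.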