| Model Type | |
| --- | --- |

| Use Cases | |
| --- | --- |
| Primary Use Cases | |
| Limitations | Use in languages other than English; generating objectionable or biased content |
| Considerations | Developers should ensure safety testing and tuning before deploying applications. |
| Additional Notes | Llama 2's potential outputs cannot be predicted; developers need to perform application-specific safety testing. |
| Training Details | |
| --- | --- |
| Data Sources | Publicly available online data |
| Data Volume | |
| Methodology | Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback (RLHF) |
| Context Length | |
| Hardware Used | |
| Model Architecture | Auto-regressive transformer |
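The "auto-regressive transformer" entry above means the model generates one token at a time, each conditioned on the full sequence so far. A minimal sketch of that generation loop, using a hypothetical stand-in `next_token` function rather than Llama 2 itself:

```python
# Toy illustration of auto-regressive generation: each new token is
# chosen conditioned on all previously generated tokens.
# next_token is a hypothetical stand-in, not the real model.

def next_token(context: list[str]) -> str:
    # Stand-in "model": a fixed lookup from the last token.
    continuation = {"The": "cat", "cat": "sat", "sat": "."}
    return continuation.get(context[-1], ".")

def generate(prompt: list[str], max_new_tokens: int = 3) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)   # conditioned on the full prefix
        tokens.append(tok)
        if tok == ".":             # treat "." as a stop token
            break
    return tokens

print(generate(["The"]))  # ['The', 'cat', 'sat', '.']
```

In a real deployment the lookup is replaced by a forward pass through the transformer, but the loop structure is the same.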
| Responsible AI Considerations | |
| --- | --- |
| Transparency | The model is fine-tuned to align with human preferences for safety and helpfulness. |
| Mitigation Strategies | Follow responsible use guidelines to prevent misuse. |
| Input / Output | |
| --- | --- |
| Input Format | Text prompts formatted with special tokens and `[INST]` tags. |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | Follow the expected prompt formatting and special tokens for optimal results. |
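The `[INST]` formatting noted in the table can be sketched as a small prompt-building helper. The template below follows the published Llama 2 chat convention (`<s>`, `[INST]`/`[/INST]`, `<<SYS>>`/`<</SYS>>`); the function name itself is illustrative:

```python
# Sketch of the Llama 2 chat prompt template for a single user turn.
# Special tokens delimit the instruction and optional system prompt.

B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a user message (and optional system prompt) in Llama 2 chat formatting."""
    content = user_message.strip()
    if system_prompt:
        content = f"{B_SYS}{system_prompt}{E_SYS}{content}"
    return f"<s>{B_INST} {content} {E_INST}"

print(build_prompt("Hello!"))
# <s>[INST] Hello! [/INST]
```

Deviating from this template (e.g. omitting the special tokens) tends to degrade output quality, which is what the "Performance Tips" row is warning about.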