| Model Type | text generation, instruction tuned |
|
| Use Cases | |
| --- | --- |
| Areas | commercial applications, research |
| Applications | assistant-like chat, natural language generation tasks |
| Primary Use Cases | instruction-tuned models for dialogue; pretrained models for adaptation to a variety of natural language generation tasks |
| Limitations | not intended for legal or medical advice |
| Considerations | developers should follow the Responsible Use Guide |
|
|
| Additional Notes | Model includes tools for responsible AI practices. |
|
| Supported Languages | English (full proficiency) |
|
| Training Details | |
| --- | --- |
| Data Sources | |
| Data Volume | |
| Methodology | |
| Context Length | |
| Hardware Used | H100-80GB GPUs, Meta's Research SuperCluster |
| Model Architecture | auto-regressive transformer |
|
|
| Safety Evaluation | |
| --- | --- |
| Methodologies | red teaming, adversarial evaluations |
| Risk Categories | CBRNE threats, cyber security, child safety |
| Ethical Considerations | iterative testing and expert consultations conducted |
|
|
| Responsible AI Considerations | |
| --- | --- |
| Fairness | Comprehensive steps were taken to ensure fairness. |
| Transparency | Open community collaboration and resources provided for transparency. |
| Accountability | Meta is responsible for the model's deployment. |
| Mitigation Strategies | safeguards such as Meta Llama Guard 2 and Code Shield |
|
|
| Input / Output | |
| --- | --- |
| Input Format | |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | Use appropriate hardware and settings for extended context. |
|
|
| Release Notes | |
| --- | --- |
| Version | |
| Notes | Optimized for use with the Hugging Face `transformers` library. |
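Since the release notes mention the `transformers` library, a minimal usage sketch may help. The checkpoint id below is a placeholder, not stated in this card; the pipeline calls are shown commented out because they require the released weights:

```python
# Sketch: using an instruction-tuned checkpoint with Hugging Face `transformers`.
# MODEL_ID is a placeholder -- substitute the actual released checkpoint name.
MODEL_ID = "<checkpoint-name>"  # hypothetical placeholder, not from this card

def build_chat(user_prompt: str) -> list[dict]:
    """Wrap a prompt in the chat-message format that the
    text-generation pipeline accepts for instruction-tuned models."""
    return [{"role": "user", "content": user_prompt}]

# With `transformers` installed and the weights available, usage would look like:
# from transformers import pipeline
# generator = pipeline("text-generation", model=MODEL_ID, device_map="auto")
# reply = generator(build_chat("Draft a short greeting."), max_new_tokens=128)
```

This is a sketch under the assumption that the model ships with a chat template; consult the checkpoint's own documentation for the exact prompt format.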
|
|
|