| Model Type | text-generation, instruction-tuned |
|
| Use Cases | |
| --- | --- |
| Areas | |
| Applications | assistant-like chat, natural language generation |
| Primary Use Cases | dialogue systems, AI chatbots |
| Limitations | Not suitable for illegal activities; use in languages not officially supported is prohibited |
| Considerations | Requires compliance with Meta's Acceptable Use Policy and License. |
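For the assistant-like chat use case above, a dialogue system typically flattens the conversation history into a single prompt before generation. The sketch below is purely illustrative: the `SYSTEM:`/`USER:`/`ASSISTANT:` tags are hypothetical placeholders, not the model's actual chat template, which this card does not specify.

```python
# Illustrative only: assemble a chat transcript into one prompt string.
# The role tags below are hypothetical, not the model's real template.

def build_prompt(system, turns):
    lines = [f"SYSTEM: {system}"]
    for role, text in turns:
        lines.append(f"{role.upper()}: {text}")
    lines.append("ASSISTANT:")  # the model completes from this point
    return "\n".join(lines)

prompt = build_prompt(
    "You are a helpful assistant.",
    [("user", "What is an LLM?")],
)
print(prompt)
```

In practice, the exact template matters: instruction-tuned models are sensitive to the delimiters used during fine-tuning, so deployments should use the template published with the model rather than an ad-hoc one like this.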
|
|
| Additional Notes | Designed for English-language applications. Future updates are planned to enhance model safety. |
|
| Supported Languages | |

| Training Details | |
| --- | --- |
| Data Sources | publicly available online data |
| Data Volume | |
| Methodology | auto-regressive language model; optimized transformer architecture; supervised fine-tuning (SFT); reinforcement learning from human feedback (RLHF) |
| Context Length | |
| Hardware Used | Meta's Research SuperCluster, third-party cloud compute |
| Model Architecture | optimized transformer architecture |
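"Auto-regressive" in the methodology row means the model generates text one token at a time, each prediction conditioned on everything generated so far. A minimal sketch of that loop, with a toy stand-in (`toy_next_token`, hypothetical) replacing the real transformer forward pass:

```python
# Minimal sketch of auto-regressive (greedy) generation: repeatedly
# predict the next token from the tokens produced so far, stopping at
# an end-of-sequence marker or a length budget.

def toy_next_token(context):
    # Hypothetical stand-in for a transformer forward pass: returns a
    # fixed continuation for illustration only.
    continuation = ["the", "model", "answers", "<eos>"]
    return continuation[min(len(context) - 1, len(continuation) - 1)]

def generate(prompt_tokens, max_new_tokens=8, eos="<eos>"):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = toy_next_token(tokens)  # greedy: take the single top prediction
        if nxt == eos:
            break
        tokens.append(nxt)
    return tokens

print(generate(["user:"]))  # → ['user:', 'the', 'model', 'answers']
```

Real decoders replace the greedy pick with sampling strategies (temperature, top-p), but the outer loop is the same: the sequence grows by one token per model call until `<eos>` or the token budget is reached.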
|
|
| Safety Evaluation | |
| --- | --- |
| Methodologies | red teaming, adversarial evaluations |
| Findings | minimal false refusals; high level of safety maintained through Purple Llama safeguards |
| Risk Categories | misinformation, bias, cybersecurity, child safety |
| Ethical Considerations | Residual risks and potential biases remain; responsible deployment is encouraged. |
|
|
| Responsible AI Considerations | |
| --- | --- |
| Fairness | Efforts to minimize bias and improve model safety. |
| Transparency | Model release and documentation are publicly available. |
| Accountability | Meta holds accountability for model safety and performance. |
| Mitigation Strategies | Safety techniques and feedback mechanisms implemented to reduce risk. |
|
|
| Input / Output | |
| --- | --- |
| Input Format | |
| Accepted Modalities | |
| Output Format | |
|
| Release Notes | |
| --- | --- |
| Version | |
| Date | |
| Notes | Initial release with improved performance and safety features. |
|
|
|