| Model Type | |
|---|---|

| Use Cases | |
|---|---|
| Areas: | |
| Applications: | Assistant-like chat, natural language generation tasks |
| Primary Use Cases: | English instruction-tuned chat models |
| Limitations: | Tested only in English; may produce inaccuracies or biases |
| Considerations: | Use with safety assessments tailored to the specific use case. |
| Additional Notes: | Inherits safety best practices from previous versions. |

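The assistant-like chat use case consumes a sequence of role-tagged messages. A minimal sketch of that message structure follows; the plain-text template below is an illustrative assumption, not the model's actual chat template (the tokenizer's chat template is authoritative):

```python
# Minimal sketch of an assistant-style chat exchange. The rendering
# below is an illustrative assumption, not the model's real template.

def build_prompt(messages):
    """Render role/content messages into a plain-text prompt."""
    lines = [f"{m['role']}: {m['content']}" for m in messages]
    lines.append("assistant:")  # cue the model to produce the reply
    return "\n".join(lines)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Draft a short greeting in English."},
]
print(build_prompt(messages))
```

The same role/content list shape is what chat-completion APIs and tokenizer chat templates typically accept, so only the rendering step would change in practice.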
| Supported Languages | |
|---|---|

| Training Details | |
|---|---|
| Data Sources: | Publicly available online data |
| Data Volume: | |
| Methodology: | Supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) |
| Context Length: | |
| Training Time: | |
| Hardware Used: | |
| Model Architecture: | Optimized transformer architecture |

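Of the two training stages, SFT minimizes next-token cross-entropy on instruction data, and RLHF then optimizes against a learned reward model. A toy sketch of the SFT objective, with illustrative probabilities rather than real training values:

```python
import math

# Toy sketch of the supervised fine-tuning (SFT) objective: the average
# negative log-likelihood the model assigns to each correct next token.
# The probabilities below are illustrative, not real training values.

def sft_loss(target_token_probs):
    """Mean next-token cross-entropy over one target sequence."""
    n = len(target_token_probs)
    return -sum(math.log(p) for p in target_token_probs) / n

# A model that is confident in the right tokens gets a low loss:
loss = sft_loss([0.9, 0.5, 0.8])  # ≈ 0.34
```

In real training the probabilities come from the model's softmax over the vocabulary at each position, but the objective being averaged is the same.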
| Safety Evaluation | |
|---|---|
| Methodologies: | Red teaming, adversarial evaluations |
| Findings: | |
| Risk Categories: | |
| Ethical Considerations: | Evaluated under Responsible AI guidelines. |

| Responsible AI Considerations | |
|---|---|
| Fairness: | Includes guidelines for fair operation and bias mitigation. |
| Transparency: | Includes detailed safety evaluation methodologies. |
| Accountability: | Meta is responsible for the model and its derivatives. |
| Mitigation Strategies: | Usage following Responsible AI guidelines. |

| Input / Output | |
|---|---|
| Input Format: | |
| Accepted Modalities: | |
| Output Format: | |
| Performance Tips: | Align with Responsible AI guidelines for best performance. |

| Release Notes | |
|---|---|
| Version: | |
| Date: | |
| Notes: | Released with 8k context length and grouped-query attention (GQA). |
| Version: | |
| Date: | |
| Notes: | Includes advanced optimizations and RLHF. |