| Model Type | pre-trained, instruction-tuned, large language model, text generation |
|
| Use Cases |
| Areas: | |
| Applications: | assistant-like chat, natural language generation (see the usage sketch below) |
| Primary Use Cases: | |
| Limitations: | Currently supports English only; not tested across other languages |
| Considerations: | Developers may fine-tune for languages beyond English, provided they comply with the license conditions. |
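
The card does not prescribe a serving stack; as an illustration of the assistant-like chat use case, here is a minimal sketch using the Hugging Face `transformers` library. The checkpoint name `meta-llama/Meta-Llama-3-8B-Instruct`, the dtype, and the generation settings are assumptions rather than details taken from this card, and downloading the weights requires accepting the Meta Llama 3 license.

```python
# Hypothetical chat usage via Hugging Face transformers; model ID and
# generation settings are illustrative assumptions, not taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assistant-like chat: the tokenizer's chat template turns the message list
# into the prompt format the instruction-tuned model expects.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what a model card is in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```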
|
|
| Supported Languages | English (high proficiency) |
|
| Training Details |
| Data Sources: | publicly available online data |
| Data Volume: | |
| Methodology: | supervised fine-tuning, reinforcement learning with human feedback (see the fine-tuning sketch below) |
| Context Length: | |
| Hardware Used: | |
| Model Architecture: | optimized transformer architecture |
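
The card lists supervised fine-tuning among the training methodologies, and the Use Cases section allows fine-tuning for other languages. Below is a minimal sketch of a single supervised fine-tuning (SFT) step with `transformers`; the base checkpoint name, example data, and hyperparameters are illustrative assumptions, and a real run would use a proper trainer, dataset, and distributed setup.

```python
# Minimal sketch of one supervised fine-tuning (SFT) step on a prompt/response
# pair. Checkpoint name, example data, and learning rate are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3-8B"  # assumed pre-trained base checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

prompt = "Translate to French: Good morning."
response = " Bonjour."

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids

# Standard SFT detail: compute the loss only on response tokens by masking
# prompt positions with -100 (ignored by the loss). Token boundaries are
# handled only approximately in this sketch.
labels = full_ids.clone()
labels[:, : prompt_ids.shape[-1]] = -100

loss = model(input_ids=full_ids, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```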
|
|
| Safety Evaluation |
| Methodologies: | red-teaming, adversarial evaluations |
| Findings: | reduced false refusals compared to Llama 2 |
| Risk Categories: | disinformation, security, child safety |
| Ethical Considerations: | Developers are advised to perform their own safety testing and to layer additional safety tools on top of the model (see the sketch below). |
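
The card advises additional safety tooling but does not name a specific tool. The sketch below shows the common pattern of screening both the input and the output around a generation call; the `moderate` check is a purely hypothetical placeholder for whatever classifier or policy check a deployment actually uses.

```python
# Sketch of a deployment-time safety gate around generation. Nothing here is
# prescribed by the model card; `moderate` stands in for a real safety tool.
REFUSAL = "Sorry, I can't help with that request."

def moderate(text: str) -> bool:
    """Hypothetical moderation check; returns True if `text` violates policy."""
    blocked_terms = ("how to build a weapon",)  # illustrative only
    return any(term in text.lower() for term in blocked_terms)

def safe_generate(generate_fn, user_message: str) -> str:
    # Screen the input, generate, then screen the output before returning it.
    if moderate(user_message):
        return REFUSAL
    reply = generate_fn(user_message)
    if moderate(reply):
        return REFUSAL
    return reply
```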
|
|
| Responsible AI Considerations |
| Fairness: | |
| Transparency: | Updates are accompanied by responsible use guides |
| Accountability: | Developers are responsible for deployment safety |
| Mitigation Strategies: | Safety mitigation techniques have been implemented |
|
|
| Input / Output |
| Input Format: | |
| Accepted Modalities: | |
| Output Format: | |
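
The input/output fields above are left blank in this card. Assuming plain text-in / text-out, as implied by the text-generation model type, a minimal completion sketch with an assumed base checkpoint looks like this.

```python
# Plain text-in / text-out completion with the pre-trained (base) variant.
# The checkpoint name is an assumption; the card leaves these fields blank.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```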
|
| Release Notes |
| Date: | |
| Notes: | Meta Llama 3 final release, with benchmarks and community guidelines. |
|
|
|