## Model Type

| Field | Value |
| --- | --- |
| Model Type | |
## Use Cases

| Field | Value |
| --- | --- |
| Areas | |
| Applications | |
| Primary Use Cases | Instruction-tuned models for assistant-like chat |
| Limitations | English only; use must comply with applicable policies |
## Supported Languages

| Field | Value |
| --- | --- |
| Supported Languages | |
## Training Details

| Field | Value |
| --- | --- |
| Data Sources | Publicly available online data, instruction datasets, and 10M human-annotated examples |
| Data Volume | |
| Methodology | Pretraining, fine-tuning, and reinforcement learning from human feedback (RLHF) |
| Context Length | |
| Hardware Used | |
| Model Architecture | Auto-regressive, optimized transformer architecture |
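The auto-regressive architecture noted above means each output token is chosen conditioned on all tokens generated so far. A minimal sketch of that decoding loop follows; `next_token_logits` is a hypothetical stand-in for a real transformer forward pass, and greedy selection is used for simplicity.

```python
def next_token_logits(tokens):
    """Hypothetical stand-in model: scores token (last + 1) mod 5 highest."""
    target = (tokens[-1] + 1) % 5
    return [1.0 if t == target else 0.0 for t in range(5)]

def generate(prompt_tokens, max_new_tokens):
    """Auto-regressive decoding: each step conditions on the full sequence."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)
        # Greedy decoding: append the highest-scoring next token.
        tokens.append(max(range(len(logits)), key=logits.__getitem__))
    return tokens

print(generate([0], 4))  # → [0, 1, 2, 3, 4]
```

A real model replaces the toy scoring function with a forward pass over the whole prefix, and sampling (temperature, top-p) usually replaces the greedy `max`.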
## Safety Evaluation

| Field | Value |
| --- | --- |
| Methodologies | Red teaming, adversarial evaluations, safety mitigations |
| Findings | Llama 3 includes enhanced measures to reduce residual risks |
| Risk Categories | Cybersecurity, child safety, general misuse |
## Responsible AI Considerations

| Field | Value |
| --- | --- |
| Mitigation Strategies | Implement safety tools; incorporate community feedback |
## Input / Output

| Field | Value |
| --- | --- |
| Input Format | |
| Accepted Modalities | |
| Output Format | |
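Assistant-like chat models typically accept a list of role/content messages that is flattened into a single prompt string before generation. The input format above is unspecified, so the template below is purely an illustrative assumption (the `<|role|>` delimiters are hypothetical, not the model's documented chat format):

```python
def render_prompt(messages):
    """Flatten role/content messages into one prompt string.

    The delimiter scheme here is a hypothetical example; a real model
    ships its own chat template, which should be used instead.
    """
    parts = [f"<|{msg['role']}|>\n{msg['content']}" for msg in messages]
    parts.append("<|assistant|>\n")  # trailing cue for the model to respond
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
]
print(render_prompt(messages))
```

Whatever the concrete template, the key point is that the structured conversation is serialized into plain text, and the model's reply is read back from the tokens it generates after the final assistant cue.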
## Release Notes

| Field | Value |
| --- | --- |
| Version | |
| Notes | Release of instruction-tuned models optimized for dialogue use cases |