| Model Type | auto-regressive, transformer, text generation |

| Use Cases |
| Areas: | research, commercial applications |
| Applications: | multilingual text generation, synthetic data generation |
| Primary Use Cases: | assistant-like chat, instructional generation |
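For the assistant-like chat use case above, a minimal sketch of the role-based message format commonly consumed by instruction-tuned models (for example via a tokenizer's chat template in the `transformers` library). The helper name and message contents are illustrative placeholders, not part of any official API:

```python
# Build the standard role-based messages list used for assistant-like chat.
# `build_chat` is a hypothetical helper; the {"role": ..., "content": ...}
# structure itself is the widely used chat-message convention.

def build_chat(system_prompt, user_turns, assistant_turns):
    """Interleave prior user/assistant turns after a system prompt."""
    messages = [{"role": "system", "content": system_prompt}]
    for user, assistant in zip(user_turns, assistant_turns):
        messages.append({"role": "user", "content": user})
        messages.append({"role": "assistant", "content": assistant})
    return messages

chat = build_chat(
    "You are a helpful multilingual assistant.",
    ["Translate 'hello' to French."],
    ["'Hello' in French is 'bonjour'."],
)
```

A list built this way can be passed to chat-templating utilities that turn it into the model's prompt format.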
| Limitations: | Use in languages beyond the officially supported set is out of scope. |

| Considerations: | Must comply with the Llama 3.1 Community License and Acceptable Use Policy. |

| Additional Notes | Safety features integrated and refined with community feedback. |
| Supported Languages | en (high), de (high), fr (high), it (high), pt (high), hi (high), es (high), th (high) |
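The supported-language list above, together with the out-of-scope limitation, can be enforced with a simple gate before generation. This is an illustrative helper, not part of any official API; the function name and ISO 639-1 codes are assumptions for the sketch:

```python
# Gate requests on the model card's list of officially supported languages.
SUPPORTED_LANGUAGES = {"en", "de", "fr", "it", "pt", "hi", "es", "th"}

def check_language(lang_code):
    """Return True if the ISO 639-1 code is officially supported;
    other languages fall under the card's out-of-scope limitation."""
    return lang_code.lower() in SUPPORTED_LANGUAGES

check_language("fr")  # supported
check_language("ja")  # out of scope without further fine-tuning
```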
| Training Details |
| Data Sources: | publicly available online data |
| Data Volume: | |
| Methodology: | supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) |
| Context Length: | |
| Training Time: | |
| Hardware Used: | Meta's custom GPU cluster |
| Model Architecture: | |
| Safety Evaluation |
| Methodologies: | adversarial testing, red teaming |
| Risk Categories: | misinformation, bias, child safety, cyber attack enablement |
| Ethical Considerations: | Emphasizes openness, inclusivity, and helpfulness. |
| Responsible AI Considerations |
| Fairness: | Addressed through careful data selection and fine-tuning methodology. |
| Transparency: | Documented in model reports and guides. |
| Accountability: | Developers using Llama 3.1 are responsible for their deployments of the model. |
| Mitigation Strategies: | Incorporates Prompt Guard and Llama Guard 3. |
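The mitigation strategies above amount to placing a safety classifier in front of generation. A hypothetical sketch of that control flow, in the spirit of Prompt Guard / Llama Guard 3; `classify_prompt` is NOT a real API, just a trivial keyword stub standing in for a safety model:

```python
# Stub classifier: a real deployment would call a safety model
# (e.g. a Prompt Guard / Llama Guard 3 endpoint) here instead.
def classify_prompt(text):
    blocked_terms = {"malware", "exploit"}
    return "unsafe" if any(t in text.lower() for t in blocked_terms) else "safe"

def guarded_generate(prompt, generate):
    """Run `generate` only when the prompt passes the safety check."""
    if classify_prompt(prompt) != "safe":
        return "Request declined by safety filter."
    return generate(prompt)

reply = guarded_generate(
    "Write a haiku about spring.",
    lambda p: f"[model output for: {p}]",
)
```

The same pattern can also wrap the model's *output* with a second classifier pass before returning it to the user.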
| Input Output |
| Input Format: | |
| Accepted Modalities: | |
| Output Format: | multilingual text and code |
| Performance Tips: | Fine-tuning is recommended for languages beyond the eight officially supported. |
| Release Notes |
| Version: | |
| Date: | |
| Notes: | Introduced a longer context window and expanded multilingual support. |
|
|