| Model Type | |
| Use Cases |
| Areas: | commercial applications, research |
|
| Applications: | chatbots, text generation, sentiment analysis, multilingual assistance |
|
| Primary Use Cases: | assistant-like chat, natural language generation tasks |

| Limitations: | Use in languages beyond those explicitly referenced as supported is out of scope without additional fine-tuning. |

| Considerations: | Developers may fine-tune the models for unsupported languages, provided they ensure safe and responsible use. |
|
|
| Additional Notes | Llama 3.1 models are not designed to be deployed in isolation and require additional safety guardrails when integrated into AI systems. |
|
| Supported Languages | English (high), German (high), French (high), Italian (high), Portuguese (high), Hindi (high), Spanish (high), Thai (high) |
|
| Training Details |
| Data Sources: | publicly available online data |
|
| Data Volume: | |
| Context Length: | |
| Model Architecture: | Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. |
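The auto-regressive decoding mentioned above can be sketched as follows. Note this is an illustrative toy: the vocabulary, scoring table, and `toy_logits` function are invented stand-ins, not part of Llama 3.1; a real checkpoint would supply the next-token logits instead.

```python
# Toy sketch of auto-regressive (next-token) generation. At each step the
# model conditions only on the tokens generated so far.

VOCAB = ["<eos>", "hello", "world", "!"]

def toy_logits(tokens: list[str]) -> list[float]:
    """Hypothetical next-token scores keyed on the most recent token."""
    last = tokens[-1] if tokens else ""
    table = {
        "": [0.0, 5.0, 1.0, 0.0],       # start -> "hello"
        "hello": [0.0, 0.0, 5.0, 1.0],  # "hello" -> "world"
        "world": [1.0, 0.0, 0.0, 5.0],  # "world" -> "!"
        "!": [5.0, 0.0, 0.0, 0.0],      # "!" -> stop
    }
    return table[last]

def generate(max_new_tokens: int = 10) -> list[str]:
    tokens: list[str] = []
    for _ in range(max_new_tokens):
        logits = toy_logits(tokens)
        # Greedy decoding: pick the highest-scoring next token.
        next_token = VOCAB[max(range(len(VOCAB)), key=logits.__getitem__)]
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return tokens

print(generate())  # ['hello', 'world', '!']
```

A production decoder would add sampling (temperature, top-p) rather than always taking the argmax, but the conditioning structure is the same.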
|
|
| Safety Evaluation |
| Methodologies: | fine-tuning, adversarial testing, red teaming, multi-faceted data collection |

| Findings: | Model refusals to benign prompts, as well as refusal tone, have been an area of focus. Adversarial prompts and comprehensive safety-data responses have been incorporated. |

| Risk Categories: | CBRNE (chemical, biological, radiological, nuclear, and explosive materials) helpfulness, child safety, cyber attack enablement |

| Ethical Considerations: | Llama 3.1 addresses users and their needs without imposing unnecessary judgment or normativity, focusing on the values of free thought and expression. |
|
|
| Responsible AI Considerations |
| Fairness: | The model is designed to be accessible to people across different backgrounds and experiences. |

| Transparency: | Includes transparency tools for safety and content evaluations. |

| Accountability: | Llama models should be part of an overall AI system with additional safety guardrails deployed by developers. |

| Mitigation Strategies: | Strategies include a three-pronged approach to managing trust & safety risks, developer guidance, and community engagement. |
|
|
| Input / Output |
| Input Format: | ChatML prompt template or Alpaca prompt template |
|
| Accepted Modalities: | |
| Output Format: | |
| Performance Tips: | Use the model's expected prompt template (e.g., ChatML or Alpaca) for better performance. |
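As a concrete example of the tip above, a minimal helper that wraps a request in the Alpaca-style template listed under Input Format. The exact wording follows the widely used Alpaca template; verify it against whatever template your serving stack actually expects.

```python
# Sketch: wrap a user instruction in the Alpaca-style prompt template.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Format a single instruction with the Alpaca prompt template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Translate 'good morning' into French.")
print(prompt)
```

The model's completion is then generated after the trailing `### Response:` marker; sending the same request without the template typically degrades output quality for template-tuned checkpoints.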
|
|
| Release Notes |
| Version: | |
| Date: | |
| Notes: | Introduces new capabilities, including a longer context window and multilingual inputs. |
|
|
|