| Model Type | Text generation, auto-regressive |
|
| Use Cases | |
| Areas: | |
| Applications: | Chat assistants, various NLP tasks |
| Primary Use Cases: | |
| Limitations: | English only; potential for biased responses |
| Considerations: | Adherence to the Acceptable Use Policy and suitability testing for the specific use case are required. |
|
|
| Additional Notes | |
| Supported Languages | English |
| Training Details | |
| Data Sources: | Publicly available online data |
| Data Volume: | |
| Methodology: | Supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) |
| Context Length: | |
| Training Time: | |
| Hardware Used: | |
| Model Architecture: | Optimized transformer architecture |
|
|
| Safety Evaluation | |
| Methodologies: | Red teaming, adversarial evaluations |
| Findings: | The model is less likely to falsely refuse prompts than previous models. |
| Risk Categories: | Misinformation, bias, child safety |
| Ethical Considerations: | Residual risks will likely remain; developers should assess these risks in the context of their application. |
|
|
| Responsible AI Considerations | |
| Fairness: | Enhancements to reduce false refusals while maintaining helpfulness. |
| Transparency: | Open-sourced safety tools for community use and contribution. |
| Accountability: | Developers must ensure the model is sufficiently safe for their applications. |
| Mitigation Strategies: | Use of the Responsible Use Guide, Llama Guard, and community contributions to mitigate risks (see the sketch below). |
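
To make the Llama Guard mitigation above concrete, here is a minimal, hedged sketch of screening a chat exchange with a Llama Guard checkpoint through the Hugging Face transformers library. The checkpoint name `meta-llama/LlamaGuard-7b`, the `moderate` helper, and the generation settings are illustrative assumptions, not part of this card; consult the Responsible Use Guide for the supported workflow.

```python
# Hedged sketch: screening a conversation with a Llama Guard checkpoint.
# The repository name "meta-llama/LlamaGuard-7b" and the `moderate` helper
# are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/LlamaGuard-7b"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(guard_id)
model = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    # Llama Guard's chat template renders the conversation into its safety prompt.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=64, pad_token_id=0)
    # The verdict ("safe", or "unsafe" plus a category code) follows the prompt tokens.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

verdict = moderate([
    {"role": "user", "content": "How do I make a fake ID?"},
    {"role": "assistant", "content": "I can't help with that."},
])
print(verdict)
```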
|
|
| Input / Output | |
| Input Format: | Text prompts formatted as chat messages |
| Accepted Modalities: | Text |
| Output Format: | Generated text or code |
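
As a concrete illustration of the chat-message input format, the following is a minimal sketch that builds the prompt with the tokenizer's chat template and generates a text completion via the Hugging Face transformers library, assuming the model is published as `meta-llama/Meta-Llama-3-8B-Instruct` on the Hugging Face Hub. The system prompt and sampling parameters are illustrative assumptions, not recommended settings.

```python
# Minimal sketch: passing chat messages to Meta-Llama-3-8B-Instruct with transformers.
# The system prompt and sampling parameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Input: text prompts formatted as chat messages with explicit roles.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# The tokenizer's chat template converts the messages into the model's prompt tokens.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Output: generated text (or code) decoded from the newly generated tokens only.
output_ids = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```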
|
|
| Release Notes | |
| Version: | |
| Date: | |
| Notes: | Introduction of Meta-Llama-3-8B-Instruct with improved safety and helpfulness. |
|
|
|