| Model Type | |
| --- | --- |

| Use Cases | |
| --- | --- |
| Areas | |
| Applications | |
| Primary Use Cases | Dialogue, text generation |
| Limitations | Use in languages other than English; any use prohibited by the license |
| Considerations | Developers may fine-tune for languages beyond English, provided they stay within the license and policy constraints. |

| Additional Notes | Optimized for efficiency with 8-bit quantization via bitsandbytes (see the loading sketch below). |
| --- | --- |

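A minimal loading sketch for the 8-bit quantization noted above, assuming the checkpoint is used through 🤗 Transformers with `bitsandbytes` and `accelerate` installed. The repository ID is a placeholder, not the actual model name; if the published weights are already stored in 8-bit, the explicit `BitsAndBytesConfig` may be unnecessary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder repository ID -- substitute the real model repo.
model_id = "org-name/chat-model-8bit"

# Ask bitsandbytes to load the linear-layer weights in 8-bit.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # 8-bit weights via bitsandbytes
    device_map="auto",               # place layers on available GPU(s)/CPU
)
```
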
| Supported Languages | |
| --- | --- |

| Training Details | |
| --- | --- |
| Data Sources | Publicly available online data |
| Data Volume | |
| Methodology | Pretrained, then instruction-tuned with supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) |
| Context Length | |
| Training Time | |
| Hardware Used | |
| Model Architecture | Auto-regressive transformer |

| Safety Evaluation | |
| --- | --- |
| Methodologies | Red teaming, adversarial evaluations |
| Findings | Conducted extensive risk assessments and implemented mitigation techniques |
| Risk Categories | Misinformation, bias, child safety, cybersecurity |
| Ethical Considerations | Emphasis placed on responsible AI development |

| Responsible AI Considerations | |
| --- | --- |
| Fairness | Aimed at inclusivity and helpfulness without unnecessary judgment |
| Transparency | Open approach to AI intended for a wide range of applications |
| Accountability | Developers are advised to incorporate additional safety tools |
| Mitigation Strategies | Purple Llama tools, including Llama Guard, as safeguards (see the sketch below) |

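The Llama Guard safeguard mentioned above can be used as an input/output filter in front of the chat model. A hedged sketch, assuming access to the gated `meta-llama/LlamaGuard-7b` checkpoint on Hugging Face and enough GPU memory; the prompt and decoding settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/LlamaGuard-7b"  # gated repo; access must be requested
guard_tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard_model = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.float16, device_map="auto"
)

def moderate(chat):
    # Llama Guard ships a chat template that turns the conversation into
    # its safety-classification prompt.
    input_ids = guard_tokenizer.apply_chat_template(chat, return_tensors="pt").to(guard_model.device)
    output = guard_model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    # The generated continuation starts with "safe" or "unsafe" (plus category codes).
    return guard_tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

print(moderate([{"role": "user", "content": "How do I make a fruit salad?"}]))
```
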
| Input / Output | |
| --- | --- |
| Input Format | |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | Structure prompts in the model's expected dialogue format for best results (see the example below) |

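Since the primary use case is dialogue, prompt structuring usually means rendering the conversation with the model's chat template rather than passing raw text. A sketch, assuming `tokenizer` and `model` are loaded as in the 8-bit example above and that the tokenizer ships a chat template; the message and sampling settings are illustrative.

```python
messages = [
    {"role": "user", "content": "Explain what 8-bit quantization trades off."},
]

# apply_chat_template renders the conversation into the prompt format the
# model was instruction-tuned on, which matters for answer quality.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
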
| Release Notes | |
| --- | --- |
| Version | |
| Date | |
| Notes | Initial release of the 8-bit quantized version |