| Model Type | Multilingual, Generative, Instruction-tuned |
|
| Use Cases | |
| Areas: | |
| Applications: | Agentic retrieval, summarization, knowledge retrieval, prompt rewriting |
| Primary Use Cases: | Multilingual dialogue and assistant-style tasks (see the sketch below) |
| Considerations: | Deployments, including those in additional languages, must adhere to safety guidelines and responsible-use principles. |
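
As a concrete illustration of the primary use case above, here is a minimal sketch of a multilingual dialogue turn using the Hugging Face `transformers` pipeline API. The checkpoint name `meta-llama/Llama-3.2-3B-Instruct`, the generation settings, and the German prompt are illustrative assumptions, not specifications from this card.

```python
# A minimal sketch of the multilingual dialogue use case via the
# Hugging Face `transformers` pipeline API. The checkpoint name below
# is an assumption; substitute the Llama 3.2 instruct variant you use.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",  # assumed checkpoint name
    torch_dtype="auto",
    device_map="auto",
)

# Chat-style input: a list of role/content messages. The user turn is in
# German to exercise one of the officially supported languages.
messages = [
    {"role": "system", "content": "You are a concise multilingual assistant."},
    {"role": "user", "content": "Fasse in einem Satz zusammen, was ein Sprachmodell ist."},
]

result = generator(messages, max_new_tokens=128)
# The pipeline returns the conversation with the assistant reply appended.
print(result[0]["generated_text"][-1]["content"])
```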
|
|
| Additional Notes | This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. |
|
| Supported Languages | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai (all officially supported) |
|
| Training Details | |
| Data Sources: | A new mix of publicly available online data |
| Data Volume: | |
| Context Length: | |
| Hardware Used: | Meta's custom-built GPU cluster |
| Model Architecture: | Auto-regressive language model with an optimized transformer architecture (illustrated below) |
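
The "auto-regressive" label means the model predicts one next token at a time from the tokens seen so far. Below is a hedged sketch of a single prediction step with the `transformers` `AutoModelForCausalLM` API; the checkpoint name and example prompt are assumptions for illustration.

```python
# One auto-regressive step: map a token prefix to a distribution over the
# next token. The checkpoint name is an assumption; any Llama 3.2 variant
# exposes the same causal-LM interface.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# The logits at the last position score every candidate next token;
# greedy decoding takes the argmax and appends it to the prefix.
next_token_id = logits[0, -1].argmax()
print(tokenizer.decode(next_token_id))
```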
|
|
| Responsible AI Considerations | |
| Mitigation Strategies: | Llama 3.2 was developed following the best practices outlined in Meta's Responsible Use Guide. |
|
|
| Input/Output | |
| Input Format: | |
| Accepted Modalities: | |
| Output Format: | |
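
The input/output fields above are blank in this card; for the instruct variants, chat-style input is conventionally a list of role-tagged messages rendered through the tokenizer's chat template. The sketch below assumes the tokenizer for the chosen checkpoint ships with such a template (as the Llama instruct releases do); the checkpoint name and messages are illustrative.

```python
# Sketch of the chat-style input format: role-tagged messages are rendered
# into the single text string the model actually consumes.
from transformers import AutoTokenizer

# Assumed checkpoint name; any instruct variant with a chat template works.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Rewrite this prompt to be more specific: 'tell me about dogs'."},
]

# add_generation_prompt=True appends the header that cues the assistant's
# reply; tokenize=False returns the rendered string instead of token IDs.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # text in, text out: the model returns a plain-text completion
```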
|