| Model Type | text-generation, multilingual |
| Use Cases | |
| --- | --- |
| Areas | |
| Applications | Assistant-like chat, knowledge retrieval, summarization, mobile AI-powered writing assistants, query and prompt rewriting, natural language generation tasks |
| Primary Use Cases | Multilingual dialogue use cases |
| Limitations | Use in any manner that violates applicable laws or regulations |
| Additional Notes | Llama 3.2 models can be fine-tuned for languages beyond the eight primary supported languages. |
| Supported Languages | en (English), de (German), fr (French), it (Italian), pt (Portuguese), hi (Hindi), es (Spanish), th (Thai) |
| Training Details | |
| --- | --- |
| Data Sources | A new mix of publicly available online data |
| Data Volume | |
| Methodology | Supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) |
| Context Length | |
| Training Time | |
| Hardware Used | H100-80GB GPUs (700 W TDP) |
| Model Architecture | Auto-regressive language model with an optimized transformer architecture |
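An auto-regressive architecture predicts each output token conditioned on all tokens generated so far. The decoding loop can be sketched as below; `toy_next_token` is a hypothetical stand-in for the model's next-token prediction, not the real network:

```python
def toy_next_token(tokens):
    """Hypothetical stand-in for the model: a real transformer would
    score the full vocabulary given the whole context."""
    continuations = {"Hello": ",", ",": " world", " world": "!", "!": "<eos>"}
    return continuations.get(tokens[-1], "<eos>")

def generate(prompt_tokens, max_new_tokens=8, eos="<eos>"):
    """Greedy auto-regressive decoding: append one token per step,
    feeding the growing sequence back in as context."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_tok = toy_next_token(tokens)
        if next_tok == eos:
            break
        tokens.append(next_tok)
    return "".join(tokens)

print(generate(["Hello"]))  # "Hello, world!"
```

In practice sampling (temperature, top-p) often replaces the greedy `argmax` step shown here, but the token-by-token structure is the same.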
|
|
| Input / Output | |
| --- | --- |
| Input Format | |
| Accepted Modalities | |
| Output Format | Multilingual text and code |
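Instruct-tuned dialogue models are usually fed a structured list of chat messages rather than raw text. A minimal sketch, assuming the widely used role/content message format (the helper name and fields are illustrative, not taken from this card):

```python
def build_dialogue(system_prompt, turns):
    """Assemble a chat history in the common role/content format.
    `turns` alternates user and assistant utterances, user first."""
    messages = [{"role": "system", "content": system_prompt}]
    for i, text in enumerate(turns):
        role = "user" if i % 2 == 0 else "assistant"
        messages.append({"role": role, "content": text})
    return messages

# Multilingual example: the system prompt steers the reply language.
msgs = build_dialogue(
    "Answer in the user's language.",
    ["Bonjour, peux-tu résumer ce texte ?"],
)
```

A chat template would then serialize this list into the model's prompt string before tokenization.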
|
|
| Release Notes | |
| --- | --- |
| Version | |
| Date | |
| Notes | Introduced new multilingual capabilities and instruction tuning for agentic and summarization tasks. |
|
|
|