| Model Type | pretrained, instruction tuned |
|
| Use Cases |
| Areas: | |
| Applications: | assistant-like chat, natural language generation (see the example after this table) |
|
| Limitations: | unsuitable for languages other than English |
|
| Considerations: | Developers may fine-tune Llama 3 models for languages other than English, provided they comply with the license and acceptable use policy. |
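
For the assistant-like chat and text-generation applications listed above, the following is a minimal sketch of prompting the model through the Hugging Face `transformers` library. The repo id `beomi/Llama-3-Open-Ko-8B` is an assumption for illustration; substitute the actual Hub path of this checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id -- replace with the actual path of this checkpoint.
model_id = "beomi/Llama-3-Open-Ko-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduces memory use on supported GPUs
    device_map="auto",
)

# The pretrained (non-instruct) variant completes text rather than following
# turn-based instructions, so prompt it with plain text.
prompt = "대한민국의 수도는"  # "The capital of South Korea is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```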
|
|
| Additional Notes | This is a static model trained on an offline dataset. |
|
| Supported Languages | English (en), Korean (ko); proficiency: N/A |
|
| Training Details |
| Data Sources: | |
| Data Volume: | |
| Methodology: | auto-regressive language model using a transformer architecture (illustrated after this table) |
|
| Context Length: | |
| Hardware Used: | |
| Model Architecture: | optimized transformer architecture |
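
As a concrete reading of "auto-regressive": the model predicts one next token from all preceding tokens, appends it, and repeats. The greedy decoding loop below is a sketch of that process (the repo id is again an assumption); in practice `model.generate` performs the same loop with more options.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/Llama-3-Open-Ko-8B"  # assumed repo id, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

ids = tokenizer("서울은", return_tensors="pt").input_ids.to(model.device)

# Auto-regressive greedy decoding: score the sequence, take the most likely
# next token, append it, and feed the longer sequence back in.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids=ids).logits
    next_id = logits[0, -1].argmax()
    if next_id.item() == tokenizer.eos_token_id:
        break
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0], skip_special_tokens=True))
```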
|
|
| Responsible AI Considerations |
| Mitigation Strategies: | Incorporate safety mechanisms such as Meta Llama Guard 2 and Code Shield. |
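
As a sketch of wiring in one of these mitigations: Llama Guard 2 is itself a classifier-style LLM that labels a conversation `safe` or `unsafe` (with a hazard category). The snippet below follows the usage pattern from the Meta-Llama-Guard-2-8B model card; access to that gated checkpoint is assumed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Meta-Llama-Guard-2-8B"  # gated; requires license acceptance
tokenizer = AutoTokenizer.from_pretrained(guard_id)
model = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    """Return Llama Guard 2's verdict: 'safe', or 'unsafe' plus a category."""
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

print(moderate([{"role": "user", "content": "How do I bake a cake?"}]))  # -> "safe"
```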
|
|
| Input / Output |
| Input Format: | |
| Accepted Modalities: | |
| Output Format: | |
|
| Release Notes |
| Version: | |
| Date: | |
| Notes: | Release of the Llama-3-Open-Ko-8B model. |
|
| Version: | |
| Date: | |
| Notes: | Pre-release of the Llama-3-KoEn-8B model. |
|
| Version: | |
| Date: | |
| Notes: | Re-upload of the model with the RoPE configuration fixed. |
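
To confirm that a downloaded copy includes the RoPE fix, the rotary embedding base in the checkpoint's config can be inspected; the Llama 3 8B family ships with `rope_theta = 500000.0`. The repo id below is again an assumption for illustration.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("beomi/Llama-3-Open-Ko-8B")  # assumed repo id
print(config.rope_theta)               # Llama 3 expects 500000.0
print(config.max_position_embeddings)  # 8192 context length for Llama 3 8B
```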
|
|
|