| Model Type | text generation, conversation, assistant |
|
| Use Cases |
| Areas: | Research on low-resource language modeling |
| Applications: | Scientific experiments, fine-tuning for deployment |
| Primary Use Cases: | Understanding the challenges of developing language models for low-resource languages |
| Limitations: | Not intended for deployment; not suitable for human-facing interactions; not suitable for translation or for generating text in other languages |
| Considerations: | Users are warned of risks, biases, and the need for human moderation. |
|
|
| Additional Notes | Use caution when deploying: the model inherits the biases and behaviors of general language models. |
|
| Supported Languages | Brazilian Portuguese (high) |
|
| Training Details |
| Data Sources: | nicholasKluge/instruct-aira-dataset-v2 |
| Data Volume: | |
| Methodology: | |
| Hardware Used: | |
| Model Architecture: | Quantized version using AutoAWQ |
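The card states only that this is a quantized version produced with AutoAWQ. As a hedged sketch, AWQ-quantized checkpoints can typically be loaded through the standard Hugging Face `transformers` auto classes; the repository id in the usage comment is a placeholder, not a verified Hub id.

```python
# Sketch: loading an AWQ-quantized checkpoint via Hugging Face transformers,
# which can deserialize AutoAWQ-quantized weights from a Hub repository.

def load_quantized(repo_id: str):
    """Return (tokenizer, model); downloads weights on first call."""
    # Imports kept local so this sketch can be defined without the
    # libraries installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    # device_map="auto" additionally requires the `accelerate` package.
    model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
    return tokenizer, model

# Usage (downloads weights; the repo id below is a placeholder):
#   tokenizer, model = load_quantized("nicholasKluge/TeenyTinyLlama-460m-Chat")
#   ids = tokenizer("Olá, tudo bem?", return_tensors="pt").to(model.device)
#   print(tokenizer.decode(model.generate(**ids, max_new_tokens=50)[0]))
```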
|
|
| Responsible AI Considerations |
| Fairness: | The model can produce toxic content, i.e., text that is harmful, offensive, or detrimental to individuals, groups, or communities. |
| Transparency: | Users are encouraged to perform their own risk analysis before using these models. |
| Accountability: | The developers of TeenyTinyLlama |
| Mitigation Strategies: | No specific mitigation strategies have been outlined. |
|
|
| Input Output |
| Input Format: | Special tokens demarcate user queries and model responses. |
| Accepted Modalities: | text |
| Output Format: | Generates text in response to queries. |
| Performance Tips: | Ensure human moderation in interactive applications. |
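The card says special tokens demarcate user queries and model responses but does not list them. The following minimal sketch shows how such a prompt is typically assembled; the token strings (`<instruction>`, `</instruction>`, `</s>`) are assumed placeholders, so read the real ones from the tokenizer's special-tokens map before use.

```python
from typing import Sequence, Tuple

# Placeholder token names -- NOT taken from the model card; substitute the
# actual special tokens defined by the tokenizer.
USER_OPEN = "<instruction>"    # assumed: opens a user query
USER_CLOSE = "</instruction>"  # assumed: closes a user query
EOS = "</s>"                   # assumed: end-of-sequence after a model reply

def build_prompt(user_query: str,
                 history: Sequence[Tuple[str, str]] = ()) -> str:
    """Wrap prior (query, reply) turns and the new query in the tokens."""
    parts = []
    for query, reply in history:
        parts.append(f"{USER_OPEN}{query}{USER_CLOSE}{reply}{EOS}")
    parts.append(f"{USER_OPEN}{user_query}{USER_CLOSE}")
    return "".join(parts)

print(build_prompt("Qual é a capital do Brasil?"))
# -> <instruction>Qual é a capital do Brasil?</instruction>
```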
|
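The human-moderation tip above can be made concrete with a small sketch: generated replies are queued for a human reviewer instead of being shown to users directly. The queue class and its approve/reject flow are illustrative assumptions, not part of the model's tooling.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    """Holds model replies for human review before they reach users."""
    pending: list = field(default_factory=list)   # awaiting review
    approved: list = field(default_factory=list)  # cleared for display

    def submit(self, reply: str) -> int:
        """Queue a model reply; returns a ticket id for the reviewer."""
        self.pending.append(reply)
        return len(self.pending) - 1

    def review(self, ticket: int, ok: bool) -> None:
        """Apply the human decision: release the reply or drop it."""
        reply = self.pending[ticket]
        if ok:
            self.approved.append(reply)

queue = ModerationQueue()
ticket = queue.submit("Resposta gerada pelo modelo.")
queue.review(ticket, ok=True)
print(queue.approved)  # ['Resposta gerada pelo modelo.']
```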
|