| Model Type | Question-Answer, Token-Classification, Sequence-Classification, Text Generation Inference |
| --- | --- |
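
As a rough illustration of these task types, the snippet below loads a checkpoint with the Hugging Face `transformers` pipeline API. The repository id `your-org/your-model` is a placeholder rather than the actual model name, and the non-generation tasks assume the checkpoint ships the corresponding task heads.

```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual checkpoint name.
MODEL_ID = "your-org/your-model"

# Text Generation Inference: the primary way to query the model.
generator = pipeline("text-generation", model=MODEL_ID)
result = generator("Describe how to purify water in the field.", max_new_tokens=128)
print(result[0]["generated_text"])

# The other listed task types map onto standard pipeline names, provided the
# checkpoint exposes the matching heads: "question-answering",
# "token-classification", and "text-classification".
```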
| Use Cases | |
| --- | --- |
| Areas | research, commercial applications |
| Applications | role play, medical resources, technological development, historical document storage |
| Primary Use Cases | constructing shelters, developing technology, medical diagnosis and reporting, historical data retrieval |

| Additional Notes | The model is trained for multi-task operation, using Chain-of-Thought reasoning, agent generation, Markdown with Mermaid diagrams, and internal preprocessing with RAG systems. |
| --- | --- |
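
These behaviours are prompt-level rather than a fixed API. The sketch below shows one hypothetical way to combine them: retrieve supporting passages (the RAG step), ask for step-by-step reasoning (Chain-of-Thought), and request a Mermaid diagram rendered in Markdown. The `retrieve` helper and the prompt wording are illustrative assumptions, not part of the model card.

```python
def retrieve(query: str, top_k: int = 3) -> list[str]:
    # Hypothetical retrieval step; in practice this would query a vector store
    # or search index as part of the RAG preprocessing.
    return ["<retrieved passage 1>", "<retrieved passage 2>", "<retrieved passage 3>"][:top_k]


def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))  # RAG: prepend retrieved context
    return (
        "Use the context below to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n\n"
        "Think step by step before giving the final answer.\n"  # Chain-of-Thought cue
        "Summarise the process as a Mermaid flowchart inside a fenced code "
        "block labelled mermaid, using Markdown for the rest of the answer."
    )


print(build_prompt("How would you set up a field medical station?"))
```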
| Supported Languages | en (full), sw (full), ig (full), zu (full), ca (full), es (full), pt (full), ha (full) |
| --- | --- |

| Training Details | |
| --- | --- |
| Data Sources | |
| Methodology | Chain-of-Thought, Graph-of-Thoughts, Tree-of-Thoughts, dual-agent response generation and agent ranking (see the sketch after this table), function calling, self-guiding methods |
| Context Length | 32k tokens (see Release Notes) |
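
The card does not spell these methods out further; as a sketch of what dual-agent response generation with agent ranking can look like at inference time, the code below samples two candidate answers under different assumed agent roles and keeps the one a simple scorer prefers. The role prompts, the length-based scorer, and the `your-org/your-model` id are all illustrative assumptions.

```python
from transformers import pipeline

MODEL_ID = "your-org/your-model"  # placeholder checkpoint id
generator = pipeline("text-generation", model=MODEL_ID)

# Two assumed agent personas; the roles actually used are not published.
ROLES = [
    "You are a careful analyst. Reason through the problem step by step.",
    "You are a concise domain expert. Answer directly with key facts.",
]


def score(answer: str) -> float:
    # Toy ranking heuristic (longer = more detailed); a real ranker could be a
    # reward model or a judge prompt scored by the same model.
    return float(len(answer.split()))


def dual_agent_answer(question: str) -> str:
    candidates = []
    for role in ROLES:
        prompt = f"{role}\n\nQuestion: {question}\nAnswer:"
        output = generator(prompt, max_new_tokens=256, do_sample=True)[0]["generated_text"]
        candidates.append(output[len(prompt):])  # keep only the continuation
    # Agent ranking: return the candidate the scorer prefers.
    return max(candidates, key=score)


print(dual_agent_answer("What supplies are needed to construct a basic shelter?"))
```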
| Release Notes | |
| --- | --- |
| Version | |
| Notes | 32k context window, RoPE theta = 1e6, no Sliding-Window Attention |
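
These values resemble a Mistral-style configuration; that base architecture is an assumption, not something the card states. Under that assumption, the settings map onto a `transformers` config roughly as follows.

```python
from transformers import MistralConfig  # assumed base architecture, not confirmed by the card

config = MistralConfig(
    max_position_embeddings=32768,  # 32k context window
    rope_theta=1e6,                 # RoPE theta from the release notes
    sliding_window=None,            # no Sliding-Window Attention
)
print(config.max_position_embeddings, config.rope_theta, config.sliding_window)
```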
|