| Model Type | Transformer-style autoregressive language model |
|
| Use Cases |
| Areas: | Research, commercial applications |
| Primary Use Cases: | |
| Considerations: | The model can still generate harmful or sensitive content. |
|
|
| Additional Notes | This is the instruct variant; newer versions are available. |
|
| Supported Languages | |

| Training Details |
| Data Sources: | allenai/dolma, allenai/tulu-v2-sft-mixture, allenai/ultrafeedback_binarized_cleaned |
| Data Volume: | Varies by source; see each dataset's documentation |
| Methodology: | |
| Context Length: | |
| Model Architecture: | Based on the OLMo base model, with adaptations |
|
|
| Input Output |
| Input Format: | |
| Accepted Modalities: | |
| Output Format: | |
| Performance Tips: | Use quantization to improve inference efficiency |
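The quantization tip above can be illustrated with a minimal sketch of symmetric per-tensor int8 weight quantization. This is a NumPy stand-in for the idea, not the model's actual inference path; `quantize_int8` and `dequantize` are illustrative helper names, and the 64x64 matrix is a stand-in for a real weight tensor:

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization (illustrative helper)."""
    scale = float(np.abs(w).max()) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # stand-in weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and round-to-nearest bounds the
# per-element reconstruction error by half a quantization step (scale / 2).
print(q.nbytes, w.nbytes)                        # 4096 16384
print(float(np.abs(w - w_hat).max()) <= scale)   # True
```

Per-tensor symmetric scaling is the simplest scheme; production inference stacks typically refine it (per-channel scales, activation quantization, or 4-bit formats) to trade a little accuracy for further memory and latency savings.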
|
|