OLMoE 1B 7B 0924 SFT is an open-source language model by allenai. Features: Mixture-of-Experts LLM with roughly 1B active and 7B total parameters, VRAM: 13.8 GB, Context: 4K, License: apache-2.0, LLM Explorer Score: 0.15.
This model is an intermediate post-training checkpoint, taken after the Supervised Fine-Tuning (SFT) step. For best performance, use the OLMoE-Instruct version instead.
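For reference, a minimal sketch of loading this checkpoint with the Hugging Face transformers library. The model ID follows the repository name above; the prompt, generation settings, and the bfloat16 memory assumption are illustrative, not prescribed by the model card.

```python
# Minimal sketch: load the SFT checkpoint with Hugging Face transformers.
# Assumes a recent transformers release with OLMoE support and enough VRAM
# (~13.8 GB at bfloat16, per the listing above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMoE-1B-7B-0924-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# SFT checkpoints are chat-tuned, so format the prompt with the chat template.
messages = [{"role": "user", "content": "Explain mixture-of-experts in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```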
Supported Languages
en (primary)
Training Details
Data Sources:
allenai/tulu-v3.1-mix-preview-4096-OLMoE
Methodology:
Intermediate checkpoint after Supervised Fine-Tuning (SFT); subsequent post-training applies Direct Preference Optimization / Kahneman-Tversky Optimization (DPO/KTO) to produce the Instruct version.
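As a quick way to examine the SFT data mix, a minimal sketch using the Hugging Face datasets library. The dataset ID comes from the listing above; the split name is an assumption.

```python
# Minimal sketch: peek at the SFT data mix listed above.
# The "train" split name is an assumption; check the dataset card if it differs.
from datasets import load_dataset

ds = load_dataset("allenai/tulu-v3.1-mix-preview-4096-OLMoE", split="train")
print(ds)      # row count and column names
print(ds[0])   # one training example
```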