OpenHermes 7B is the first fine-tune in the Hermes series trained on a fully open-source dataset. It is also notable for using sample packing for efficient training.
Supported Languages
en (full)
Training Details
Data Sources:
GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct datasets by Teknium
WizardLM (v1, evol_instruct 70k) by the WizardLM Team/nlpxucan
Airoboros GPT-4 (v1.0) by JonDurbin
Camel-AI's domain expert datasets by the Camel-AI Team
CodeAlpaca by Sahil2801
GPT4-LLM and Unnatural Instructions by Microsoft
Data Volume:
242,000 entries
Methodology:
Sample packing; filtering of OpenAI refusals, disclaimers, and "As an AI"-style examples
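The refusal filtering described above can be sketched as a simple pattern match over assistant responses. This is an illustrative sketch only: the exact patterns and data schema used for OpenHermes are not published in this card, so the regexes and the `conversations`/`value` field names below are assumptions.

```python
import re

# Hypothetical refusal/disclaimer patterns (assumed, not the actual filter list).
REFUSAL_PATTERNS = [
    r"\bAs an AI\b",
    r"\bas a language model\b",
    r"\bI cannot assist with\b",
    r"\bI'm sorry, but\b",
]
_refusal_re = re.compile("|".join(REFUSAL_PATTERNS), re.IGNORECASE)

def keep_example(example: dict) -> bool:
    """Return True if no turn in the example matches a refusal pattern."""
    return not any(
        _refusal_re.search(turn.get("value", ""))
        for turn in example.get("conversations", [])
    )

dataset = [
    {"conversations": [{"from": "gpt", "value": "Sure! Here is the code."}]},
    {"conversations": [{"from": "gpt", "value": "As an AI, I cannot do that."}]},
]
filtered = [ex for ex in dataset if keep_example(ex)]
```

In practice such filters are applied before training so the model does not learn boilerplate refusals from its own training data.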