The model may not consistently follow instructions, and it can respond inappropriately or get stuck in repetitive loops. Although it has been aligned to human preferences and evaluated for performance, it is not guaranteed to refrain from generating harmful content. Exercise caution before relying on this model for production or adjacent use cases.
Training Details
Data Sources: Anthropic's HH-RLHF dataset
Data Volume: First 100K examples
Methodology: Direct Preference Optimization (DPO); see the sketch after this list
Model Architecture: OpenLLaMA 3B v2
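The following is a minimal sketch, not the authors' training code, of the two pieces named above: loading the first 100K examples of Anthropic's HH-RLHF preference data ("Anthropic/hh-rlhf" is the dataset's Hugging Face Hub id) and computing the standard DPO objective. It assumes that per-example log-probabilities for the chosen and rejected responses have already been computed under the policy and a frozen reference model; the beta value of 0.1 is an illustrative assumption, not a reported hyperparameter.

```python
import torch
import torch.nn.functional as F
from datasets import load_dataset

# First 100K pairwise (chosen, rejected) preference examples, matching the
# data volume stated in this card.
hh_rlhf = load_dataset("Anthropic/hh-rlhf", split="train[:100000]")
print(hh_rlhf[0]["chosen"][:80])

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss: -log sigmoid(beta * (policy log-ratio - reference log-ratio)).

    Pushes the policy to prefer the chosen response over the rejected one
    more strongly than the frozen reference model does.
    """
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()
```

In practice this objective is what libraries such as TRL's `DPOTrainer` optimize; the sketch only makes the loss itself explicit.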
Input / Output
Input Format: Modified version of the Alpaca prompt template
Performance Tips: Use the Alpaca prompt template to obtain the best responses on instruction-following tasks (see the example below).
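Below is a hedged illustration of prompting the model with the standard Alpaca template. The card states that a *modified* version of the template is used, but the exact modification is not specified in this section, so the unmodified template here is an assumption; the instruction text is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SurgeGlobal/OpenBezoar-HH-RLHF-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Standard Alpaca prompt template (assumed; the card says a modified
# version is used but does not spell it out here).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Summarize what Direct Preference Optimization is in one sentence."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```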