The model is fine-tuned with Q-LoRA (r: 16, alpha: 16, dropout: 0.05) applied to the attention target modules [q_proj, v_proj, k_proj]. Instruction tuning uses the LaMini, Orca, and Evol-Instruct datasets.
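As a rough illustration, the hyperparameters above can be expressed as a plain dict in the shape accepted by Hugging Face PEFT's LoraConfig. This is a hedged sketch: the actual training code is not shown in this card, so the surrounding setup (library versions, extra arguments) is assumed.

```python
# Q-LoRA hyperparameters as stated in the card, in the field names used by
# PEFT's LoraConfig (the exact training script is an assumption, not shown here).
qlora_config = {
    "r": 16,                # LoRA rank
    "lora_alpha": 16,       # scaling factor
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj", "k_proj"],  # attention projections
}

# LoRA scales each low-rank update by alpha / r; here that scale is 1.0.
scale = qlora_config["lora_alpha"] / qlora_config["r"]
print(scale)
```

With PEFT installed, the same dict can be unpacked into `LoraConfig(**qlora_config, task_type="CAUSAL_LM")` before wrapping the base model.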
Supported Languages
en (full)
Input / Output
Input Format:
Modified Alpaca prompt template
Accepted Modalities:
text
Output Format:
text
Performance Tips:
Use the prescribed prompt template for optimal results
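The card specifies a modified Alpaca prompt template but does not spell out the modification, so the sketch below shows only the standard Alpaca format as an illustrative baseline; consult the model's own card for the exact template before inference.

```python
# Standard Alpaca-style instruction prompt (baseline sketch; the model's
# "modified" variant is not specified in this card, so this is an assumption).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

# Example usage: fill in the instruction before sending the prompt to the model.
prompt = ALPACA_TEMPLATE.format(instruction="Summarize the following paragraph.")
print(prompt)
```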