Warning: As of January 28th, 2024, this model ranks first among 7B models on the Open LLM Leaderboard. Note, however, that it was produced by merging without fine-tuning, and it has not been confirmed whether any of the merged source models were trained on the evaluation benchmarks.
Supported Languages
en (proficient)
Training Details
Methodology:
Created with mergekit using the ties merge method, with mncai/mistral-7b-dpo-v5 as the base model. The merged models are mncai/mistral-7b-dpo-v5, FelixChao/WestSeverus-7B-DPO-v2, and BarryFutureman/NeuralTurdusVariant1-7B, with per-model parameters density: 0.5 and weights of 0.3 and 0.5. Global parameters: normalize: true, dtype: float16.
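A minimal sketch of what the corresponding mergekit YAML config might look like. Note this is a reconstruction from the description above, not the author's published config; in particular, the mapping of the two weights (0.3 and 0.5) onto the two non-base models follows their listing order and is an assumption:

```yaml
models:
  - model: mncai/mistral-7b-dpo-v5
    # base model of the ties merge
  - model: FelixChao/WestSeverus-7B-DPO-v2
    parameters:
      density: 0.5
      weight: 0.3  # weight assignment assumed from listing order
  - model: BarryFutureman/NeuralTurdusVariant1-7B
    parameters:
      density: 0.5
      weight: 0.5  # weight assignment assumed from listing order
merge_method: ties
base_model: mncai/mistral-7b-dpo-v5
parameters:
  normalize: true
dtype: float16
```

With mergekit installed, a config like this is typically run with `mergekit-yaml config.yaml ./output-model`.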
Note: a green score (e.g. "73.2") indicates that the model outperforms kaitchup/Mayonnaise-4in1-022.