Dolphin 2.5 Mixtral 8x7b 5.0bpw H6 EXL2 2 by LoneStriker

Autotrain compatible   Conversational   Dataset:ehartford/dolphin   Dataset:ehartford/dolphin-coder   Dataset:ise-uiuc/magicoder-evol-instruct-110k   Dataset:ise-uiuc/magicoder-oss-instruct-75k   Dataset:jondurbin/airoboros-2.2.1   Dataset:ldjnr/pure-dove   Dataset:migtissera/synthia-v1.3   Dataset:teknium/openhermes   En   Endpoints compatible   Exl2   Instruct   Mixtral   Moe   Pytorch   Quantized   Region:us   Sharded   Tensorflow

Dolphin 2.5 Mixtral 8x7b 5.0bpw H6 EXL2 2 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Dolphin 2.5 Mixtral 8x7b 5.0bpw H6 EXL2 2 (LoneStriker/dolphin-2.5-mixtral-8x7b-5.0bpw-h6-exl2-2)

Dolphin 2.5 Mixtral 8x7b 5.0bpw H6 EXL2 2 Parameters and Internals

Model Type 
text generation, coding assistance
Use Cases 
Areas:
research, general coding assistance
Limitations:
Highly compliant; may respond to unethical requests
Considerations:
Implement safety measures before deployment.
Supported Languages 
en (fluent)
Training Details 
Data Sources:
ehartford/dolphin, jondurbin/airoboros-2.2.1, ehartford/dolphin-coder, migtissera/Synthia-v1.3, teknium/openhermes, ise-uiuc/Magicoder-OSS-Instruct-75K, ise-uiuc/Magicoder-Evol-Instruct-110K, LDJnr/Pure-Dove
Methodology:
qLoRA fine-tuning with the Axolotl framework (a setup sketch follows this section)
Context Length:
16000 tokens (fine-tuned at 16k; the base model's max length is 32768)
Training Time:
3 days
Hardware Used:
4x A100s
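
The qLoRA recipe above keeps the base Mixtral weights frozen in 4-bit and trains low-rank adapters on top, which is how a ~46.7B-parameter MoE fits a 3-day run on 4x A100s. A minimal sketch using the Hugging Face peft + bitsandbytes stack, which Axolotl drives under the hood; the rank, alpha, and target modules are illustrative assumptions, not values from this card:

```python
# Hedged sketch of a qLoRA setup for Mixtral: 4-bit NF4 base weights,
# trainable low-rank adapters on top. Hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # the "q" in qLoRA
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the card's bfloat16 dtype
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",          # unquantized base model
    quantization_config=bnb,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(                          # assumed adapter shape
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()          # only the adapters are trainable
```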
Responsible AI Considerations 
Fairness:
The training data was filtered to remove alignment and bias, leaving the model uncensored.
Accountability:
User is responsible for content created with this model.
Mitigation Strategies:
Advised to implement an alignment layer before deploying as a service.
Input Output 
Input Format:
ChatML prompts (see the example after this section)
Accepted Modalities:
text
Output Format:
text
Performance Tips:
Use the system prompt to encourage compliance for the best responses.
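
Dolphin's prompt format is ChatML; the two extra entries in the 32002-token vocabulary are the <|im_start|> and <|im_end|> delimiters it relies on. A minimal helper for building such prompts; the system message is illustrative, swap in your own:

```python
# Build a ChatML prompt for Dolphin. The trailing "assistant" header
# leaves the turn open for the model to complete.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "Write a Python function that checks whether a string is a palindrome.",
)
print(prompt)
```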
Release Notes 
Version:
2.5
Notes:
Removed the Samantha and WizardLM data. Added the Synthia, OpenHermes, and Pure-Dove datasets, plus the new Dolphin-Coder dataset.
LLM Name: Dolphin 2.5 Mixtral 8x7b 5.0bpw H6 EXL2 2
Repository: https://huggingface.co/LoneStriker/dolphin-2.5-mixtral-8x7b-5.0bpw-h6-exl2-2
Required VRAM: 29.5 GB
Updated: 2025-09-22
Maintainer: LoneStriker
Model Type: mixtral
Instruction-Based: Yes
Model Files: 8.6 GB (1-of-4), 8.6 GB (2-of-4), 8.6 GB (3-of-4), 3.7 GB (4-of-4)
Supported Languages: en
Quantization Type: exl2
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
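
The 29.5 GB VRAM figure is consistent with a 5.0 bits-per-weight quant of a ~46.7B-parameter model (46.7e9 × 5 / 8 ≈ 29.2 GB) plus overhead. EXL2 quants load through the exllamav2 library rather than plain transformers; a minimal sketch, assuming exllamav2's Python API (~v0.0.x) and a hypothetical local download path:

```python
# Hedged sketch: load this EXL2 quant with exllamav2 and generate once.
# The model_dir path and sampling settings are assumptions.
from exllamav2 import (
    ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer,
)
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/dolphin-2.5-mixtral-8x7b-5.0bpw-h6-exl2-2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocated while layers load
model.load_autosplit(cache)               # split across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

# Dolphin expects ChatML-formatted prompts (see the example above).
prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, num_tokens=200))
```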

Best Alternatives to Dolphin 2.5 Mixtral 8x7b 5.0bpw H6 EXL2 2

Best Alternatives                    Context / RAM     Downloads   Likes
...M 2 8x22B Beige 2.4bpw H6 EXL2    64K / 42.7 GB     6           0
...M 2 8x22B Beige 3.0bpw H6 EXL2    64K / 53.2 GB     6           0
...M 2 8x22B Beige 5.0bpw H6 EXL2    64K / 88.5 GB     6           0
...M 2 8x22B Beige 4.0bpw H6 EXL2    64K / 70.8 GB     5           0
...B Instruct V0.1 8.0bpw H8 EXL2    64K / 120.2 GB    10          1
...8x22b Instruct Oh EXL2 2.25bpw    64K / 40.1 GB     5           1
...eryTour V2 8x7B 4.5bpw H6 EXL2    32K / 26.5 GB     7           2
...it MoE 2bitgs8 Metaoffload HQQ    32K / 24.1 GB     15          19
... 4bit MoE 3bit Metaoffload HQQ    32K / 22.4 GB     11          13
...hin 2.7 Mixtral 8x7b 8bpw EXL2    32K / 46.8 GB     7           2

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124