Dolphin 2.7 Mixtral 8x7b 8bpw EXL2 by Kearm


Dolphin 2.7 Mixtral 8x7b 8bpw EXL2 is an open-source language model published by Kearm. It is a quantized, instruction-based Mixture-of-Experts (MoE) model requiring 46.8 GB of VRAM, with a 32K context window, released under the apache-2.0 license. LLM Explorer Score: 0.12.
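
The 46.8 GB VRAM figure is essentially the weights themselves: Mixtral 8x7B has roughly 46.7B total parameters, and at 8 bits per weight (the "8bpw" in the EXL2 name) that comes to about 46.7 GB before KV cache and runtime overhead. A quick back-of-the-envelope check:

```python
params = 46.7e9        # approximate total parameters in Mixtral 8x7B
bits_per_weight = 8.0  # the 8bpw EXL2 quantization
weight_gb = params * bits_per_weight / 8 / 1e9
print(f"{weight_gb:.1f} GB of weights")  # ~46.7 GB, matching the 46.8 GB listing
```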

Tags: Conversational, Dataset:cognitivecomputations/..., Dataset:cognitivecomputations/..., Dataset:ise-uiuc/magicoder-evo..., Dataset:ise-uiuc/magicoder-oss..., Dataset:jondurbin/airoboros-2...., Dataset:ldjnr/capybara, Dataset:teknium/openhermes, En, Endpoints compatible, Exl2, Instruct, Mixtral, Moe, Pytorch, Quantized, Region:us, Safetensors, Sharded, Tensorflow

Dolphin 2.7 Mixtral 8x7b 8bpw EXL2 Benchmarks

Scores are shown as percentages (nn.n%) indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Dolphin 2.7 Mixtral 8x7b 8bpw EXL2 Parameters and Internals

Use Cases 
Limitations:
This model is highly compliant and may respond to unethical requests.
Considerations:
You are advised to implement your own alignment layer before exposing the model as a service (a minimal sketch follows).
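
In practice that alignment layer can be a screen on both the incoming request and the generated reply. The sketch below is a deliberately minimal illustration, not a production moderation system; is_disallowed is a hypothetical placeholder for whatever classifier or rule set you actually deploy.

```python
from typing import Callable

REFUSAL = "I can't help with that request."

def is_disallowed(text: str) -> bool:
    """Hypothetical placeholder: swap in a real moderation model or rule set."""
    banned_phrases = ("build a weapon", "steal credentials")
    return any(phrase in text.lower() for phrase in banned_phrases)

def guarded_generate(generate: Callable[[str], str], user_prompt: str) -> str:
    """Screen the prompt, generate, then screen the model's reply."""
    if is_disallowed(user_prompt):
        return REFUSAL
    reply = generate(user_prompt)
    return REFUSAL if is_disallowed(reply) else reply
```
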
Supported Languages 
en (English)
Training Details 
Data Sources:
cognitivecomputations/dolphin, jondurbin/airoboros-2.2.1, cognitivecomputations/dolphin-coder, teknium/openhermes, ise-uiuc/Magicoder-OSS-Instruct-75K, ise-uiuc/Magicoder-Evol-Instruct-110K, LDJnr/Capybara
Methodology:
The model was retrained with Mixtral-specific fixes in the transformers library and with the gate layer unfrozen.
Context Length:
16000 tokens (fine-tuning sequence length; the quantized model's max context is 32768)
Training Time:
3 days
Hardware Used:
4x A100 GPUs, using qLoRA with Axolotl (a generic qLoRA sketch follows)
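
For orientation, the generic qLoRA recipe with transformers and peft looks like the sketch below. This is not the actual Axolotl configuration used for Dolphin 2.7; the rank, alpha, and target modules are assumptions chosen to illustrate the technique, with the MoE router ("gate") included so it stays trainable, echoing the "gate layer unfrozen" note above.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4: the "q" in qLoRA.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",
    quantization_config=bnb,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; hyperparameters here are illustrative assumptions.
lora = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "w1", "w2", "w3", "gate"],
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()
```
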
Safety Evaluation 
Methodologies:
Filtering the dataset to remove alignment and bias
Ethical Considerations:
The model is uncensored and highly compliant with any request, including potentially unethical ones.
Input Output 
Input Format:
ChatML prompt format (example below)
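
Dolphin 2.7 follows the standard ChatML template, where each turn is wrapped in <|im_start|> and <|im_end|> tokens and the assistant turn is left open for the model to complete. A minimal example (the system message is illustrative):

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
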
Release Notes 
Version:
2.7
Notes:
Retrained with Mixtral-specific fixes, with gate layer unfrozen.
Version:
2.6
Notes:
Fixed a training configuration issue, added Samantha-based empathy data, and replaced Synthia and Pure-Dove with Capybara.
LLM Name: Dolphin 2.7 Mixtral 8x7b 8bpw EXL2
Repository: 🤗 https://huggingface.co/Kearm/dolphin-2.7-mixtral-8x7b-8bpw-exl2
Required VRAM: 46.8 GB
Updated: 2026-04-11
Maintainer: Kearm
Model Type: mixtral
Instruction-Based: Yes
Model Files: 8.6 GB (1-of-6), 8.6 GB (2-of-6), 8.6 GB (3-of-6), 8.6 GB (4-of-6), 8.6 GB (5-of-6), 3.8 GB (6-of-6)
Supported Languages: en
Quantization Type: exl2
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
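
Because the weights are in EXL2 format, the model is loaded with the ExLlamaV2 library rather than vanilla transformers. Below is a minimal sketch following ExLlamaV2's published example API; class names and module paths have shifted between releases, so treat this as an outline to verify against your installed version. The prompt uses the ChatML format shown above.

```python
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Fetch the six sharded safetensors files (~46.8 GB total).
model_dir = snapshot_download("Kearm/dolphin-2.7-mixtral-8x7b-8bpw-exl2")

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocated as layers load
model.load_autosplit(cache)               # split across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain MoE routing in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, num_tokens=200))
```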

Best Alternatives to Dolphin 2.7 Mixtral 8x7b 8bpw EXL2

Best Alternatives                    Context / RAM      Downloads  Likes
...M 2 8x22B Beige 5.0bpw H6 EXL2    64K / 88.5 GB      11         0
...M 2 8x22B Beige 2.4bpw H6 EXL2    64K / 42.7 GB      6          0
...M 2 8x22B Beige 3.0bpw H6 EXL2    64K / 53.2 GB      6          0
...M 2 8x22B Beige 4.0bpw H6 EXL2    64K / 70.8 GB      5          0
...B Instruct V0.1 8.0bpw H8 EXL2    64K / 120.2 GB     4          1
...8x22b Instruct Oh EXL2 2.25bpw    64K / 40.1 GB      2          1
...eryTour V2 8x7B 4.5bpw H6 EXL2    32K / 26.5 GB      4          2
...it MoE 2bitgs8 Metaoffload HQQ    32K / 24.1 GB      28         19
... 4bit MoE 3bit Metaoffload HQQ    32K / 22.4 GB      2          13
... 4bit MoE 2bit Metaoffload HQQ    32K / 18.3 GB      17         16


Original data from Hugging Face, OpenCompass, and various public Git repos.