Dolphin 2.5 Mixtral 8x7b by cognitivecomputations


Dolphin 2.5 Mixtral 8x7b is an open-source language model by cognitivecomputations. Key specs: 46.7B-parameter LLM, required VRAM: 93.6 GB, context: 32K, license: apache-2.0, Mixture-of-Experts (MoE), instruction-based, LLM Explorer Score: 0.17.
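The VRAM figure follows directly from the parameter count: at bfloat16 precision (2 bytes per parameter), 46.7B parameters occupy roughly 93.4 GB for the weights alone, which lines up with the sum of the shard sizes listed further down. A quick sketch of the arithmetic:

```python
# Back-of-the-envelope estimate of weight memory at bfloat16.
# Actual usage is higher once the KV cache and activations are added.
params = 46.7e9        # parameter count from the spec above
bytes_per_param = 2    # bfloat16 = 2 bytes
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.1f} GB for weights alone")  # ~93.4 GB
```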


Dolphin 2.5 Mixtral 8x7b Benchmarks

Benchmark scores (shown as percentages) indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Dolphin 2.5 Mixtral 8x7b Parameters and Internals

Model Type: text generation, coding assistant

Use Cases:
- Areas: research, commercial applications
- Limitations: The model is uncensored and may comply with potentially unethical requests.
- Considerations: Implement your own alignment layer before exposing the model as a service.

Additional Notes: Training was sponsored by Convai; the author thanks the various contributors and dataset authors.

Supported Languages: en (native)

Training Details:
- Data Sources: ehartford/dolphin, jondurbin/airoboros-2.2.1, ehartford/dolphin-coder, migtissera/Synthia-v1.3, teknium/openhermes, ise-uiuc/Magicoder-OSS-Instruct-75K, ise-uiuc/Magicoder-Evol-Instruct-110K, LDJnr/Pure-Dove
- Methodology: Fine-tuned from the base model using qLoRA and Axolotl (an illustrative sketch follows this list)
- Context Length: 32000
- Training Time: 3 days
- Hardware Used: 4x A100 GPUs
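The exact training recipe is not reproduced on this page. As a rough illustration of the qLoRA approach named above, the sketch below loads a 4-bit-quantized base model and attaches trainable LoRA adapters via PEFT; the base model ID and all hyperparameters are placeholders, not the values used for Dolphin 2.5:

```python
# Illustrative qLoRA setup (bitsandbytes + PEFT). Hyperparameters are
# placeholders, not the actual Dolphin 2.5 training configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1",          # assumed base model
    quantization_config=bnb_config,
    device_map="auto",
)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, lora)          # only the LoRA adapters train
model.print_trainable_parameters()
```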
Input/Output:
- Input Format: ChatML prompt format
- Accepted Modalities: text
- Output Format: text
- Performance Tips: trust_remote_code=True is required when loading the model (see the sketch below)
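Putting the notes above together, a minimal loading-and-inference sketch; the system message and generation settings are illustrative, not prescribed by the model card:

```python
# Load the model and prompt it in ChatML format with trust_remote_code.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "cognitivecomputations/dolphin-2.5-mixtral-8x7b"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, device_map="auto", torch_dtype="auto", trust_remote_code=True
)

prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about the sea.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```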
LLM Name: Dolphin 2.5 Mixtral 8x7b
Repository: https://huggingface.co/cognitivecomputations/dolphin-2.5-mixtral-8x7b
Model Size: 46.7b
Required VRAM: 93.6 GB
Updated: 2025-07-13
Maintainer: cognitivecomputations
Model Type: mixtral
Instruction-Based: Yes
Model Files: 19 safetensors shards totaling ~93.6 GB: 4.9 GB (1-of-19), 5.0 GB (2-of-19), 5.0 GB (3-of-19), 4.9 GB (4-of-19), 5.0 GB (5-of-19), 5.0 GB (6-of-19), 4.9 GB (7-of-19), 5.0 GB (8-of-19), 5.0 GB (9-of-19), 4.9 GB (10-of-19), 5.0 GB (11-of-19), 5.0 GB (12-of-19), 5.0 GB (13-of-19), 4.9 GB (14-of-19), 5.0 GB (15-of-19), 5.0 GB (16-of-19), 4.9 GB (17-of-19), 5.0 GB (18-of-19), 4.2 GB (19-of-19)
Supported Languages: en
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
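Most of the spec values above can be verified straight from the repository config, without downloading the weights:

```python
# Cross-check the spec table against the Hub config and tokenizer.
from transformers import AutoConfig, AutoTokenizer

repo = "cognitivecomputations/dolphin-2.5-mixtral-8x7b"
cfg = AutoConfig.from_pretrained(repo, trust_remote_code=True)
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)

print(cfg.architectures)            # ['MixtralForCausalLM']
print(cfg.max_position_embeddings)  # 32768
print(cfg.torch_dtype)              # torch.bfloat16
print(tok.pad_token)                # </s>
print(len(tok))                     # 32002 (incl. ChatML special tokens)
```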

Quantized Models of the Dolphin 2.5 Mixtral 8x7b

Model                           Likes   Downloads   VRAM
Dolphin 2.5 Mixtral 8x7b GGUF   48      41          17 GB
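The 17 GB GGUF build runs on far more modest hardware through llama.cpp bindings. A minimal llama-cpp-python sketch, where the local file name and quantization level are illustrative (e.g. a file from TheBloke/dolphin-2.5-mixtral-8x7b-GGUF):

```python
# Run a GGUF quantization of the model with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.5-mixtral-8x7b.Q2_K.gguf",  # hypothetical local path
    n_ctx=32768,      # the model's full context window
    n_gpu_layers=-1,  # offload all layers to GPU when available
)
out = llm(
    "<|im_start|>user\nExplain mixture-of-experts in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```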

Best Alternatives to Dolphin 2.5 Mixtral 8x7b

Best Alternatives                    Context / RAM    Downloads   Likes
Mixtral 8x7B Instruct V0.1           32K / 93.6 GB    636216      4670
Mixtral 8x7B Instruct V0.1 FP8       32K / 47.1 GB    4845        0
...xtral 8x7B Yes Instruct LimaRP    32K / 93.5 GB    13          1
Merge Mixtral Prometheus 8x7B        32K / 91.9 GB    74          2
...rkrautLM Mixtral 8x7B Instruct    32K / 93.6 GB    539         22
Notux 8x7b V1                        32K / 93.6 GB    86          164
BagelMIsteryTour V2 8x7B             32K / 93.5 GB    76          17
Mixtral 8x7B Instruct V0.1 FP8       32K / 47.1 GB    374         0
Sage Ft Mixtral 8x7b                 32K / 90 GB      91          24
GritLM 8x7B                          32K / 93.6 GB    7985        38



Original data from HuggingFace, OpenCompass and various public git repos.