Nous Hermes 2 Mixtral 8x7B DPO AWQ by TheBloke


Nous Hermes 2 Mixtral 8x7B DPO AWQ is an open-source language model maintained by TheBloke: a 4-bit AWQ quantization of NousResearch's Nous Hermes 2 Mixtral 8x7B DPO. Features: 46.7B-parameter LLM, VRAM: 24.7 GB, Context: 32K, License: apache-2.0, MoE, Quantized, LLM Explorer Score: 0.12.
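The 24.7 GB VRAM figure matches the sum of the model shards listed below (10.0 + 10.0 + 4.7 GB) and a rough 4-bit sizing estimate. A back-of-the-envelope sketch; the numbers are illustrative, not an official sizing:

```python
params = 46.7e9         # total parameter count (MoE: all eight experts stay resident)
bytes_per_weight = 0.5  # 4-bit AWQ stores ~0.5 bytes per weight
weights_gb = params * bytes_per_weight / 1e9
print(f"~{weights_gb:.1f} GB of weights")  # ~23.4 GB; quantization scales, zero-points
                                           # and unquantized layers account for the
                                           # remaining ~1.3 GB of the 24.7 GB total
```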

Tags: 4-bit, AWQ, base model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO (quantized), ChatML, conversational, distillation, DPO, en, finetuned, GPT-4, instruct, Mixtral, MoE, quantized, region:us, RLHF, safetensors, sharded, synthetic data, tensorflow

Nous Hermes 2 Mixtral 8x7B DPO AWQ Benchmarks

Scores are reported as a percentage ("nn.n%") indicating how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Nous Hermes 2 Mixtral 8x7B DPO AWQ Parameters and Internals

Model Type: mixtral, instruct, finetune, chatml, DPO, RLHF, gpt4, synthetic data, distillation
Additional Notes: AWQ is not compatible with macOS; use the GGUF version on macOS.
Supported Languages: en (fluent)
Training Details:
  Data Sources: GPT-4-generated data and other high-quality data from open datasets
  Data Volume: 1 million entries
  Methodology: SFT + DPO
  Context Length: 8192
  Model Architecture: Mixtral 8x7B MoE LLM
Input/Output:
  Input Format: ChatML
  Accepted Modalities: text
  Output Format: formatted text
  Performance Tips: Use AutoAWQ v0.1.8 or later for best performance.
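Following that tip, a minimal loading-and-generation sketch in the usual AutoAWQ pattern. It assumes a CUDA GPU with roughly 25 GB of free VRAM; the system and user strings are illustrative:

```python
# pip install "autoawq>=0.1.8" transformers
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_id = "TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True, safetensors=True)

# The model expects ChatML-formatted prompts.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what a mixture-of-experts model is.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
tokens = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(tokens, do_sample=True, temperature=0.7, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```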
LLM Name: Nous Hermes 2 Mixtral 8x7B DPO AWQ
Repository: 🤗 https://huggingface.co/TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ
Model Name: Nous Hermes 2 Mixtral 8x7B DPO
Model Creator: NousResearch
Base Model(s): Nous Hermes 2 Mixtral 8x7B DPO (NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)
Model Size: 46.7B
Required VRAM: 24.7 GB
Updated: 2026-04-21
Maintainer: TheBloke
Model Type: mixtral
Model Files: 10.0 GB (1 of 3), 10.0 GB (2 of 3), 4.7 GB (3 of 3)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: float16
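For serving, AWQ checkpoints also load directly in vLLM. A minimal sketch, assuming a vLLM build with AWQ support and enough VRAM for the full 32K context; the prompt text is illustrative:

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ",
    quantization="awq",
    dtype="float16",      # matches the checkpoint's torch dtype
    max_model_len=32768,  # the model's full context length
)
params = SamplingParams(temperature=0.7, max_tokens=256)
prompt = (
    "<|im_start|>user\nSummarize the benefits of AWQ quantization.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm.generate([prompt], params)
print(out[0].outputs[0].text)
```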

Best Alternatives to Nous Hermes 2 Mixtral 8x7B DPO AWQ

Best Alternatives                  Context / RAM    Downloads  Likes
Dolphin 2.7 Mixtral 8x7b AWQ       32K / 24.7 GB    3807       23
Open Gpt4 8x7B AWQ                 32K / 24.7 GB    3843       2
Mixtral 8x7B Instruct V0.1 AWQ     32K / 24.7 GB    5          0
Mixtral 8x7b V0.1 AWQ              32K / 24.7 GB    2718       11
Functionary Medium V2.4 AWQ        32K / 24.7 GB    9          3
H2ogpt Mixtral 8x7b 32K AWQ        32K / 24.7 GB    15         0
Mixtral Instruct AWQ               32K / 24.7 GB    620        43
Synatra Mixtral 8x7B AWQ           32K / 24.7 GB    2          3
SauerkrautLM Mixtral 8x7B AWQ      32K / 27.4 GB    5          1
Functionary Medium V2.2 AWQ        32K / 24.7 GB    3          1
Note: a green score (e.g. "73.2") means that the model outperforms TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-AWQ.



Original data from HuggingFace, OpenCompass and various public git repos.