Mixtral 8x7B Instruct V0.1 LimaRP ZLoss DARE TIES AWQ by TheBloke


Mixtral 8x7B Instruct V0.1 LimaRP ZLoss DARE TIES AWQ is an open-source language model quantized and published by TheBloke, from an original merge by Doctor Shotgun. Key figures: 46.7B parameters, 24.7 GB required VRAM, 32K context, Mixture-of-Experts (MoE), AWQ-quantized, instruction-based. LLM Explorer Score: 0.11.
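The 24.7 GB figure is dominated by the 4-bit weight payload. A rough back-of-the-envelope check (our own arithmetic, not from the listing):

```python
# Rough size estimate for a 4-bit AWQ checkpoint of a 46.7B-parameter model.
params = 46.7e9                      # parameter count from the listing
weights_gb = params * 4 / 8 / 1e9    # 4 bits per weight, 8 bits per byte
print(f"quantized weights = {weights_gb:.1f} GB")   # ~23.4 GB
# The remaining ~1.3 GB of the listed 24.7 GB plausibly covers per-group
# AWQ scales/zero-points and tensors kept in float16 (embeddings, norms).
```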

Tags: arxiv:2306.01708, arxiv:2311.03099, 4-bit, awq, base model:ds-archive/mixtral-..., base model:quantized:ds-archiv..., instruct, merge, mergekit, mixtral, moe, quantized, region:us, safetensors, sharded, tensorflow
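The merge tags reference TIES-Merging (arxiv:2306.01708) and DARE (arxiv:2311.03099). The actual merge behind the base model was produced with mergekit on full checkpoints; as a toy illustration only, here is a minimal sketch of the DARE step, which drops a random fraction of the fine-tuning delta and rescales the survivors (the drop_rate value is illustrative):

```python
import torch

def dare(base: torch.Tensor, tuned: torch.Tensor, drop_rate: float = 0.9) -> torch.Tensor:
    """DARE (arXiv:2311.03099): randomly drop most of the fine-tuning delta
    and rescale what survives by 1/(1 - drop_rate), keeping the merged
    weights an unbiased estimate of the fine-tuned ones."""
    delta = tuned - base                                      # task vector
    keep = torch.bernoulli(torch.full_like(delta, 1.0 - drop_rate))
    return base + keep * delta / (1.0 - drop_rate)

# Toy usage on random tensors standing in for one weight matrix.
base = torch.randn(4, 4)
tuned = base + 0.01 * torch.randn(4, 4)
print(dare(base, tuned))
```

In a DARE-TIES merge, TIES then resolves sign disagreements between several such sparsified deltas before they are summed onto the base model.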


Mixtral 8x7B Instruct V0.1 LimaRP ZLoss DARE TIES AWQ Parameters and Internals

Model Type: mixtral
LLM Name: Mixtral 8x7B Instruct V0.1 LimaRP ZLoss DARE TIES AWQ
Repository: https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-AWQ
Model Name: Mixtral 8X7B Instruct V0.1 LimaRP ZLoss DARE TIES
Model Creator: Doctor Shotgun
Base Model(s): Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES
Model Size: 46.7b
Required VRAM: 24.7 GB
Updated: 2026-04-20
Maintainer: TheBloke
Instruction-Based: Yes
Model Files: 10.0 GB (shard 1 of 3), 10.0 GB (shard 2 of 3), 4.7 GB (shard 3 of 3)
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MixtralForCausalLM
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
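Given the architecture and quantization details above, a minimal loading sketch, assuming a recent transformers (the listing says 4.37.0.dev0) with the autoawq and accelerate packages installed; the prompt follows the base Mixtral Instruct [INST] convention and is illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-AWQ"

# The AWQ quantization config stored in the repo tells transformers to load
# the 4-bit weights via autoawq; device_map="auto" (accelerate) places the
# shards and offloads to CPU if the GPU cannot hold all ~24.7 GB.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "[INST] Write a short scene set on a night train. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```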

Best Alternatives to Mixtral 8x7B Instruct V0.1 LimaRP ZLoss DARE TIES AWQ

Best Alternatives                  Context / RAM   Downloads   Likes
Dolphin 2.7 Mixtral 8x7b AWQ       32K / 24.7 GB        3807      23
Mixtral 8x7B Instruct V0.1 AWQ     32K / 24.7 GB           5       0
Mixtral Instruct AWQ               32K / 24.7 GB         620      43
...ixtral Instruct 8x7b Zloss AWQ  32K / 24.7 GB           8       2
Dolphin 2.6 Mixtral 8x7b AWQ       32K / 24.7 GB         103      12
...utLM Mixtral 8x7B Instruct AWQ  32K / 24.7 GB          19       2
...1 Mixtral 8x7b Instruct V3 AWQ  32K / 24.7 GB           8       1
...Mixtral 8x7B V0.1 Dolly15K AWQ  32K / 24.7 GB           9       1
Mixtral 8x7B Instruct V0.1         32K / 93.6 GB      618386    4670
Mixtral 8x7B Instruct V0.1 FP8     32K / 47.1 GB        4671       0
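For serving, the same checkpoint (or any of the AWQ alternatives above) can be run with vLLM's AWQ kernels; a minimal sketch, assuming a vLLM build with AWQ support, with illustrative sampling settings:

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-DARE-TIES-AWQ",
    quantization="awq",      # select the AWQ weight-only kernels
    dtype="half",
    max_model_len=32768,     # matches the 32K context length listed above
)

params = SamplingParams(temperature=0.8, max_tokens=256)
outputs = llm.generate(["[INST] Hello! Introduce yourself. [/INST]"], params)
print(outputs[0].outputs[0].text)
```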



Original data from HuggingFace, OpenCompass and various public git repos.