MixtralRPChat ZLoss 3.0bpw H6 EXL2 by LoneStriker


Tags: arxiv:2202.08906 · autotrain-compatible · base model (finetune): mistralai/Mixtral-8x7B-v0.1 · conversational · datasets: chargoddard/coedit-reworded, chargoddard/rpguild, chargoddard/summarize_from_feedback_alpaca, HuggingFaceH4/no_robots, lemonilia/LimaRP, Open-Orca/SlimOrca · en · endpoints-compatible · exl2 · mixtral · moe · quantized · region:us · safetensors · sharded · tensorflow


MixtralRPChat ZLoss 3.0bpw H6 EXL2 Parameters and Internals

Model Type: text generation
Additional Notes: Trained using a custom branch of Transformers adding z-loss and router balancing loss.
Supported Languages: en (proficient)
Training Details:
Data Sources: Open-Orca/SlimOrca, lemonilia/LimaRP, chargoddard/rpguild, chargoddard/summarize_from_feedback_alpaca, HuggingFaceH4/no_robots, chargoddard/coedit-reworded
Methodology: QLoRA tuning with a modified router balancing loss plus z-loss (see the sketch below).
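For context, the z-loss referenced above comes from the ST-MoE paper (arXiv:2202.08906, linked in this model's tags): it squares the log-sum-exp of the router logits to discourage them from growing large. A minimal PyTorch sketch, assuming router logits of shape (num_tokens, num_experts); the loss coefficient and the exact modification to the balancing loss used for this fine-tune are not documented here:

```python
import torch

def router_z_loss(router_logits: torch.Tensor) -> torch.Tensor:
    # ST-MoE router z-loss: mean squared log-sum-exp of the router
    # logits. Penalizing large logits keeps routing numerically stable
    # without changing which expert the router prefers.
    z = torch.logsumexp(router_logits, dim=-1)  # (num_tokens,)
    return (z ** 2).mean()

# Hypothetical usage, with an assumed small coefficient:
# loss = task_loss + balancing_loss + 1e-3 * router_z_loss(router_logits)
```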
Input Output:
Input Format: Messages should be prefixed with " ***System:", " ***Query:", or " ***Response:" for system, user, and model messages respectively. The space before the triple asterisk is mandatory. A helper for assembling prompts in this format is sketched below.
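A minimal prompt-assembly sketch in Python. The function name and the newline used to separate messages are assumptions for illustration; the card specifies only the role prefixes and the mandatory leading space:

```python
def build_prompt(system: str, turns: list[tuple[str, str | None]]) -> str:
    # Assemble a prompt in the " ***Role:" format described above.
    # `turns` is a list of (user_message, model_message) pairs; pass None
    # as the final model message to leave the prompt open for generation.
    # Note the mandatory space before each triple asterisk.
    parts = [f" ***System:{system}"]
    for user_msg, model_msg in turns:
        parts.append(f" ***Query:{user_msg}")
        parts.append(f" ***Response:{model_msg if model_msg is not None else ''}")
    return "\n".join(parts)  # assumption: messages separated by newlines

prompt = build_prompt(
    "You are a creative roleplay partner.",
    [("Describe the tavern we just entered.", None)],
)
```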
LLM Name: MixtralRPChat ZLoss 3.0bpw H6 EXL2
Repository: https://huggingface.co/LoneStriker/MixtralRPChat-ZLoss-3.0bpw-h6-exl2
Base Model(s): mistralai/Mixtral-8x7B-v0.1
Required VRAM: 17.8 GB
Updated: 2025-09-23
Maintainer: LoneStriker
Model Type: mixtral
Model Files: 8.6 GB (1-of-3), 8.6 GB (2-of-3), 0.6 GB (3-of-3)
Supported Languages: en
Quantization Type: exl2
Model Architecture: MixtralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: bfloat16
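Given the EXL2 quantization type and the ~17.8 GB of shards listed above, a loading sketch using the exllamav2 library (the runtime that EXL2 quants target). API names follow exllamav2's published examples and can shift between versions, so treat this as an untested outline rather than a definitive recipe:

```python
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Download the 3.0bpw EXL2 shards (~17.8 GB total) from the repository above.
model_dir = snapshot_download("LoneStriker/MixtralRPChat-ZLoss-3.0bpw-h6-exl2")

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate cache as layers load
model.load_autosplit(cache)               # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8

# Prompt uses the " ***Role:" format described in Input Format above.
print(generator.generate_simple(" ***Query:Hello!\n ***Response:", settings, 128))
```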

Best Alternatives to MixtralRPChat ZLoss 3.0bpw H6 EXL2

Best Alternatives                   Context / RAM     Downloads   Likes
...oE V0.1 DPO F16 4.0bpw H6 EXL2   195K / 31.3 GB    7           0
...oE V0.1 DPO F16 5.0bpw H6 EXL2   195K / 38.8 GB    7           0
...2 Mixtral 8x22b 6.0bpw H8 EXL2   64K / 105.8 GB    5           1
WizardLM 2 8x22 EXL2 4.0bpw         64K / 70.9 GB     6           1
...M 2 8x22B Beige 2.4bpw H6 EXL2   64K / 42.7 GB     6           0
...M 2 8x22B Beige 3.0bpw H6 EXL2   64K / 53.2 GB     6           0
...M 2 8x22B Beige 4.0bpw H6 EXL2   64K / 70.8 GB     5           0
...rdLM 2 8x22B Beige EXL2 5.0bpw   64K / 88.4 GB     5           0
...M 2 8x22B Beige 5.0bpw H6 EXL2   64K / 88.5 GB     5           0
...B Instruct V0.1 8.0bpw H8 EXL2   64K / 120.2 GB    10          1



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124