MixtralOrochi8x7B GPTQ by TheBloke


4-bit · Autotrain compatible · Base model (quantized): smelborp/MixtralOrochi8x7B · en · GPTQ · High-intelligence · Mixtral · Quantized · Region: us · Safetensors · Uncensored

MixtralOrochi8x7B GPTQ Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
MixtralOrochi8x7B GPTQ (TheBloke/MixtralOrochi8x7B-GPTQ)

MixtralOrochi8x7B GPTQ Parameters and Internals

Model Type 
Mixtral, uncensored, high-intelligence
Use Cases 
Primary Use Cases:
Intelligent, unrestricted dialogue environments.
Limitations:
Caution should be exercised due to its uncensored nature.
Considerations:
Implementation of suitable safeguards and ethical guidelines recommended.
Additional Notes 
Leverages the combined knowledge and capabilities of merged models for high intelligence.
Supported Languages 
en (Full proficiency)
Responsible AI Considerations 
Mitigation Strategies:
Users are encouraged to implement their own content moderation or alignment strategies appropriate for their use case.
Input Output 
Input Format:
{prompt}
Accepted Modalities:
text
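
Since the input format is a bare {prompt} with no chat template, the model can be driven directly through transformers. Below is a minimal loading sketch; it assumes the accelerate, optimum, and auto-gptq packages are installed and enough GPU memory is available, and the prompt and generation settings are purely illustrative:

```python
# Minimal sketch: load the GPTQ checkpoint and complete a raw prompt.
# Assumes `pip install transformers accelerate optimum auto-gptq` and a
# GPU with roughly 24 GB of VRAM (the quantized weights alone are 23.8 GB);
# device_map="auto" lets accelerate shard or offload if one GPU is short.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/MixtralOrochi8x7B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The card specifies a bare "{prompt}" input format: no chat template,
# just plain text in, plain text out.
prompt = "Write a short overview of mixture-of-experts language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```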
LLM Name: MixtralOrochi8x7B GPTQ
Repository 🤗: https://huggingface.co/TheBloke/MixtralOrochi8x7B-GPTQ
Model Name: MixtralOrochi8x7B
Model Creator: Smelborp Bumblechump
Base Model(s): smelborp/MixtralOrochi8x7B
Model Size: 6.1b
Required VRAM: 23.8 GB
Updated: 2025-09-16
Maintainer: TheBloke
Model Type: mixtral
Model Files: 23.8 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: MixtralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: bfloat16
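
Most of the fields above (architecture, context length, vocabulary size, data type) come straight from the repository's config.json and tokenizer files, so they can be checked without pulling the 23.8 GB of weights. A small verification sketch; the expected values in the comments mirror the table:

```python
# Sketch: verify the metadata above from the Hub config alone; only
# config.json and the tokenizer files are downloaded, not the weights.
from transformers import AutoConfig, AutoTokenizer

model_id = "TheBloke/MixtralOrochi8x7B-GPTQ"

config = AutoConfig.from_pretrained(model_id)
print(config.architectures)            # ['MixtralForCausalLM']
print(config.max_position_embeddings)  # 32768 (context length)
print(config.vocab_size)               # 32000
print(config.torch_dtype)              # torch.bfloat16
# GPTQ repos typically embed their quantization settings in the config;
# this field may be absent on older repos, hence the defensive getattr.
print(getattr(config, "quantization_config", None))

tokenizer = AutoTokenizer.from_pretrained(model_id)
print(type(tokenizer).__name__)        # LlamaTokenizer / LlamaTokenizerFast
```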

Best Alternatives to MixtralOrochi8x7B GPTQ

Best Alternatives | Context / RAM | Downloads | Likes
...ixtral 8x7B Instruct V0.1 GPTQ | 32K / 23.8 GB | 258910 | 138
Mixtral 8x7B V0.1 GPTQ | 32K / 23.8 GB | 1647 | 127
...Hermes 2 Mixtral 8x7B DPO GPTQ | 32K / 23.8 GB | 2157 | 26
...tLM Mixtral 8x7B Instruct GPTQ | 32K / 23.8 GB | 1636 | 3
Dolphin 2.5 Mixtral 8x7b GPTQ | 32K / 23.8 GB | 571 | 13
Dolphin 2.7 Mixtral 8x7b GPTQ | 32K / 23.8 GB | 149 | 19
...Hermes 2 Mixtral 8x7B SFT GPTQ | 32K / 23.8 GB | 131 | 1
Bagel DPO 8x7b V0.2 GPTQ | 32K / 23.8 GB | 4 | 2
Open Gpt4 8x7B V0.2 GPTQ | 32K / 23.8 GB | 8 | 6
Sensualize Mixtral GPTQ | 32K / 23.8 GB | 18 | 5
Note: a green score (e.g., "73.2") means the model performs better than TheBloke/MixtralOrochi8x7B-GPTQ.
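
The alternatives list is essentially a filtered Hub query, so a live equivalent can be pulled programmatically. Here is a sketch using huggingface_hub, where the search string and sort order are illustrative rather than the exact query behind this table:

```python
# Sketch: query the Hugging Face Hub for Mixtral GPTQ models sorted by
# downloads, roughly reproducing the alternatives table above.
from huggingface_hub import HfApi

api = HfApi()
for m in api.list_models(search="mixtral gptq", sort="downloads", direction=-1, limit=10):
    print(f"{m.id}: {m.downloads} downloads, {m.likes} likes")
```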

Rank the MixtralOrochi8x7B GPTQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124