Notux 8x7b V1 GGUF by TheBloke


Base model: argilla/notux-8x7b-... (quantized from argilla/n...)
Tags: conversational, dataset:argilla/ultrafeedback-..., de, dpo, en, es, fr, gguf, it, mixtral, moe, preference, quantized, region:us, rlaif, ultrafeedback

Notux 8x7b V1 GGUF Benchmarks

Benchmark chart (not reproduced here): how Notux 8x7b V1 GGUF (TheBloke/notux-8x7b-v1-GGUF) compares to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4"), with scores given as percentages (nn.n%).

Notux 8x7b V1 GGUF Parameters and Internals

Model Type: Pretrained generative Sparse Mixture of Experts (MoE)
Additional Notes: Fine-tuned with Direct Preference Optimization (DPO); ranked top of the MoE category on the Hugging Face Open LLM Leaderboard.
Supported Languages: English, Spanish, Italian, German, French
Training Details:
  Data Sources: argilla/ultrafeedback-binarized-preferences-cleaned
  Methodology: Preference tuning with DPO (see the sketch below)
  Training Time: ~10 hours for 1 epoch
  Hardware Used: 8 x H100 80GB
  Model Architecture: Sparse Mixture of Experts
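For readers unfamiliar with the methodology above, the sketch below shows what DPO preference tuning on the listed dataset looks like with the TRL library. It is a minimal illustration, not Argilla's actual training script: the base checkpoint, hyperparameters, and the exact DPOTrainer/DPOConfig argument names are assumptions and differ across trl versions.

```python
# Minimal DPO preference-tuning sketch -- NOT the original Argilla training script.
# Assumes transformers, trl and datasets are installed, and enough GPU memory for
# the model (the original run used 8 x H100 80GB). Hyperparameters are illustrative,
# and DPOTrainer argument names vary across trl versions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"  # the instruct model notux-8x7b-v1 starts from

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Preference pairs with "prompt", "chosen" and "rejected" fields.
train_dataset = load_dataset(
    "argilla/ultrafeedback-binarized-preferences-cleaned", split="train"
)

config = DPOConfig(
    output_dir="notux-dpo-sketch",
    beta=0.1,                        # DPO temperature; the real value is an assumption
    num_train_epochs=1,              # matches the "~10 hr for 1 epoch" note above
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # older trl releases call this `tokenizer=`
)
trainer.train()
```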
LLM Name: Notux 8x7b V1 GGUF
Repository 🤗: https://huggingface.co/TheBloke/notux-8x7b-v1-GGUF
Model Name: Notux 8X7B v1
Model Creator: Argilla
Base Model(s): Notux 8x7b V1 (argilla/notux-8x7b-v1)
Required VRAM: 15.6 GB
Updated: 2025-10-10
Maintainer: TheBloke
Model Type: mixtral
Model Files: 15.6 GB, 20.4 GB, 20.4 GB, 20.3 GB, 26.4 GB, 26.4 GB, 26.4 GB, 32.2 GB, 32.2 GB, 32.2 GB, 38.4 GB, 49.6 GB
Supported Languages: en, de, es, fr, it
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: apache-2.0
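The file sizes listed above correspond to the repository's different GGUF quantization levels. As a rough usage sketch, one quantization can be downloaded and run locally with huggingface_hub and llama-cpp-python; the .gguf filename below follows TheBloke's usual naming scheme but is an assumption, so check the repository's file list for the exact name.

```python
# Download one GGUF quantization and run it locally -- an illustrative sketch.
# Assumes: pip install huggingface_hub llama-cpp-python
# The .gguf filename is an assumption; check the repo's file list for exact names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="TheBloke/notux-8x7b-v1-GGUF",
    filename="notux-8x7b-v1.Q4_K_M.gguf",  # roughly the 26.4 GB tier in the file list above
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows; use 0 for CPU-only
)

out = llm(
    "Explain what a sparse Mixture of Experts model is in two sentences.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```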

Best Alternatives to Notux 8x7b V1 GGUF

Best Alternatives | Context / RAM | Downloads | Likes
ComicBot V.2 Gguf | 32K / 5 GB | 39 | 0
Qwen3 Medical GRPO GGUF | 0K / 1.7 GB | 1038 | 2
Gemma2 WizardLM | 0K / 5.2 GB | 10 | 0
...ixtral 8x7B Instruct V0.1 GGUF | 0K / 15.6 GB | 25473 | 639
Phi 2 GGUF | 0K / 1.2 GB | 109438 | 228
Marco O1 GGUF | 0K / 3 GB | 228 | 6
Mixtral 8x7B V0.1 GGUF | 0K / 15.6 GB | 7823 | 430
Dolphin 2.5 Mixtral 8x7b GGUF | 0K / 15.6 GB | 11283 | 303
Dolphin 2.7 Mixtral 8x7b GGUF | 0K / 15.6 GB | 9237 | 147
GOAT Llama3.1 V0.1 | 0K / 0.2 GB | 2 | 3


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124