Sensualize Mixtral GGUF by TheBloke



Sensualize Mixtral GGUF Benchmarks

nn.n% — How the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o") or GPT-4 ("gpt4").
Sensualize Mixtral GGUF (TheBloke/Sensualize-Mixtral-GGUF)

Sensualize Mixtral GGUF Parameters and Internals

Model Type: mixtral

Use Cases
Primary Use Cases: Roleplay-based models, specifically ERP
Limitations: Finicky with settings

Additional Notes
Trained on 80M tokens over 1 epoch, with ZLoss on a Megablocks-based fork of transformers. An experimental and finicky model; the Universal-Light or Universal-Creative presets in SillyTavern are recommended for better performance.

Training Details
Data Sources: NobodyExistsOnTheInternet/full120k, the author's own NSFW Instruct & De-Alignment data
Data Volume: 80M tokens
Methodology: Trained with ZLoss on a Megablocks-based fork of transformers, using the Alpaca prompt format
Training Time: 12 hours on 2x A100s
Hardware Used: 2x A100s
Input / Output
Input Format:
### Instruction:
{system_message}
### Input:
{prompt}
### Response:
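As a sketch, the Alpaca-style template above can be filled in with a small helper. The function name is hypothetical, and the exact blank-line spacing between sections is an assumption (the page shows only the section headers):

```python
def build_prompt(system_message: str, prompt: str) -> str:
    """Fill the Alpaca-style template used by this model.

    Hypothetical helper, not part of any library; the exact
    whitespace between sections may differ from what the
    model was trained on.
    """
    return (
        "### Instruction:\n"
        f"{system_message}\n"
        "### Input:\n"
        f"{prompt}\n"
        "### Response:\n"
    )

# Example usage:
text = build_prompt(
    "You are a creative roleplay assistant.",
    "Describe the tavern the party just entered.",
)
```

The generation then continues from the trailing "### Response:" marker.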
LLM Name: Sensualize Mixtral GGUF
Repository: 🤗 https://huggingface.co/TheBloke/Sensualize-Mixtral-GGUF
Model Name: Sensualize Mixtral
Model Creator: Sao10K
Base Model(s): Sensualize Mixtral Bf16 (Sao10K/Sensualize-Mixtral-bf16)
Required VRAM: 15.6 GB
Updated: 2025-07-31
Maintainer: TheBloke
Model Type: mixtral
Model Files: 15.6 GB, 20.4 GB, 26.4 GB, 26.4 GB, 32.2 GB, 32.2 GB, 38.4 GB, 49.6 GB
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: cc-by-nc-4.0
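The file sizes above correspond to different quantization levels of the same weights. As a rough sketch, the effective bits per weight can be estimated from each file's size; this assumes Mixtral 8x7B's ~46.7B total parameters (a figure not stated on this page) and ignores metadata overhead and the GB/GiB distinction:

```python
# Rough bits-per-weight estimate for each GGUF file size listed above.
# ASSUMPTION: ~46.7e9 total parameters (Mixtral 8x7B); quantization
# metadata overhead and GB-vs-GiB rounding are ignored.
TOTAL_PARAMS = 46.7e9

def bits_per_weight(file_size_gb: float, n_params: float = TOTAL_PARAMS) -> float:
    """Convert a file size in (decimal) GB to approximate bits per weight."""
    return file_size_gb * 1e9 * 8 / n_params

for size in [15.6, 20.4, 26.4, 32.2, 38.4, 49.6]:
    print(f"{size:5.1f} GB -> ~{bits_per_weight(size):.1f} bits/weight")
```

Under these assumptions, the 15.6 GB file works out to roughly 2.7 bits per weight and the 49.6 GB file to roughly 8.5, consistent with a spread from heavy 2-bit-class quantization up to 8-bit.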

Best Alternatives to Sensualize Mixtral GGUF

Best Alternatives                          Context / RAM    Downloads   Likes
ComicBot V.2 Gguf                          32K / 5 GB       47          0
Gemma2 WizardLM                            0K / 5.2 GB      21          0
...ixtral 8x7B Instruct V0.1 GGUF          0K / 15.6 GB     44110       631
Phi 2 GGUF                                 0K / 1.2 GB      178269      221
Marco O1 GGUF                              0K / 3 GB        78          6
Dolphin 2.5 Mixtral 8x7b GGUF              0K / 15.6 GB     11631       303
Mixtral 8x7B V0.1 GGUF                     0K / 15.6 GB     6851        430
Dolphin 2.7 Mixtral 8x7b GGUF              0K / 15.6 GB     13741       144
Open Gpt4 8x7B GGUF                        0K / 15.6 GB     13909       22
...Hermes 2 Mixtral 8x7B DPO GGUF          0K / 17.3 GB     7197        65
Note: a green Score (e.g. "73.2") means that the model outperforms TheBloke/Sensualize-Mixtral-GGUF.

Rank the Sensualize Mixtral GGUF Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 50263 in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124