NeuralDaredevil 8B Abliterated by mlabonne


Tags: Autotrain compatible, Conversational, Dataset: mlabonne/orpo-dpo-mix-..., DPO, Endpoints compatible, Llama, Model-index, Region: us, Safetensors, Sharded, TensorFlow

NeuralDaredevil 8B Abliterated Benchmarks

Benchmark scores (nn.n%) show how NeuralDaredevil 8B Abliterated (mlabonne/NeuralDaredevil-8B-abliterated) compares to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

NeuralDaredevil 8B Abliterated Parameters and Internals

Model Type: text-generation
Use Cases:
Areas: research, commercial applications
Applications: role-playing
Additional Notes: Quantized versions in several formats are available from community contributors.
Training Details:
Data Sources: mlabonne/orpo-dpo-mix-40k
Methodology: DPO fine-tuning (sketched below)
Release Notes: This model is a DPO fine-tune of mlabonne/Daredevil-8B-abliterated, trained on the mlabonne/orpo-dpo-mix-40k dataset to recover the performance lost during the earlier abliteration step.
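The model card only states that the training stage was DPO on mlabonne/orpo-dpo-mix-40k; it does not specify the framework or hyperparameters. The following is a minimal sketch of such a run using TRL's DPOTrainer as a representative implementation, with illustrative hyperparameters that are assumptions rather than the settings used for the released model.

```python
# Hedged sketch of a DPO fine-tune of the abliterated base model on
# mlabonne/orpo-dpo-mix-40k. Hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "mlabonne/Daredevil-8B-abliterated"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# The dataset provides prompt / chosen / rejected triples, the format DPO expects.
dataset = load_dataset("mlabonne/orpo-dpo-mix-40k", split="train")

config = DPOConfig(
    output_dir="neuraldaredevil-dpo",
    beta=0.1,                        # assumed KL-penalty strength
    learning_rate=5e-6,              # assumed
    per_device_train_batch_size=2,   # assumed; depends on available VRAM
    num_train_epochs=1,              # assumed
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,      # older TRL releases take tokenizer= instead
)
trainer.train()
```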
LLM Name: NeuralDaredevil 8B Abliterated
Repository: 🤗 https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated
Model Size: 8B
Required VRAM: 16.1 GB
Updated: 2025-06-13
Maintainer: mlabonne
Model Type: llama
Model Files: 5.0 GB (1-of-4), 5.0 GB (2-of-4), 4.9 GB (3-of-4), 1.2 GB (4-of-4)
Model Architecture: LlamaForCausalLM
License: llama3
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.2
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 128256
Torch Data Type: float16
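Given the specifications above (LlamaForCausalLM, float16 weights, 8192-token context, roughly 16.1 GB of VRAM), the model can be loaded directly with the Hugging Face transformers library. A minimal sketch, assuming the repository bundles a standard Llama 3 chat template (the prompt text is illustrative):

```python
# Minimal sketch: load and prompt mlabonne/NeuralDaredevil-8B-abliterated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/NeuralDaredevil-8B-abliterated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # matches the reported torch dtype
    device_map="auto",
)

# Assumes a chat template is shipped with the repo (it is tagged Conversational).
messages = [{"role": "user", "content": "Summarize what abliteration does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```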

Quantized Models of NeuralDaredevil 8B Abliterated

Model | Likes | Downloads | VRAM
... Analysis June 05 2024 1 Epoch | 0 | 89 | 16 GB
...alDaredevil 8B Abliterated AWQ | 0 | 22 | 5 GB
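The AWQ build listed above fits in roughly 5 GB of VRAM and can be loaded through transformers when the autoawq package is installed. A minimal sketch; the repository id below is a placeholder, since the table truncates the full name:

```python
# Hedged sketch: load a 4-bit AWQ quantization of the model.
# The repo id is a placeholder -- substitute the actual AWQ repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

awq_repo = "<user>/NeuralDaredevil-8B-abliterated-AWQ"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(awq_repo)
model = AutoModelForCausalLM.from_pretrained(awq_repo, device_map="auto")

prompt = "Explain direct preference optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```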

Best Alternatives to NeuralDaredevil 8B Abliterated

Best Alternatives | Context / RAM | Downloads | Likes
...otron 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 3994 | 108
UltraLong Thinking | 4192K / 16.1 GB | 1130 | 2
...a 3.1 8B UltraLong 4M Instruct | 4192K / 32.1 GB | 176 | 24
...otron 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 1208 | 15
...a 3.1 8B UltraLong 2M Instruct | 2096K / 32.1 GB | 875 | 9
Zero Llama 3.1 8B Beta6 | 1048K / 16.1 GB | 1615 | 1
...otron 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 2341 | 45
...a 3.1 8B UltraLong 1M Instruct | 1048K / 32.1 GB | 1387 | 29
...xis Bookwriter Llama3.1 8B Sft | 1048K / 16.1 GB | 63 | 4
....1 1million Ctx Dark Planet 8B | 1048K / 32.3 GB | 95 | 2


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124