Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2 by LoneStriker


Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2 is an open-source language model by LoneStriker. Features: 7b LLM, VRAM: 7.4GB, Context: 32K, License: apache-2.0, Quantized, LLM Explorer Score: 0.11.
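The "SLERP" in the model's name comes from its base model, mlabonne/Marcoro14-7B-slerp, which was produced by spherical linear interpolation (slerp) of two parent models' weights. A minimal sketch of slerp on flat weight vectors, in pure Python; real merge tools (e.g. mergekit) operate per-tensor and handle edge cases more carefully, so this is illustrative only:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between weight vectors v0 and v1."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))         # clamp for numerical safety
    omega = math.acos(dot)                 # angle between the vectors
    if abs(math.sin(omega)) < eps:         # near-parallel: fall back to lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Halfway between two orthogonal unit vectors stays on the unit sphere
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # → [0.707..., 0.707...]
```

Unlike plain averaging, slerp keeps the interpolated point on the arc between the two weight vectors, which is why merge recipes often prefer it when the parents' weight directions differ.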

Tags: Base model:finetune:mlabonne/m... · Base model:mlabonne/marcoro14-... · Dataset:argilla/distilabel-int... · Endpoints compatible · Exl2 · Mistral · Quantized · Region:us · Safetensors

Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2 Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2 Parameters and Internals

Model Type: text generation
Use Cases
Primary Use Cases: basic inference, further fine-tuning
Training Details
Data Sources: https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs
Methodology: fine-tuned with DPO using Labonne's Google Colab notebook for Mistral 7B
Context Length: 1024
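The methodology above is DPO (Direct Preference Optimization) over chosen/rejected answer pairs. The core loss can be sketched in pure Python; the per-sequence log-probabilities below are hypothetical stand-ins for values produced by the policy model and a frozen reference model:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    policy_margin = policy_chosen_logp - policy_rejected_logp
    ref_margin = ref_chosen_logp - ref_rejected_logp
    logits = beta * (policy_margin - ref_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)

# Policy prefers the chosen answer more strongly than the reference -> low loss
low = dpo_loss(-10.0, -20.0, -12.0, -18.0)   # policy margin 10 vs ref margin 6
# Policy prefers the rejected answer -> high loss
high = dpo_loss(-20.0, -10.0, -12.0, -18.0)
print(low < high)  # → True
```

The loss pushes the policy to widen its chosen-vs-rejected margin relative to the reference model, without any explicit reward model; in practice this is what libraries like TRL's DPOTrainer compute batch-wise over token log-probabilities.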
LLM Name: Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2
Repository: https://huggingface.co/LoneStriker/kellemar-DPO-Orca-Distilled-7B-SLERP-8.0bpw-h8-exl2
Base Model(s): Marcoro14 7B Slerp (mlabonne/Marcoro14-7B-slerp)
Model Size: 7b
Required VRAM: 7.4 GB
Updated: 2026-04-26
Maintainer: LoneStriker
Model Type: mistral
Model Files: 7.4 GB
Quantization Type: exl2
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
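The 7.4 GB figure is consistent with back-of-the-envelope quantization arithmetic: at 8.0 bits per weight, roughly 7.24 billion parameters (Mistral 7B's approximate count, assumed here for illustration) occupy about 7.2 GB, before quantization metadata, the 8-bit output head ("H8"), and the KV cache. A rough sketch:

```python
def quantized_weight_gb(n_params, bits_per_weight):
    """Approximate size of quantized weights in GB (10^9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# Mistral-7B has roughly 7.24e9 parameters (assumption for illustration)
size = quantized_weight_gb(7.24e9, 8.0)
print(round(size, 2))  # ≈ 7.24 GB before metadata and context cache
```

Actual VRAM at inference time will be higher, since the full 32K context cache must also fit alongside the weights.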

Best Alternatives to Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2

Best Alternatives | Context / RAM | Downloads | Likes
...t 3.5 0106 128K 4.0bpw H6 EXL2 | 128K / 3.9 GB | 4 | 1
...t 3.5 0106 128K 8.0bpw H8 EXL2 | 128K / 7.4 GB | 1 | 1
Mistral 7B V0.3 Bnb 4bit | 32K / 4.1 GB | 351433 | 22
Mistral 7B Instruct V0.2 Fp16 | 32K / 14.4 GB | 26 | 0
Mistral 7B Instruct V0.2 4bit | 32K / 4.3 GB | 171 | 1
...tral 7B Instruct V0.3 Bnb 4bit | 32K / 4.1 GB | 59575 | 36
Mistral 7B Instruct V0.2 8bit | 32K / 7.6 GB | 4 | 1
...tral 7B Instruct V0.2 Bnb 4bit | 32K / 4.1 GB | 12752 | 36
Mixtral V0.3 Full 16bit | 32K / 14.5 GB | 6 | 0
... 7B Instruct V0.3 ContinuedFine | 32K / 14.5 GB | 13 | 0
Note: green Score (e.g. "73.2") means that the model is better than LoneStriker/kellemar-DPO-Orca-Distilled-7B-SLERP-8.0bpw-h8-exl2.

Rank the Kellemar DPO Orca Distilled 7B SLERP 8.0bpw H8 EXL2 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.