Kellemar DPO Orca Distilled 7B SLERP by decruz07

Kellemar DPO Orca Distilled 7B SLERP is an open-source language model by decruz07: a 7B-parameter, Mistral-based LLM with a 32K context window, requiring about 14.4 GB of VRAM and released under the cc-by-nc-4.0 license. Benchmark scores are tabulated below.

Tags: Base model (finetune): mlabonne/Marcoro14-7B-slerp · Dataset: argilla/distilabel-intel-orca-dpo-pairs · Endpoints compatible · Mistral · Region: US · Safetensors · Sharded · Tensorflow

Kellemar DPO Orca Distilled 7B SLERP Benchmarks

Model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP

Benchmark | Score
ARC | 70.5
HellaSwag | 87.6
MMLU | 65.3
TruthfulQA | 65.0
WinoGrande | 81.9
GSM8K | 72.0
HF Score (average) | 73.7
LLM Explorer Score | 0.12
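
If the HF Score is the Open LLM Leaderboard average of the six task scores (an assumption about how the page computes it), the numbers are self-consistent; a quick check in Python:

```python
# Assumption: "HF Score" is the arithmetic mean of the six task scores above.
scores = {"ARC": 70.5, "HellaSwag": 87.6, "MMLU": 65.3,
          "TruthfulQA": 65.0, "WinoGrande": 81.9, "GSM8K": 72.0}
print(round(sum(scores.values()) / len(scores), 1))  # -> 73.7
```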

Kellemar DPO Orca Distilled 7B SLERP Parameters and Internals

Use Cases
Primary Use Cases:
basic inference; potential further fine-tuning

Training Details
Data Sources:
argilla/distilabel-intel-orca-dpo-pairs
Methodology:
Fine-tuned with the TRL DPOTrainer, using Hugging Face TrainingArguments with gradient accumulation (see the sketch below).
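
A minimal sketch of that recipe, assuming the TRL DPOTrainer API (circa TRL 0.7 / Transformers 4.36) with illustrative hyperparameters; the dataset column mapping and every training setting here are assumptions, not the author's recorded configuration:

```python
# Hypothetical DPO fine-tuning sketch, not the author's actual script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "mlabonne/Marcoro14-7B-slerp"  # base model listed on this card

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Dataset listed on this card. DPOTrainer expects prompt/chosen/rejected
# columns; renaming "input" -> "prompt" is an assumption about the schema.
ds = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")
ds = ds.rename_columns({"input": "prompt"})

args = TrainingArguments(
    output_dir="kellemar-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,  # the gradient accumulation noted above
    learning_rate=5e-6,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model,
    None,                 # ref_model=None: TRL clones a frozen reference model
    args=args,
    beta=0.1,             # DPO temperature (illustrative)
    train_dataset=ds,
    tokenizer=tokenizer,
)
trainer.train()
```
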
LLM Name: Kellemar DPO Orca Distilled 7B SLERP
Repository: https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
Base Model(s): Marcoro14 7B Slerp (mlabonne/Marcoro14-7B-slerp)
Model Size: 7B
Required VRAM: 14.4 GB
Updated: 2026-04-10
Maintainer: decruz07
Model Type: mistral
Model Files: 4.9 GB (1-of-3), 5.0 GB (2-of-3), 4.5 GB (3-of-3)
Model Architecture: MistralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: float16
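
Given those settings, the basic-inference use case might look like the following sketch (the prompt and generation parameters are illustrative):

```python
# Inference sketch: load the checkpoint with the settings listed above
# (MistralForCausalLM, float16 weights, 32768-token context).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP"

tokenizer = AutoTokenizer.from_pretrained(repo)  # resolves to LlamaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # float16 weights -> roughly 14.4 GB of VRAM
    device_map="auto",          # places the three safetensors shards automatically
)

prompt = "Explain DPO fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```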

Best Alternatives to Kellemar DPO Orca Distilled 7B SLERP

Best Alternatives | Context / RAM | Downloads | Likes
...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 254 | 20
MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 8103 | 54
SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 14 | 1
SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1
...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 3779 | 16
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 7850 | 16
Hebrew Mistral 7B 200K | 256K / 30 GB | 1251 | 15
Astral 256K 7B V2 | 250K / 14.4 GB | 5 | 0
Astral 256K 7B | 250K / 14.4 GB | 5 | 0

Original data from HuggingFace, OpenCompass and various public git repos.