Lelantos Maid DPO 7B by SanjiWatsuki


Tags: Merged Model · Autotrain compatible · Endpoints compatible · Mistral · Safetensors · Sharded · TensorFlow · Region: US · Base models: SanjiWatsuki/Lelantos-DPO-7B, NeverSleep/Noromaid-7B-0.4-DPO

Lelantos Maid DPO 7B Benchmarks

Scores are percentages showing how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Lelantos Maid DPO 7B (SanjiWatsuki/Lelantos-Maid-DPO-7B)

Lelantos Maid DPO 7B Parameters and Internals

Training Details
Methodology: mergekit-based merging of two models.
Model Architecture: customized merge architecture using parts from two different models.
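
The page does not publish the actual merge recipe, so the sketch below is only illustrative: a minimal mergekit run combining the two listed base models. The merge method (SLERP), layer ranges, and interpolation weight are assumptions, not SanjiWatsuki's actual configuration.

```python
import subprocess
from pathlib import Path

# Hypothetical mergekit config: merge method, layer ranges, and the
# interpolation weight t are illustrative assumptions, not the published recipe.
CONFIG = """\
slices:
  - sources:
      - model: SanjiWatsuki/Lelantos-DPO-7B
        layer_range: [0, 32]
      - model: NeverSleep/Noromaid-7B-0.4-DPO
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Lelantos-DPO-7B
parameters:
  t: 0.5          # equal blend of the two models (assumed)
dtype: bfloat16   # matches the published torch dtype
"""

Path("merge-config.yml").write_text(CONFIG)

# mergekit-yaml is mergekit's standard CLI entry point: <config> <output_dir>
subprocess.run(["mergekit-yaml", "merge-config.yml", "Lelantos-Maid-DPO-7B"], check=True)
```

The resulting output directory contains sharded safetensors plus config and tokenizer files, which can be uploaded as a regular Hugging Face repository.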
LLM Name: Lelantos Maid DPO 7B
Repository: https://huggingface.co/SanjiWatsuki/Lelantos-Maid-DPO-7B
Base Model(s): SanjiWatsuki/Lelantos-DPO-7B, NeverSleep/Noromaid-7B-0.4-DPO
Merged Model: Yes
Model Size: 7B
Required VRAM: 14.4 GB
Updated: 2025-08-18
Maintainer: SanjiWatsuki
Model Type: mistral
Model Files: 1.9 GB (1-of-8), 1.9 GB (2-of-8), 1.9 GB (3-of-8), 1.9 GB (4-of-8), 1.9 GB (5-of-8), 2.0 GB (6-of-8), 2.0 GB (7-of-8), 0.9 GB (8-of-8)
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.35.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: bfloat16
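
Given the MistralForCausalLM architecture, bfloat16 weights (about 14.4 GB of VRAM), and the 32768-token context window, the full-precision model loads with standard transformers calls. A minimal sketch, assuming a GPU with enough memory (the prompt is just a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SanjiWatsuki/Lelantos-Maid-DPO-7B"

# bfloat16 matches the published torch dtype; the full model needs ~14.4 GB of VRAM.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a short greeting."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)  # context window is 32768 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```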

Quantized Models of Lelantos Maid DPO 7B

Model | Likes | Downloads | VRAM
Lelantos Maid DPO 7B GGUF | 5 | 130 | 2 GB
Lelantos Maid DPO 7B GPTQ | 5 | 18 | 4 GB
Lelantos Maid DPO 7B AWQ | 2 | 6 | 4 GB
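
A quantized GGUF build runs on much more modest hardware than the 14.4 GB full-precision model. A minimal llama-cpp-python sketch, assuming a locally downloaded GGUF file (the filename and quant level below are hypothetical):

```python
from llama_cpp import Llama

# Path is hypothetical: point this at whichever GGUF quant you downloaded.
llm = Llama(
    model_path="./lelantos-maid-dpo-7b.Q4_K_M.gguf",
    n_ctx=4096,        # raise toward 32768 if you have the memory for the KV cache
    n_gpu_layers=-1,   # offload all layers to GPU; set to 0 for CPU-only inference
)

out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```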

Best Alternatives to Lelantos Maid DPO 7B

Best Alternatives | Context / RAM | Downloads | Likes
...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 231 | 18
MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 9416 | 50
SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 93 | 1
SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1
...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 3779 | 16
MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 8338 | 16
Hebrew Mistral 7B 200K | 256K / 30 GB | 1320 | 15
Astral 256K 7B V2 | 250K / 14.4 GB | 5 | 0
Astral 256K 7B | 250K / 14.4 GB | 5 | 0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124