Cdn 1st Iteration DPO 700sm 3ep Merged by braginpawel


Cdn 1st Iteration DPO 700sm 3ep Merged is an open-source language model by braginpawel. Key facts: 22B parameters, 44.7 GB of VRAM required, 32K context window, apache-2.0 license, LLM Explorer Score 0.19.

Tags: Autotrain compatible · Base model (finetune): TheDrummer/Cydonia-22B-v1.2 · Conversational · DPO · English (en) · Endpoints compatible · Mistral · Region: US · Safetensors · Sharded · TensorFlow · TRL · Unsloth

Cdn 1st Iteration DPO 700sm 3ep Merged Benchmarks

Scores ("nn.n%") show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Model under evaluation: Cdn 1st Iteration DPO 700sm 3ep Merged (braginpawel/cdn-1st-iteration-dpo-700sm-3ep-merged)

Cdn 1st Iteration DPO 700sm 3ep Merged Parameters and Internals

LLM Name: Cdn 1st Iteration DPO 700sm 3ep Merged
Repository: https://huggingface.co/braginpawel/cdn-1st-iteration-dpo-700sm-3ep-merged
Base Model(s): TheDrummer/Cydonia-22B-v1.2
Model Size: 22B
Required VRAM: 44.7 GB
Updated: 2025-09-06
Maintainer: braginpawel
Model Type: mistral
Model Files: 9 shards (1-of-9: 4.9 GB, 2-of-9: 5.0 GB, 3-of-9: 5.0 GB, 4-of-9: 4.9 GB, 5-of-9: 5.0 GB, 6-of-9: 5.0 GB, 7-of-9: 4.9 GB, 8-of-9: 5.0 GB, 9-of-9: 5.0 GB)
Supported Languages: en
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.50.3
Tokenizer Class: LlamaTokenizer
Padding Token: [control_748]
Vocabulary Size: 32768
Torch Data Type: bfloat16
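
The parameters above map directly onto a standard Transformers loading call. The following is a minimal sketch, assuming transformers (4.50.3 per the card), torch, and accelerate are installed and roughly 44.7 GB of GPU memory is available for the bfloat16 weights; the prompt is purely illustrative.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "braginpawel/cdn-1st-iteration-dpo-700sm-3ep-merged"

# The tokenizer resolves to LlamaTokenizer, as listed in the parameters above.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Load the MistralForCausalLM weights in bfloat16, matching the stored dtype;
# device_map="auto" spreads the nine safetensors shards across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative generation within the 32768-token context window.
inputs = tokenizer("Hello! Introduce yourself briefly.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))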

Best Alternatives to Cdn 1st Iteration DPO 700sm 3ep Merged

Best Alternatives                     Context / RAM      Downloads / Likes
MS Schisandra 22B V0.2                128K / 44.7 GB     109
...ntheon RP Pure 1.6.2 22B Small     128K / 44.7 GB     1133
MS Meadowlark 22B                     128K / 44.7 GB     2216
...rker The Final Abomination 22B     128K / 44.7 GB     66
...er The Final Transgression 22B     128K / 44.7 GB     33
...Darker The Final Directive 22B     128K / 44.7 GB     60
The Omega Directive M 22B V1.0        128K / 44.7 GB     122
Retrograde Omega M 22B V1.0           128K / 44.7 GB     50
... V4x1.6.2RP Cydonia VXXX 22B 8     128K / 44.7 GB     55
MS Inky 2409 22B                      128K / 44.7 GB     70

Rank the Cdn 1st Iteration DPO 700sm 3ep Merged Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.