Yi Ko 6B DPO V6 by GAI-LLM


Tags: autotrain-compatible, endpoints-compatible, instruct, ko, llama, pytorch, region:us, sharded
Model Card on HF 🤗: https://huggingface.co/GAI-LLM/Yi-Ko-6B-dpo-v6


Yi Ko 6B DPO V6 Parameters and Internals

Model Type: text-generation
Supported Languages: ko (proficiency unknown)
Training Details:
- Data Sources: open Korean datasets, trained with a mixed-strategy DPO pipeline (a rough sketch follows below)
- Hardware Used: 8x NVIDIA A100 80GB GPUs
- Model Architecture: auto-regressive language model based on the LLaMA-2 transformer architecture
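The card states only that the model was tuned on open Korean data with a mixed-strategy DPO setup; no recipe is published. As a rough illustration of what DPO fine-tuning looks like, here is a minimal sketch using Hugging Face's TRL library. The base checkpoint name, the dataset rows, and every hyperparameter are assumptions for illustration, and the DPOTrainer signature differs across trl versions.

```python
# Hypothetical sketch only: the card does not publish the actual DPO recipe.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "beomi/Yi-Ko-6B"  # assumed base checkpoint; the card does not name it

model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO trains on preference triples: a prompt, a preferred answer, a rejected answer.
train_dataset = Dataset.from_dict({
    "prompt": ["대한민국의 수도는 어디인가요?"],
    "chosen": ["대한민국의 수도는 서울입니다."],
    "rejected": ["잘 모르겠습니다."],
})

args = DPOConfig(
    output_dir="yi-ko-6b-dpo",
    beta=0.1,                       # strength of the KL penalty toward the reference model
    per_device_train_batch_size=1,  # illustrative; real runs used 8x A100 80GB
)

# Recent trl versions accept `processing_class=`; older ones use `tokenizer=`.
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```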
Input/Output:
- Input Format: text only
- Output Format: generated text
LLM Name: Yi Ko 6B DPO V6
Repository 🤗: https://huggingface.co/GAI-LLM/Yi-Ko-6B-dpo-v6
Model Size: 6B
Required VRAM: 26.4 GB
Updated: 2025-09-12
Maintainer: GAI-LLM
Model Type: llama
Instruction-Based: Yes
Model Files: 10.0 GB (1 of 3), 9.9 GB (2 of 3), 6.5 GB (3 of 3)
Supported Languages: ko
Model Architecture: LlamaForCausalLM
License: cc-by-nc-4.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.32.0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 46336
Torch Data Type: bfloat16
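Given the metadata above (LlamaForCausalLM architecture, LlamaTokenizer, bfloat16 weights in three shards, 2048-token context window), loading the model with the standard transformers API should look like the following minimal sketch. The prompt and generation settings are illustrative, not part of the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "GAI-LLM/Yi-Ko-6B-dpo-v6"

# LlamaTokenizer with a 46,336-entry vocabulary; padding token is </s>.
tokenizer = AutoTokenizer.from_pretrained(repo)

# Weights are bfloat16, sharded into three files (26.4 GB total);
# device_map="auto" spreads the shards across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Text in, text out; keep prompt plus generation within the 2048-token context.
prompt = "대한민국의 수도는"  # illustrative prompt ("The capital of South Korea is")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```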

Best Alternatives to Yi Ko 6B DPO V6

Best Alternatives                    Context / RAM    Downloads  Likes
Llama 3 6B Instruct V0.1             8K / 25.1 GB     8          1
...nstruct Yi 6B Dolly CodeAlpaca    4K / 12.1 GB     1764       0
Instruct Yi 6B Dolly15K              4K / 12.1 GB     1759       0
AIFT Yi Ko 6B V1.11                  2K / 24.6 GB     5          0
...uare Instruct Yi Ko 6B V0.9.26    2K / 12.4 GB     870        0
DAVinCI Yi Ko 6B V1.1                2K / 24.6 GB     6          0
DAVinCI Yi Ko 6B V0.8                2K / 24.6 GB     15         0
Yi Ko 6B Instruct V1.0               2K / 12.4 GB     234        1
DAVinCI Yi Ko 6B V0.61 Ff E1         2K / 24.6 GB     7          0
...uare Instruct Yi Ko 6B V0.9.30    2K / 12.4 GB     399        0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124