Faro Yi 9B DPO by wenbopan


Faro Yi 9B DPO is an open-source language model by wenbopan. Features: 9B LLM, VRAM: 17.7 GB, Context: 32K, License: MIT, Merged. Scores: HF Score 68.8, LLM Explorer Score 0.18, ARC 64.2, HellaSwag 78.9, MMLU 70.7, TruthfulQA 56.3, WinoGrande 77.8, GSM8K 64.8.

Merged Model · Arxiv: 2303.08774 · Conversational · Dataset: argilla/ultrafeedback-... · Dataset: intel/orca dpo pairs · Dataset: jondurbin/truthy-dpo-v... · Dataset: wenbopan/chinese-dpo-p... · Deploy: azure · En · Endpoints compatible · Llama · Region: us · Safetensors · Sharded · Tensorflow · Zh
Model Card on HF 🤗: https://huggingface.co/wenbopan/Faro-Yi-9B-DPO

Faro Yi 9B DPO Benchmarks

Faro Yi 9B DPO (wenbopan/Faro-Yi-9B-DPO)

Faro Yi 9B DPO Parameters and Internals

Model Type: text generation
Use Cases:
  Areas: research, commercial applications
  Applications: text generation, question answering
  Primary Use Cases: chatbots, virtual assistants
Additional Notes: Supports 4-bit AWQ quantization, which extends the input length to 160K at some cost in performance.
Supported Languages: en (proficient), zh (proficient)
Input Output:
  Input Format: ChatML template
  Accepted Modalities: text
  Output Format: text
  Performance Tips: For longer inputs under 24 GB of VRAM, use vLLM with a maximum prompt length of 32K.
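The ChatML input format listed above can be sketched with a hand-rolled helper. This is an illustration only (the function name and message contents are made up); in practice the exact template ships with the model's tokenizer and can be applied via `tokenizer.apply_chat_template`:

```python
def to_chatml(messages):
    """Render {role, content} messages in ChatML, ending with an open
    assistant turn so the model generates the reply from there."""
    rendered = ""
    for m in messages:
        rendered += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    return rendered + "<|im_start|>assistant\n"

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize DPO in one sentence."},
])
```

The trailing open `<|im_start|>assistant` turn is what signals the model to produce the assistant's reply rather than continue the user's text.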
LLM Name: Faro Yi 9B DPO
Repository 🤗: https://huggingface.co/wenbopan/Faro-Yi-9B-DPO
Merged Model: Yes
Model Size: 9B
Required VRAM: 17.7 GB
Updated: 2026-03-26
Maintainer: wenbopan
Model Type: llama
Model Files: 10.0 GB (1-of-2), 7.7 GB (2-of-2)
Supported Languages: en, zh
Model Architecture: LlamaForCausalLM
License: mit
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Padding Token: <unk>
Vocabulary Size: 64000
Torch Data Type: bfloat16
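Given the details above (32K context, bfloat16, ~17.7 GB of VRAM required), the card's vLLM tip might look like the following offline-inference sketch. It is a hedged example, not the maintainer's recipe: it requires a GPU with enough memory and a downloaded checkpoint, and the sampling values and prompt are illustrative:

```python
from vllm import LLM, SamplingParams

# Cap the context at the model's 32K limit so longer prompts still fit
# under roughly 24 GB of VRAM, per the card's performance tip.
llm = LLM(
    model="wenbopan/Faro-Yi-9B-DPO",
    max_model_len=32768,
    dtype="bfloat16",  # matches the card's torch data type
)

params = SamplingParams(temperature=0.7, max_tokens=256)  # illustrative values
outputs = llm.generate(
    ["<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"],
    params,
)
print(outputs[0].outputs[0].text)
```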

Best Alternatives to Faro Yi 9B DPO

Best Alternatives | Context / RAM | Downloads | Likes
Yi 9B 200K | 256K / 17.7 GB | 11452 | 77
SekhmetX 9B V0.1 Test | 256K / 21.2 GB | 71 | 2
SekmetX 9B V0.1 Test | 256K / 21.2 GB | 69 | 2
Austral Xgen 9B Winton | 256K / 21.3 GB | 10 | 2
...rce Xgen Small 9B Rebased V0.1 | 256K / 42.5 GB | 17 | 0
...rce Xgen Small 9B Rebased V0.1 | 256K / 42.5 GB | 14 | 0
Mike Hawk 9B | 256K / 21.3 GB | 3 | 3
Xgen Small 9B Instruct R | 256K / 42.5 GB | 125 | 7
Xgen Small 9B Base R | 256K / 42.5 GB | 12 | 2
BigYi 15.75B 200K | 256K / 30.3 GB | 14 | 0


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a