Zephyr Tiny DPO Qlora is an open-source language model by dball. Features: 1.1B parameters, 0.1 GB required VRAM, Apache-2.0 license, HF Score 37.4, LLM Explorer Score 0.16; benchmarks: ARC 36.6, HellaSwag 61.7, MMLU 25.8, TruthfulQA 36.4, WinoGrande 61.6, GSM8K 2.1.
| LLM Name | Zephyr Tiny DPO Qlora |
|---|---|
| Repository 🤗 | https://huggingface.co/dball/zephyr-tiny-dpo-qlora |
| Base Model(s) | |
| Model Size | 1.1b |
| Required VRAM | 0.1 GB |
| Updated | 2026-04-09 |
| Maintainer | dball |
| Model Files | |
| Model Architecture | Adapter |
| License | apache-2.0 |
| Model Max Length | 2048 |
| Is Biased | none |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | </s> |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | up_proj, v_proj, q_proj, k_proj, gate_proj, o_proj, down_proj |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.05 |
| R Param | 16 |
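The PEFT fields above map directly onto a `peft` `LoraConfig`, and the adapter can be attached to its base model with `PeftModel.from_pretrained`. Below is a minimal sketch, not an official loading recipe; the base model name (`TinyLlama/TinyLlama-1.1B-Chat-v1.0`) is an assumption inferred from the 1.1B size and `LlamaTokenizer` class, since the Base Model(s) field above is empty.

```python
# Minimal sketch: reproduce the listed LoRA config and load the adapter with PEFT.
# Assumption: the base model is TinyLlama/TinyLlama-1.1B-Chat-v1.0 (not stated in the card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel

BASE_MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed base model
ADAPTER = "dball/zephyr-tiny-dpo-qlora"            # repository from the table above

# The adapter's hyperparameters from the table, expressed as a LoraConfig
# (useful if you want to reproduce the fine-tune rather than just load the adapter).
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["up_proj", "v_proj", "q_proj", "k_proj",
                    "gate_proj", "o_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# Load the tokenizer (LlamaTokenizer, max length 2048, padding token </s> per the table);
# fall back to BASE_MODEL if the adapter repo does not ship tokenizer files.
tokenizer = AutoTokenizer.from_pretrained(ADAPTER)

# Load the base model and attach the DPO-trained LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, ADAPTER)
model.eval()

prompt = "Explain what DPO fine-tuning is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```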
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Testing | 0K / 0 GB | 5 | 0 |
| Openhermes Tinyllama Sft Qlora | 0K / 0 GB | 6 | 0 |
| Zephyr Tinyllama Sft Qlora | 0K / 0 GB | 5 | 0 |
| ...hyr Tiny Sft Qlora Quantized 2 | 0K / 0 GB | 77 | 0 |
| DPO Mcqa V1.02 | 0K / 0.4 GB | 5 | 0 |
| DPO Mcqa QuantizedBitsAndBytes | 0K / 0.8 GB | 5 | 0 |
| MCQDPO1 | 0K / 0 GB | 5 | 0 |
| Mcqa | 0K / 0 GB | 5 | 0 |
| ProjectElrondMCQv1 | 0K / 0 GB | 11 | 0 |
| Tinyllama Cs | 0K / 0 GB | 5 | 0 |