| Property | Value |
|---|---|
| LLM Name | ORPO PM01 |
| Repository 🤗 | https://huggingface.co/Truepeak/ORPO-PM01 |
| Model Size | 8B |
| Required VRAM | 16.1 GB |
| Updated | 2025-09-23 |
| Maintainer | Truepeak |
| Model Files | |
| Model Architecture | AutoModelForCausalLM |
| Is Biased | none |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <\|im_end\|> |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | down_proj, q_proj, v_proj, o_proj, gate_proj, k_proj, up_proj |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.05 |
| R Param | 16 |
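
The PEFT fields above describe a LoRA adapter (r = 16, alpha = 32, dropout = 0.05 over the listed attention and MLP projections) rather than a full standalone checkpoint. Below is a minimal loading sketch, assuming the `transformers`, `peft`, and `accelerate` packages and that the adapter's config on the Hub records its base model (the card does not name one); the prompt text and `float16` dtype are illustrative choices, not part of the card.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "Truepeak/ORPO-PM01"

# Resolve the base model from the adapter's config and apply the LoRA
# weights (r=16, lora_alpha=32, lora_dropout=0.05, per the card).
model = AutoPeftModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # half precision matches the ~16.1 GB VRAM figure
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(repo)
tokenizer.pad_token = "<|im_end|>"  # padding token listed in the card

# Illustrative generation call; the prompt is a placeholder.
prompt = "Explain ORPO fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If the `peft` dependency is unwanted at deployment time, `model.merge_and_unload()` folds the adapter weights into the base model so it can be saved and served as a plain causal LM.
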
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| Llama 3.1 8B Instruct Uz | 128K / 16.1 GB | 1000 | 15 |
| Trillama 8B | 8K / 16.1 GB | 7 | 3 | 
| AutogenJune2000 | 8K / 33.8 GB | 16 | 0 | 
| Llama3 8B | 8K / 16.1 GB | 6 | 0 | 
| ...Datasets Layer 1 Decoder Fixed | 0K / 0.7 GB | 672 | 0 | 
| Medllama3 V20 | 0K / 16.1 GB | 25328 | 78 | 
| ConfTuner Ministral | 0K / 16.1 GB | 25 | 3 | 
| ...llem V1 Smilebase Llama 3.1 8B | 0K / 1.1 GB | 13 | 0 | 
| CryptoAI | 0K / 16.1 GB | 91 | 1 | 
| Llama3 2 8B To 1B Test | 0K / 2.5 GB | 8 | 0 | 