| LLM Name | Finetuned Llava Lora |
| Repository 🤗 | https://huggingface.co/yetesam/finetuned_llava_lora |
| Required VRAM | 0.1 GB |
| Updated | 2025-09-23 |
| Maintainer | yetesam |
| Model Files | |
| Model Architecture | AutoModelForCausalLM |
| License | mit |
| Is Biased | none |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | k_proj|down_proj|v_proj|q_proj|gate_proj|up_proj|o_proj |
| LoRA Alpha | 256 |
| LoRA Dropout | 0.05 |
| R Param | 128 |
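
The LoRA parameters listed above (r=128, alpha=256, dropout=0.05, and the attention/MLP projection target modules) can be expressed as a `peft` `LoraConfig`, and the adapter can be attached to a base checkpoint with `PeftModel`. The sketch below is illustrative only: the card does not state the base model, so the `llava-hf/llava-1.5-7b-hf` identifier is an assumption, and a LLaVA-style checkpoint may require a multimodal model class rather than the `AutoModelForCausalLM` listed here.

```python
# Minimal sketch of reproducing this card's LoRA setup with the peft library.
# Assumptions (not confirmed by the card): the base checkpoint name and the
# use of AutoModelForCausalLM; verify both against the adapter's config before use.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel

# LoRA hyperparameters as listed on this card
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["k_proj", "down_proj", "v_proj", "q_proj",
                    "gate_proj", "up_proj", "o_proj"],
)

# Load an assumed base model and attach the published adapter weights
base = AutoModelForCausalLM.from_pretrained("llava-hf/llava-1.5-7b-hf")  # assumed base
model = PeftModel.from_pretrained(base, "yetesam/finetuned_llava_lora")
```
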
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Asdf | 0K / 0.1 GB | 14 | 0 |
| MS32 3 | 0K / 1.5 GB | 6 | 0 |
| MS32 2 | 0K / 0.7 GB | 5 | 0 |
| Tinyllama Cpt | 0K / 0.5 GB | 6 | 0 |
| Fine Tune Sentimental Llama | 0K / 0 GB | 5 | 0 |
| VLM2Vec LoRA | 0K / 0 GB | 31 | 11 |
| QuietStar Project | 0K / GB | 8 | 2 |
| Alphace Email | 0K / 0.1 GB | 6 | 0 |
| Qwen7B Haiguitang | 0K / 15.3 GB | 5 | 0 |
| Modelv3 | 0K / 13.5 GB | 5 | 0 |