| Property | Value |
|---|---|
| LLM Name | Deita 32B Adapter |
| Repository 🤗 | https://huggingface.co/KnutJaegersberg/Deita-32b-adapter |
| Model Size | 32B |
| Required VRAM | 0.5 GB |
| Updated | 2025-09-23 |
| Maintainer | KnutJaegersberg |
| Model Files | |
| Model Architecture | Adapter |
| License | MIT |
| Is Biased | none |
| PEFT Type | LoRA |
| LoRA Model | Yes |
| PEFT Target Modules | q_proj, up_proj, o_proj, k_proj, down_proj, gate_proj, v_proj |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.05 |
| R Param | 16 |
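Because this is a LoRA adapter rather than a full checkpoint, it must be applied on top of its base model at load time. The sketch below shows how the hyperparameters in the table above map onto a PEFT `LoraConfig`, and how the adapter might be loaded with the `peft` library. The base-model ID used here is an assumption (the page does not name it); the actual `base_model_name_or_path` is recorded in the adapter's `adapter_config.json`.

```python
# Minimal sketch: loading the Deita 32B LoRA adapter with PEFT.
# Assumption: the base model ID below is a placeholder -- check
# base_model_name_or_path in the adapter's adapter_config.json.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel

# The adapter configuration as listed in the table above, expressed as a
# LoraConfig. (Shown for reference; PeftModel.from_pretrained reads the
# stored config from the repository itself.)
lora_config = LoraConfig(
    r=16,                 # R Param
    lora_alpha=32,        # LoRA Alpha
    lora_dropout=0.05,    # LoRA Dropout
    target_modules=["q_proj", "up_proj", "o_proj", "k_proj",
                    "down_proj", "gate_proj", "v_proj"],
)

base_id = "Qwen/Qwen1.5-32B"  # assumption: verify against adapter_config.json
adapter_id = "KnutJaegersberg/Deita-32b-adapter"

# Load the base model, then attach the LoRA adapter weights on top of it.
model = AutoModelForCausalLM.from_pretrained(
    base_id, device_map="auto", torch_dtype="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```

Note that the 0.5 GB of required VRAM listed above covers only the adapter weights; the 32B base model dominates the actual memory footprint.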
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Menghua Qwen3 32B Lora | 0K / 1.1 GB | 6 | 0 |
| Zhiming Qwen3 32B Lora | 0K / 1.1 GB | 5 | 0 |
| RHQE Aya Expanse 32b Layer 7 | 0K / 0 GB | 5 | 0 |
| SqlCoder Qwen2.5 8bit | 0K / 0.3 GB | 4 | 3 |