| LLM Name | Small Fut Final |
|---|---|
| Repository 🤗 | https://huggingface.co/Dongwookss/small_fut_final |
| Model Size | 7.2B params |
| Required VRAM | 14.4 GB |
| Updated | 2025-09-23 |
| Maintainer | Dongwookss |
| Supported Languages | ko |
| Model Architecture | AutoModelForCausalLM |
| License | apache-2.0 |
| Model Max Length | 32768 |
| Is Biased | none |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<unk>` |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | o_proj, v_proj, down_proj, gate_proj, q_proj, k_proj, up_proj |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.05 |
| R Param | 16 |
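
Since this listing describes a LoRA adapter rather than a standalone checkpoint, it is normally loaded on top of its base model. Below is a minimal sketch using `transformers` and `peft`; the base checkpoint is resolved from the adapter's own config rather than hardcoded, and the fp16 dtype and generation settings are illustrative assumptions, not taken from the listing.

```python
# Minimal sketch for loading the small_fut_final LoRA adapter with
# transformers + peft. Dtype and generation settings are assumptions.
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "Dongwookss/small_fut_final"

# Resolve the base checkpoint from the adapter's own adapter_config.json.
peft_config = PeftConfig.from_pretrained(adapter_id)

# ~14.4 GB in fp16 for a 7.2B model, matching the Required VRAM row above.
base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path,
    torch_dtype=torch.float16,
    device_map="auto",
)

# The Tokenizer Class row suggests a LlamaTokenizer ships with the adapter repo.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

# The listing reports Korean (ko) as the supported language.
inputs = tokenizer("안녕하세요, 자기소개를 해주세요.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```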
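
For reference, the adapter hyperparameters in the table above (R Param 16, LoRA Alpha 32, LoRA Dropout 0.05, and the seven target modules) correspond to a `peft` `LoraConfig` along the following lines; `task_type` is an assumption consistent with the `AutoModelForCausalLM` architecture row, not stated in the listing.

```python
# Reconstruction of the listed adapter hyperparameters as a peft LoraConfig,
# e.g. as a starting point for a similar fine-tune.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    r=16,               # R Param
    lora_alpha=32,      # LoRA Alpha
    lora_dropout=0.05,  # LoRA Dropout
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type=TaskType.CAUSAL_LM,  # assumption: causal language modeling
)
```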
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Zephyr Tuning V1 | 0K / 14.4 GB | 1747 | 0 |
| Spydaz Web AI 025 | 0K / 14.4 GB | 44 | 0 |
| Last Model Too Long Time | 0K / 14.4 GB | 8 | 1 |
| Big Fut Final | 0K / 14.4 GB | 5 | 1 |
| DummySFT | 0K / 28.9 GB | 7 | 0 |
| Ethicalsft | 0K / 28.9 GB | 6 | 0 |
| Tweet Sentiment Analyzer | 0K / 7.7 GB | 7 | 0 |
| MistralRAG | 0K / 14.4 GB | 7 | 1 |
| GAMA Code Generator V1.0 | 0K / 14.4 GB | 8 | 0 |
| G8 Mistral7b Qlora 1211 V09feb | 0K / 14.4 GB | 7 | 1 |