| LLM Name | Llama3 70b Ft |
| Repository 🤗 | https://huggingface.co/Kota123/llama3_70b_ft |
| Base Model(s) | |
| Model Size | 70b |
| Required VRAM | 39.9 GB |
| Updated | 2025-09-17 |
| Maintainer | Kota123 |
| Model Files | |
| Quantization Type | 4bit |
| Model Architecture | Adapter |
| Model Max Length | 8192 |
| Is Biased | none |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <|reserved_special_token_250|> |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | o_proj, q_proj, up_proj, v_proj, gate_proj, down_proj, k_proj |
| LoRA Alpha | 16 |
| LoRA Dropout | 0 |
| R Param | 16 |
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...LM 2Bit QLoRA Function Calling | 0K / 0.1 GB | 5 | 8 |
| Llama 2 70B Chat 4bit Japanese | 0K / 3.3 GB | 5 | 5 |
| ...ma 2 70B Chat 4bit Japanese V1 | 0K / 3.3 GB | 5 | 4 |
| Llama 3.1 70B Abliterated Lora | 0K / 1.7 GB | 26757 | 3 |
| ...aiga Llama3 70b Sft M1 D5 Lora | 0K / 5.9 GB | 0 | 1 |
| Llama 3 70B Instruct Spider | 0K / 141.9 GB | 6 | 0 |
| Airoboros 70B 3.3 Peft | 0K / 0.4 GB | 0 | 2 |
| Llama3v1 | 0K / 0.1 GB | 6 | 0 |
| Xwin LM 70B V0.1 LORA | 0K / 1.7 GB | 4 | 1 |
| Euryale 1.3 L2 70B LORA | 0K / 1.7 GB | 4 | 1 |