| LLM Name | Llama 3 Alpaca Tutor Quantized |
| Repository 🤗 | https://huggingface.co/AlpacaAAR/llama-3-alpaca-tutor-quantized |
| Model Size | 4.6B |
| Required VRAM | 6.1 GB |
| Updated | 2025-09-23 |
| Maintainer | AlpacaAAR |
| Model Files | |
| Model Architecture | AutoModelForCausalLM |
| License | apache-2.0 |
| Model Max Length | 8192 |
| Bias | none |
| Tokenizer Class | PreTrainedTokenizerFast |
| Padding Token | <|reserved_special_token_250|> |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | q_proj, up_proj, down_proj, gate_proj, k_proj, v_proj, o_proj |
| LoRA Alpha | 64 |
| LoRA Dropout | 0 |
| R Param | 64 |
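A minimal loading sketch based on the fields above, assuming the repository can be loaded directly with `transformers` via `AutoModelForCausalLM` and `PreTrainedTokenizerFast` (served through `AutoTokenizer`). The repo name suggests quantized weights, so depending on the quantization format, an extra library such as `bitsandbytes` may be required; the prompt text below is purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AlpacaAAR/llama-3-alpaca-tutor-quantized"

# Tokenizer; the card lists a reserved special token as the padding token.
# If the repo's tokenizer config already sets it, this assignment is a no-op.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
tokenizer.pad_token = "<|reserved_special_token_250|>"

# Causal LM; device_map="auto" (requires `accelerate`) places weights on
# available devices. The card estimates ~6.1 GB of VRAM.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Explain gradient descent in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```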
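The PEFT rows of the table translate directly into a `peft.LoraConfig`. This sketch reproduces the listed hyperparameters (r=64, alpha=64, dropout=0, bias "none", and the seven attention/MLP projection target modules); `task_type="CAUSAL_LM"` is an assumption consistent with the `AutoModelForCausalLM` architecture.

```python
from peft import LoraConfig

# LoRA configuration mirroring the card's PEFT fields.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.0,
    bias="none",
    target_modules=[
        "q_proj", "up_proj", "down_proj",
        "gate_proj", "k_proj", "v_proj", "o_proj",
    ],
    task_type="CAUSAL_LM",
)
```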