| Property | Value |
|---|---|
| LLM Name | Qwen2.5 7B Instruct V1.2 |
| Repository 🤗 | https://huggingface.co/homeb82784/qwen2.5-7b-instruct-v1.2 |
| Model Size | 7b | 
| Required VRAM | 15.2 GB | 
| Updated | 2025-09-15 | 
| Maintainer | homeb82784 | 
| Model Type | qwen2 | 
| Instruction-Based | Yes | 
| Quantization Type | 4bit | 
| Model Architecture | Qwen2ForCausalLM | 
| Context Length | 32768 | 
| Model Max Length | 32768 | 
| Transformers Version | 4.46.2 | 
| Tokenizer Class | Qwen2Tokenizer | 
| Padding Token | <|PAD_TOKEN|> | 
| Vocabulary Size | 152064 | 
| Torch Data Type | bfloat16 | 
| Tokenizer Errors | replace |
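
Given the listed architecture (Qwen2ForCausalLM), tokenizer class, and torch dtype, a minimal loading sketch with the Hugging Face `transformers` library might look like the following. This is an illustration, not code from the repository: the prompt text is made up, and the model is loaded in bfloat16 (which matches the 15.2 GB VRAM figure) rather than via the listed 4-bit quantization.

```python
# Minimal loading sketch based on the properties above; not taken from the
# model card itself. Assumes transformers >= 4.46 and a GPU with ~15 GB of
# free VRAM for the bfloat16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "homeb82784/qwen2.5-7b-instruct-v1.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # Qwen2Tokenizer, 152,064-token vocab
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the listed torch dtype
    device_map="auto",
)

# The model is instruction-based, so prompts go through the chat template.
messages = [{"role": "user", "content": "Give a one-sentence summary of Qwen2.5."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```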
| Best Alternatives | Context / VRAM | Downloads | Likes |
|---|---|---|---|
| Qwen2.5 7B Instruct 1M 4bit | 986K / 4.3 GB | 1137 | 10 | 
| ...B Instruct 1M Unsloth Bnb 4bit | 986K / 7.5 GB | 77 | 3 | 
| Qwen2.5 7B Instruct 1M 8bit | 986K / 8.1 GB | 15 | 3 | 
| Qwen2.5 7B Instruct 1M 6bit | 986K / 6.2 GB | 10 | 2 | 
| Alisia 7B Instruct V1 | 128K / 15.4 GB | 114 | 2 | 
| ...5 7B Instruct SFT Meme LoRA V7 | 128K / 15.2 GB | 6 | 0 | 
| Gte Qwen2 7B Instruct 4bit DWQ | 128K / 4.3 GB | 34 | 3 | 
| Distill Qw Test | 32K / 15.2 GB | 5 | 0 | 
| Zirel 7B Math | 32K / 15.2 GB | 22 | 0 | 
| Kyro N1 7B | 32K / 15.2 GB | 12 | 6 | 
Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!