| LLM Name | Qwen1.5 1.8B Chat GGUF |
| Repository 🤗 | https://huggingface.co/second-state/Qwen1.5-1.8B-Chat-GGUF |
| Model Name | Qwen1.5 1.8B Chat |
| Model Creator | Qwen |
| Base Model(s) | |
| Model Size | 1.8b |
| Required VRAM | 0.9 GB |
| Updated | 2025-09-29 |
| Maintainer | second-state |
| Model Type | qwen2 |
| Model Files | |
| Supported Languages | en |
| GGUF Quantization | Yes |
| Quantization Type | gguf, q2, q4_k, q5_k |
| Model Architecture | Qwen2ForCausalLM |
| License | other |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.37.0 |
| Vocabulary Size | 151936 |
| Torch Data Type | bfloat16 |
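
A minimal sketch of running one of the GGUF quantizations locally with llama-cpp-python. The `filename` glob is an assumption — check the repository's file listing for the exact `.gguf` name you want (q2, q4_k, or q5_k variants are available per the table above).

```python
# Minimal sketch: load a Q4_K quantization of Qwen1.5 1.8B Chat GGUF
# via llama-cpp-python. The filename glob is assumed -- verify it against
# the files actually published in the repository.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="second-state/Qwen1.5-1.8B-Chat-GGUF",
    filename="*q4_k_m.gguf",  # glob pattern; assumed to match one file in the repo
    n_ctx=32768,              # matches the model's 32K context length
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```
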
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen1.5 1.8B Chat 4bit | 32K / 1.5 GB | 15 | 2 |
| Nano Q2 | 32K / 3.5 GB | 6 | 0 |
| Palmyra Mini Thinking A | 128K / 3.5 GB | 361 | 26 |
| Palmyra Mini Thinking B | 128K / 3.5 GB | 9 | 27 |
| Dhanishtha | 128K / 3.5 GB | 82 | 2 |
| LaConfiance PRYMMAL ECE TW3 | 128K / 7.1 GB | 5 | 0 |
| Qwen1.5 1.8B Chat | 32K / 3.7 GB | 44421 | 61 |
| Qwen1.5 1.8B | 32K / 3.7 GB | 25396 | 53 |
| Colegium Ai 1.8B | 32K / 3.7 GB | 15 | 0 |
| Qwen Fine Tuned V0 | 32K / 3.7 GB | 53 | 0 |
Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!