| LLM Name | Qwen3 1.7B |
| Repository 🤗 | https://huggingface.co/Qwen/Qwen3-1.7B |
| Base Model(s) | |
| Model Size | 1.7b |
| Required VRAM | 4 GB |
| Updated | 2025-09-23 |
| Maintainer | Qwen |
| Model Type | qwen3 |
| Model Files | |
| Model Architecture | Qwen3ForCausalLM |
| License | apache-2.0 |
| Context Length | 40960 |
| Model Max Length | 40960 |
| Transformers Version | 4.51.0 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <|endoftext|> |
| Vocabulary Size | 151936 |
| Torch Data Type | bfloat16 |
| Tokenizer Errors Handling | replace |
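
For reference, a minimal loading sketch using the standard 🤗 Transformers API, grounded in the spec table above (bfloat16 weights, Transformers ≥ 4.51.0, ~4 GB VRAM); the prompt text is illustrative only:

```python
# Minimal sketch: load Qwen/Qwen3-1.7B per the spec table above
# (Qwen3ForCausalLM architecture, bfloat16 weights, Transformers >= 4.51.0).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches "Torch Data Type" above; ~4 GB VRAM
    device_map="auto",
)

# Illustrative prompt; chat formatting follows the standard Transformers API.
messages = [{"role": "user", "content": "Give me a one-line summary of Qwen3."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```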
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Qwen3 1.7B GGUF | 44 | 46199 | 0 GB |
| Qwen3 1.7B Unsloth Bnb 4bit | 8 | 36259 | 1 GB |
| Qwen3 1.7B 4bit | 4 | 11683 | 1 GB |
| Qwen3 1.7B GPTQ Int8 | 4 | 3516 | 2 GB |
| Qwen3 1.7B Mixture Of Thought | 3 | 353 | 4 GB |
| Qwen3 1.7B 4bit DWQ 053125 | 2 | 1176 | 1 GB |
| ... Deepseek R1 0528 Distillation | 2 | 198 | 4 GB |
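
Several of the entries above are pre-quantized checkpoints. A comparable ~1 GB footprint can also be reached by quantizing the base repository on the fly; below is a hedged sketch using bitsandbytes via Transformers (the `BitsAndBytesConfig` settings are common defaults, not values taken from this page):

```python
# Sketch: on-the-fly 4-bit quantization of the base repo with bitsandbytes,
# an alternative to the pre-quantized checkpoints listed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # ~1 GB of weights, in line with the 4bit rows above
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the base model's dtype
    bnb_4bit_quant_type="nf4",              # common default, not taken from this page
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-1.7B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
```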
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Lucy 128K | 128K / 3.4 GB | 1561 | 106 |
| Polaris 1.7B Preview | 128K / 3.4 GB | 46 | 7 |
| Smoothie Qwen3 1.7B | 40K / 3.4 GB | 56095 | 1 |
| Qwen3 1.7B FP8 | 40K / 2.6 GB | 20070 | 30 |
| Jan V1 Edge | 40K / 3.4 GB | 421 | 32 |
| Qwen3 1.7B | 40K / 3.4 GB | 19720 | 5 |
| Qwen3 1.7B Wordle | 40K / 3.4 GB | 3030 | 0 |
| Lucy | 40K / 3.4 GB | 148 | 64 |
| Kimina Prover Distill 1.7B | 40K / 4.1 GB | 1605 | 8 |
| E3 1.7B | 40K / 3.4 GB | 3098 | 0 |