| LLM Name | DeepSeek R1 Distill Qwen 1.5B |
|---|---|
| Repository 🤗 | https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B |
| Model Size | 1.5B |
| Required VRAM | 3.5 GB |
| Updated | 2025-09-23 |
| Maintainer | deepseek-ai |
| Model Type | qwen2 |
| Model Files | |
| Model Architecture | Qwen2ForCausalLM |
| License | MIT |
| Context Length | 131072 |
| Model Max Length | 131072 |
| Transformers Version | 4.44.0 |
| Tokenizer Class | LlamaTokenizerFast |
| Beginning of Sentence Token | <|begin▁of▁sentence|> |
| End of Sentence Token | <|end▁of▁sentence|> |
| Vocabulary Size | 151936 |
| Torch Data Type | bfloat16 |
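
The table above has everything needed to run the model with 🤗 Transformers. Below is a minimal load-and-generate sketch, assuming `transformers >= 4.44.0` as listed; the repo id and dtype come straight from the table, while the prompt and generation settings are purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the "Torch Data Type" row
    device_map="auto",           # ~3.5 GB VRAM per the table above
)

# R1 distills are chat models; apply_chat_template inserts the
# <|begin▁of▁sentence|> / <|end▁of▁sentence|> special tokens for us.
messages = [{"role": "user", "content": "What is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```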
| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| ...Seek R1 Distill Qwen 1.5B GGUF | 120 | 22840 | 0 GB |
| ...Seek R1 Distill Qwen 1.5B GGUF | 6 | 1279 | 1 GB |
| OpenThink | 0 | 880 | 1 GB |
| ...ill Qwen 1.5B Unsloth Bnb 4bit | 17 | 5188 | 1 GB |
| ... R1 Distill Qwen 1.5B Bnb 4bit | 11 | 2607 | 1 GB |
| Dhanishtha | 4 | 58 | 3 GB |
| Atlas Flash 1.5B Preview | 2 | 106 | 3 GB |
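
Several of the variants above fit in roughly 1 GB because they are 4-bit quantizations: the GGUF files target llama.cpp, while the Bnb 4bit repos target bitsandbytes. A rough sketch of reaching a similar footprint by quantizing the base repo on the fly with bitsandbytes follows; the NF4 settings here are common defaults, not values taken from those specific repos:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

# NF4 4-bit weights with bfloat16 compute: a common bitsandbytes setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Should land near the ~1 GB figures shown in the table above.
print(f"{model.get_memory_footprint() / 1e9:.2f} GB")
```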
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ReaderLM V2 | 500K / 3.1 GB | 17305 | 701 |
| Reader Lm 1.5B | 250K / 3.1 GB | 1411 | 607 |
| AceInstruct 1.5B | 128K / 3.5 GB | 76880 | 20 |
| ...n Research Reasoning Qwen 1.5B | 128K / 7.1 GB | 5716 | 221 |
| DeepScaleR 1.5B Preview | 128K / 7.1 GB | 12387 | 574 |
| Qwen2.5 1.5B | 128K / 3.1 GB | 261977 | 127 |
| OpenReasoning Nemotron 1.5B | 128K / 3.1 GB | 5188 | 47 |
| Palmyra Mini | 128K / 3.5 GB | 212 | 30 |
| ...1 Distill Qwen 1.5B GSPO Basic | 128K / 3.5 GB | 1806 | 0 |
| Qwen2 1.5B | 128K / 3.1 GB | 83734 | 97 |