LLM Name | DistillR1 1.5B Pruned 1.1B
---|---
Repository 🤗 | https://huggingface.co/realYinkaIyiola/DistillR1-1.5B-Pruned-1.1B
Model Size | 1.5B
Required VRAM | 2.6 GB
Updated | 2025-09-20
Maintainer | realYinkaIyiola
Model Type | qwen2
Model Architecture | Qwen2ForCausalLM
License | MIT
Context Length | 131072
Model Max Length | 131072
Transformers Version | 4.44.0
Tokenizer Class | LlamaTokenizerFast
Beginning of Sentence Token | `<\|begin▁of▁sentence\|>`
End of Sentence Token | `<\|end▁of▁sentence\|>`
Vocabulary Size | 151936
Torch Data Type | bfloat16
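A minimal loading sketch, not taken from the model card itself: assuming the repository works with the standard Hugging Face `transformers` causal-LM API (as the `Qwen2ForCausalLM` architecture and `bfloat16` dtype above suggest), it can be loaded like any other causal language model. The prompt and generation settings are illustrative.

```python
# Minimal sketch (assumption): load via the standard transformers API,
# using the repo id and dtype listed in the card above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "realYinkaIyiola/DistillR1-1.5B-Pruned-1.1B"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # LlamaTokenizerFast per the card
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches "Torch Data Type"; card lists ~2.6 GB VRAM
)
model.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "Explain model pruning in one sentence."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```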
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
ReaderLM V2 | 500K / 3.1 GB | 17082 | 701 |
Reader Lm 1.5B | 250K / 3.1 GB | 1420 | 607 |
DeepSeek R1 Distill Qwen 1.5B | 128K / 3.5 GB | 604974 | 1340 |
...n Research Reasoning Qwen 1.5B | 128K / 7.1 GB | 882058 | 216 |
DeepScaleR 1.5B Preview | 128K / 7.1 GB | 28256 | 573 |
Palmyra Mini | 128K / 3.5 GB | 158 | 29 |
AceInstruct 1.5B | 128K / 3.5 GB | 52902 | 20 |
Qwen2.5 1.5B | 128K / 3.1 GB | 286777 | 126 |
OpenReasoning Nemotron 1.5B | 128K / 3.1 GB | 4521 | 47 |
...1 Distill Qwen 1.5B GSPO Basic | 128K / 3.5 GB | 1688 | 0 |