LLM Name | Qwen2.5 1.5B Lightr1 3 EN 4096 1p0 0p0 1p0 Sft |
Repository 🤗 | https://huggingface.co/mveroe/Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft |
Base Model(s) | |
Model Size | 1.5B |
Required VRAM | 3.1 GB |
Updated | 2025-08-26 |
Maintainer | mveroe |
Model Type | qwen2 |
Model Files | |
Model Architecture | Qwen2ForCausalLM |
License | apache-2.0 |
Context Length | 131072 |
Model Max Length | 131072 |
Transformers Version | 4.55.0 |
Tokenizer Class | Qwen2Tokenizer |
Padding Token | <|im_end|> |
Vocabulary Size | 151667 |
Torch Data Type | bfloat16 |
Errors (tokenizer decode) | replace |
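
For context, the 3.1 GB VRAM figure is roughly the weights alone: ~1.5B parameters × 2 bytes per bfloat16 value ≈ 3.1 GB, before activations or KV cache. Below is a minimal loading sketch built from the card's fields (Qwen2ForCausalLM, Qwen2Tokenizer, bfloat16); it assumes a recent transformers (≥ 4.37, which added native Qwen2 support) plus accelerate are installed. The prompt and generation settings are illustrative, not taken from the card.

```python
# Minimal loading sketch based on this card's fields. Assumptions: public
# repo, transformers >= 4.37 (native Qwen2 support), accelerate installed
# for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mveroe/Qwen2.5-1.5B_lightr1_3_EN_4096_1p0_0p0_1p0_sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # places the ~3.1 GB of weights on available devices
)

# Qwen2-family checkpoints ship a chat template; apply it before generating.
messages = [{"role": "user", "content": "Briefly explain supervised fine-tuning."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
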
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
ReaderLM V2 | 500K / 3.1 GB | 24749 | 687 |
Reader Lm 1.5B | 250K / 3.1 GB | 1402 | 604 |
DeepSeek R1 Distill Qwen 1.5B | 128K / 3.5 GB | 677659 | 1314 |
...n Research Reasoning Qwen 1.5B | 128K / 7.1 GB | 11022 | 212 |
AceInstruct 1.5B | 128K / 3.5 GB | 57902 | 20 |
OpenReasoning Nemotron 1.5B | 128K / 3.1 GB | 3213 | 40 |
DeepScaleR 1.5B Preview | 128K / 7.1 GB | 16936 | 571 |
Qwen2.5 1.5B | 128K / 3.1 GB | 390650 | 114 |
Qwen2 1.5B | 128K / 3.1 GB | 83194 | 97 |
OpenMath Nemotron 1.5B | 128K / 3.1 GB | 5595 | 24 |