Supertron2 1.7B is an open-source language model by Surpem: a 1.7B-parameter, instruction-tuned LLM that requires about 3.4 GB of VRAM, supports a 40K-token context window, and is released under the Apache-2.0 license.
| LLM Name | Supertron2 1.7B |
|---|---|
| Repository 🤗 | https://huggingface.co/Surpem/Supertron2-1.7B |
| Base Model(s) | |
| Model Size | 1.7B |
| Required VRAM | 3.4 GB |
| Updated | 2026-05-15 |
| Maintainer | Surpem |
| Model Type | qwen3 |
| Instruction-Based | Yes |
| Model Files | |
| Supported Languages | en |
| Model Architecture | Qwen3ForCausalLM |
| License | apache-2.0 |
| Context Length | 40960 tokens |
| Model Max Length | 40960 tokens |
| Transformers Version | 5.8.0.dev0 |
| Tokenizer Class | Qwen2Tokenizer |
| Padding Token | <\|endoftext\|> |
| Vocabulary Size | 151936 |
| Tokenizer Decode Errors | replace |
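Since the checkpoint uses the standard Qwen3ForCausalLM architecture, it should load through the usual Hugging Face transformers API. The sketch below is a minimal, unverified example: the repo ID comes from the card above, while the dtype, device placement, prompt, and generation settings are illustrative assumptions, not values published by the maintainer.

```python
# Minimal sketch: loading Supertron2 1.7B with transformers (assumes
# `torch`, `transformers`, and `accelerate` are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Surpem/Supertron2-1.7B"  # repo ID from the card above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype=torch.bfloat16,  # `torch_dtype` on older transformers; bf16 matches the ~3.4 GB VRAM figure
    device_map="auto",     # requires accelerate; places weights on GPU if available
)

# The model is instruction-based, so format the prompt with the chat template.
messages = [{"role": "user", "content": "Summarize the Apache-2.0 license in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Loading in bfloat16 keeps the weights near the 3.4 GB figure listed above; prompt plus generated tokens should stay within the 40960-token model max length.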
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| DictaLM 3.0 1.7B Instruct | 60K / 3.4 GB | 1105 | 1 |
| ...3 1.7B Instruction Noreasoning | 40K / 3.4 GB | 43386 | 12 |
| Luth 1.7B Instruct | 40K / 3.4 GB | 70 | 14 |
| ...ruct Fast CODER Reasoning 2.4B | 40K / 9.7 GB | 9 | 1 |
| ... Instruct CODER Reasoning 2.7B | 40K / 11 GB | 8 | 0 |
| Qwen3 1.7B ShiningValiant3 | 40K / 6.9 GB | 11 | 5 |
| ...wen3 1.7B Tamil 16bit Instruct | 40K / 3.4 GB | 7 | 0 |