| LLM Name | Shed Coder 0.2 |
| Repository 🤗 | https://huggingface.co/VortexHunter23/Shed-Coder-0.2 |
| Base Model(s) | |
| Model Size | 14b |
| Required VRAM | 9.9 GB |
| Updated | 2025-09-19 |
| Maintainer | VortexHunter23 |
| Model Type | qwen2 |
| Model Files | |
| Supported Languages | en |
| Quantization Type | 4bit |
| Generates Code | Yes |
| Model Architecture | Qwen2ForCausalLM |
| License | apache-2.0 |
| Context Length | 131072 |
| Model Max Length | 131072 |
| Transformers Version | 4.51.3 |
| Tokenizer Class | LlamaTokenizerFast |
| Padding Token | <|vision_pad|> |
| Vocabulary Size | 152064 |
| Torch Data Type | bfloat16 |
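
A minimal loading sketch with 🤗 Transformers, assuming the repository ships the 4-bit checkpoint described above; the model ID comes from the table, while the prompt, generation settings, and `device_map` choice are illustrative only.

```python
# Minimal sketch: loading Shed-Coder-0.2 (assumes 4-bit weights as listed above
# and that the `accelerate` package is installed for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VortexHunter23/Shed-Coder-0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # picks up bfloat16 per the model card
    device_map="auto",    # place layers on the available GPU(s)
)

# Hypothetical usage example
prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```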
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| CogitoZ14 | 32K / 29.7 GB | 25 | 1 |
| UIGEN T1.1 Qwen 14B | 32K / 29.7 GB | 1130 | 32 |
| ....5 Coder 14B Instruct Bnb 4bit | 32K / 9.9 GB | 7599 | 5 |
| Qwen 14B Codefourchan | 32K / 29.7 GB | 1 | 1 |
| UIGEN T1.1 Qwen 14B | 32K / 29.7 GB | 50 | 38 |
| Qwen2.5 Coder 14B Bnb 4bit | 32K / 9.9 GB | 1852 | 6 |
| ...wen2.5 Coder 14B Instruct 4bit | 32K / 8.3 GB | 1409 | 4 |
| UIGEN T1.2 Tailwind 14B | 32K / 29.7 GB | 0 | 1 |
| Korean QWEN Coder2.5 14B | 32K / 29.7 GB | 15 | 3 |
| ZYH LLM Qwen2.5 14B V4 | 986K / 29.7 GB | 41 | 8 |
Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!