LLM Name | Qwen3 0.6B GGUF |
Repository 🤗 | https://huggingface.co/unsloth/Qwen3-0.6B-GGUF |
Base Model(s) | |
Model Size | 0.6B |
Required VRAM | 0.2 GB |
Updated | 2025-07-30 |
Maintainer | unsloth |
Model Type | qwen3 |
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf, q2, q4_k, q5_k |
Model Architecture | Qwen3ForCausalLM |
License | apache-2.0 |
Context Length | 40960 |
Model Max Length | 40960 |
Transformers Version | 4.51.3 |
Vocabulary Size | 151936 |
Torch Data Type | bfloat16 |
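The repository ships several GGUF quantizations of the same weights (the q2/q4_k/q5_k variants listed above). A minimal sketch for running one of them locally with `llama-cpp-python` follows; the quant filename pattern is an assumption, so verify it against the repository's file listing before use.

```python
# Minimal sketch: fetch a quant from unsloth/Qwen3-0.6B-GGUF and run it with
# llama-cpp-python. The filename glob below is an assumption -- check the
# repo's file list for the exact quant (e.g. a Q4_K_M build) you want.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/Qwen3-0.6B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; glob is matched against repo files
    n_ctx=40960,              # the model's listed context length
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF is in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

Since the quantized file is only a few hundred MB (see the Required VRAM row above), CPU-only inference is practical for this model.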
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Qwen3 0.6B GGUF | 40K / 0.3 GB | 1117 | 8 |
Qwen3 0.6B Unsloth Bnb 4bit | 40K / 0.6 GB | 46619 | 13 |
... Model Era Lora 128 Qwen3 0.6B | 40K / 1.2 GB | 1424 | 0 |
Qwen3 0.6B 4bit | 40K / 0.3 GB | 4976 | 5 |
Qwen3 0.6B 8bit | 40K / 0.6 GB | 1034 | 3 |
Qwen3 0.6B MLX 6bit | 40K / 0.5 GB | 109 | 2 |
Qwen3 0.6B GRPO | 40K / 1.2 GB | 7 | 0 |
Qwe3 0.6B Acot | 40K / 1.2 GB | 18 | 0 |
Qwen3 0.6B Acot | 40K / 1.2 GB | 11 | 0 |
Fizik 0.6B Preview | 40K / 1.2 GB | 9 | 0 |
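In the table above, "Context / RAM" pairs the supported context window (40K tokens) with the approximate on-disk size of each variant; actual memory use adds KV-cache overhead on top of the file size. To compare the available quant sizes before downloading, a small sketch using the Hub API (via `huggingface_hub`, which reports per-file sizes when `files_metadata=True`) can list them:

```python
# Sketch: list the GGUF files in the repo with their sizes, to pick a quant
# that fits the RAM budget shown in the comparison table.
from huggingface_hub import HfApi

info = HfApi().model_info("unsloth/Qwen3-0.6B-GGUF", files_metadata=True)
for f in sorted(info.siblings, key=lambda s: s.size or 0):
    if f.rfilename.endswith(".gguf"):
        print(f"{f.rfilename}: {(f.size or 0) / 1e9:.2f} GB")
```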