LLM Name | Qwen3 4B GGUF |
Repository 🤗 | https://huggingface.co/second-state/Qwen3-4B-GGUF |
Model Name | Qwen3-4B |
Model Creator | Qwen |
Base Model(s) | |
Model Size | 4B |
Required VRAM | 1.7 GB |
Updated | 2025-08-26 |
Maintainer | second-state |
Model Type | qwen3 |
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf, q2, q4_k, q5_k |
Model Architecture | Qwen3ForCausalLM |
License | other |
Context Length | 40960 |
Model Max Length | 40960 |
Transformers Version | 4.51.0 |
Vocabulary Size | 151936 |
Torch Data Type | bfloat16 |
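
Since this is a GGUF repository, a quick way to try it locally is to fetch one quant and load it with llama-cpp-python. A minimal sketch, assuming the file is named `Qwen3-4B-Q5_K_M.gguf` (the exact filename is an assumption; check the repository's file list for the q2 / q4_k / q5_k variant you want):

```python
# Minimal sketch: download one GGUF quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="second-state/Qwen3-4B-GGUF",
    filename="Qwen3-4B-Q5_K_M.gguf",  # assumed filename; verify on the repo page
)

llm = Llama(
    model_path=model_path,
    n_ctx=40960,      # model max length from the spec table above
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```

Lower-bit quants (q2) trade answer quality for a smaller download and VRAM footprint; q5_k stays closest to the original bfloat16 weights.
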
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---|
Qwen3 4B Tcomanr Merge V2.2 | 256K / 8 GB | 156 | 2 |
Qwen3 4B Tcomanr Merge V2 | 256K / 8 GB | 802 | 2 |
Qwen3 4B 128K GGUF | 128K / 1.1 GB | 2780 | 24 |
Qwen3 4B GGUF | 40K / 1.1 GB | 47859 | 71 |
Qwen3 Hermes 4B | 40K / 8.1 GB | 235 | 2 |
...wen3 4B Thinking 2507 MLX 8bit | 256K / 4.3 GB | 633786 | 6 |
...wen3 4B Thinking 2507 MLX 4bit | 256K / 2.3 GB | 637141 | 2 |
...Instruct 2507 Unsloth Bnb 4bit | 256K / 3.5 GB | 26158 | 3 |
Qwen3 4B Instruct 2507 4bit | 256K / 2.3 GB | 2046 | 4 |
Jan V1 4B 8bit | 256K / 4.3 GB | 661 | 3 |
Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!