LLM Name | Qwen3 4B GGUF |
Repository 🤗 | https://huggingface.co/second-state/Qwen3-4B-GGUF |
Model Name | Qwen3-4B |
Model Creator | Qwen |
Base Model(s) | |
Model Size | 4B |
Required VRAM | 1.7 GB |
Updated | 2025-09-23 |
Maintainer | second-state |
Model Type | qwen3 |
Model Files | |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf, q2, q4_k, q5_k |
Model Architecture | Qwen3ForCausalLM |
License | other |
Context Length | 40960 |
Model Max Length | 40960 |
Transformers Version | 4.51.0 |
Vocabulary Size | 151936 |
Torch Data Type | bfloat16 |
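The "Model Files" field above is empty, so the exact GGUF filenames are not listed here. A minimal sketch for discovering them with `huggingface_hub` (assuming that library is installed; `list_repo_files` is its standard repo-listing call):

```python
from huggingface_hub import list_repo_files  # pip install huggingface_hub

# List only the GGUF files in the repository named above; the repo ships
# several quantization levels (q2, q4_k, q5_k per the metadata), and each
# filename encodes its quantization.
gguf_files = [
    f for f in list_repo_files("second-state/Qwen3-4B-GGUF")
    if f.endswith(".gguf")
]
for name in gguf_files:
    print(name)
```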
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
...wen3 4B Toolcalling Gguf Codex | 256K / 4.3 GB | 594 | 13 |
...wen3 4B Thinking 2507 Hermes 3 | 256K / 8.1 GB | 541 | 1 |
Qwen3 4B Tcomanr Merge V2.2 | 256K / 8 GB | 671 | 2 |
...B Toolcall Gguf Llamacpp Codex | 256K / 4.3 GB | 82 | 2 |
Qwen3 4B Tcomanr Merge V2 | 256K / 8 GB | 82 | 2 |
Qwen3 4B 128K GGUF | 128K / 1.1 GB | 3649 | 24 |
Qwen3 4B GGUF | 40K / 1.1 GB | 38567 | 78 |
ReasonableQwen3 4B | 40K / 2.5 GB | 7163 | 2 |
Qwen3 Hermes 4B | 40K / 8.1 GB | 142 | 2 |
...wen3 4B Thinking 2507 MLX 8bit | 256K / 4.3 GB | 628150 | 7 |
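Because this is a GGUF quantization, it runs with any llama.cpp-based stack. A minimal loading sketch with `llama-cpp-python`, assuming that package is installed and using a hypothetical Q4_K filename (substitute a real one from the repo's file list); `n_ctx` may be set anywhere up to the model's 40960-token context length, with smaller values cutting RAM use:

```python
from huggingface_hub import hf_hub_download  # pip install huggingface_hub
from llama_cpp import Llama                  # pip install llama-cpp-python

# Fetch one quantized file from the repo above.
# "Qwen3-4B-Q4_K_M.gguf" is a hypothetical name -- check the repo for
# the actual filenames before running this.
model_path = hf_hub_download(
    repo_id="second-state/Qwen3-4B-GGUF",
    filename="Qwen3-4B-Q4_K_M.gguf",
)

# Context window capped at the model's 40960-token maximum; 8192 here
# keeps the KV cache small on modest hardware.
llm = Llama(model_path=model_path, n_ctx=8192)

out = llm("Q: Name the largest planet in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```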