Qwen3 4B Self Thinking 16bit by uberkie


Qwen3 4B Self Thinking 16bit is an open-source language model by uberkie. Features: 4B-parameter LLM, required VRAM: 8.1 GB, context: 256K, license: apache-2.0, quantized, LLM Explorer Score: 0.29.
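The listed figures are self-consistent and easy to sanity-check: a 4B-parameter model at 16-bit precision needs roughly 2 bytes per parameter for the weights alone, and the "256K" context is the usual power-of-two token count. A quick back-of-the-envelope check (the 4e9 parameter count is the nominal size class, not an exact number):

```python
# Rough weight-memory estimate for a 4B-parameter model in float16 (2 bytes/param).
params = 4e9
bytes_per_param = 2  # float16
weights_gb = params * bytes_per_param / 1024**3
print(f"{weights_gb:.1f} GiB")  # ~7.5 GiB of raw weights; the listed 8.1 GB adds overhead

# The advertised 256K context is the standard power-of-two count:
context = 256 * 1024
print(context)  # 262144, matching the Context Length field below
```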

Tags: 16bit, 4-bit, Agent, Base model:quantized:qwen/qwen..., Base model:qwen/qwen3-4b-think..., Bitsandbytes, Conversational, Dataset:uberkie/claude-4.5-opu..., Dataset:uberkie/reasoning clas..., En, Endpoints compatible, Quantized, Qwen3, Region:us, Safetensors, Self-think, Sharded, Tensorflow, Unsloth

Qwen3 4B Self Thinking 16bit Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Qwen3 4B Self Thinking 16bit Parameters and Internals

LLM Name: Qwen3 4B Self Thinking 16bit
Repository: https://huggingface.co/uberkie/qwen3-4B-self-thinking-16bit
Base Model(s): Qwen/Qwen3-4B-Thinking-2507 (Qwen3 4B Thinking 2507)
Model Size: 4b
Required VRAM: 8.1 GB
Updated: 2026-05-01
Maintainer: uberkie
Model Type: qwen3
Model Files: 5.0 GB (part 1 of 2), 3.1 GB (part 2 of 2)
Supported Languages: en
Quantization Type: 16bit
Model Architecture: Qwen3ForCausalLM
License: apache-2.0
Context Length: 262144
Model Max Length: 262144
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|PAD_TOKEN|>
Vocabulary Size: 151936
Torch Data Type: float16
Errors: replace
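Given the repository name, the Qwen3ForCausalLM architecture, and the float16 dtype listed above, the model should load with the standard Hugging Face transformers API. A minimal sketch (not verified against this repo; requires transformers, torch, and accelerate, plus roughly the listed 8.1 GB of VRAM):

```python
# Minimal loading sketch for uberkie/qwen3-4B-self-thinking-16bit.
# Imports are deferred so merely defining this function needs no GPU or download.
def load_model(repo_id: str = "uberkie/qwen3-4B-self-thinking-16bit"):
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        torch_dtype=torch.float16,  # matches the "Torch Data Type: float16" field
        device_map="auto",          # needs accelerate; places shards automatically
    )
    return tokenizer, model
```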

Best Alternatives to Qwen3 4B Self Thinking 16bit

Best Alternatives | Context / RAM | Downloads / Likes
...Instruct 2507 Unsloth Bnb 4bit | 256K / 3.5 GB | 9555414
Agent 4b | 256K / 8.1 GB | 3620
...wen3 4B Thinking 2507 MLX 4bit | 256K / 2.3 GB | 6926611
...wen3 4B Thinking 2507 MLX 8bit | 256K / 4.3 GB | 664767
Qwen3 4B ARC MLX 4bit | 256K / 2 GB | 2802
...Instruct Haiku 4.5 Merged FP16 | 256K / 8.1 GB | 300
Qwen3 4B Instruct 2507 4bit | 256K / 2.3 GB | 157619
Qwen3 4B Finetune Merged 4bit | 256K / 3.5 GB | 140
Jan V3 4B Base Instruct 4bit | 256K / 2.3 GB | 5412
...wen3 4B Instruct 2507 Bnb 4bit | 256K / 2.6 GB | 75645
Note: a green score (e.g. "73.2") means that the model is better than uberkie/qwen3-4B-self-thinking-16bit.

Rank the Qwen3 4B Self Thinking 16bit Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.