Qwen3 4B Finetune Merged 4bit by bibihhehe


Qwen3 4B Finetune Merged 4bit is an open-source language model by bibihhehe. Features: 4B parameters, 3.5 GB VRAM required, 256K context, apache-2.0 license; quantized, fine-tuned, and merged. LLM Explorer Score: 0.27.
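Since the checkpoint is already stored in 4-bit (bitsandbytes) form, it can be loaded through the standard Hugging Face flow. The sketch below is an assumption about that flow, not taken from the model card; it presumes `transformers` and `bitsandbytes` are installed and a CUDA GPU with roughly 3.5 GB of free VRAM is available.

```python
# Minimal loading sketch for the pre-quantized 4-bit checkpoint.
# Assumes transformers + bitsandbytes are installed and a CUDA GPU is
# available; the flow is standard from_pretrained usage, not something
# documented in the model card itself.

def load_4bit(repo: str = "bibihhehe/qwen3_4B_finetune_merged_4bit"):
    """Download and load the tokenizer and 4-bit model from the Hub."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo)
    # The weights are already quantized to 4-bit, so no extra
    # quantization_config argument should be needed here.
    model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
    return tokenizer, model
```

Calling `load_4bit()` fetches the ~3.5 GB safetensors shard; text generation then proceeds through `model.generate` as with any other causal LM.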

Tags: Merged Model, 4-bit, 4bit, Base model: bibihhehe/qwen3 4b ..., Base model: quantized: bibihhehe..., Bitsandbytes, Conversational, En, Endpoints compatible, Finetuned, Quantized, Qwen3, Region: us, Safetensors, Unsloth

Qwen3 4B Finetune Merged 4bit Benchmarks

Benchmark scores show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4"). No scores are currently listed for this model.

Qwen3 4B Finetune Merged 4bit Parameters and Internals

LLM Name: Qwen3 4B Finetune Merged 4bit
Repository 🤗: https://huggingface.co/bibihhehe/qwen3_4B_finetune_merged_4bit
Base Model(s): bibihhehe/qwen3_4B_finetune_merged_4bit
Merged Model: Yes
Model Size: 4B
Required VRAM: 3.5 GB
Updated: 2026-05-11
Maintainer: bibihhehe
Model Type: qwen3
Model Files: 3.5 GB
Supported Languages: en
Quantization Type: 4-bit
Model Architecture: Qwen3ForCausalLM
License: apache-2.0
Context Length: 262144
Model Max Length: 262144
Transformers Version: 4.56.2
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|PAD_TOKEN|>
Vocabulary Size: 151936
Torch Data Type: float16
Errors: replace
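The listed 3.5 GB VRAM figure is roughly consistent with 4-bit storage of about four billion weights plus overhead. A back-of-the-envelope check, where the effective bits-per-weight and the overhead factor are illustrative assumptions rather than measured values:

```python
# Rough size estimate for a 4-bit-quantized 4B-parameter model.
# 4.5 effective bits/weight (nf4 plus quantization constants) and a 25%
# overhead for unquantized layers and buffers are assumptions chosen for
# illustration, not figures from the model card.

def quantized_size_gib(n_params: float, bits_per_weight: float = 4.5,
                       overhead: float = 1.25) -> float:
    """Approximate weight footprint in GiB."""
    total_bytes = n_params * bits_per_weight / 8 * overhead
    return total_bytes / 2**30

print(round(quantized_size_gib(4e9), 2))  # → 2.62
```

The remaining gap to 3.5 GB is plausibly embeddings and norms kept in float16 plus runtime buffers; at inference the KV cache adds further VRAM on top of the weights, especially near the 262144-token maximum context.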

Quantized Models of the Qwen3 4B Finetune Merged 4bit

Model | Likes | Downloads | VRAM
Qwen3 4B Finetune Merged 4bit | 0 | 14 | 3 GB

Best Alternatives to Qwen3 4B Finetune Merged 4bit

Best Alternatives | Context / RAM | Downloads | Likes
Agent 4b | 256K / 8.1 GB | 362 | 0
Qwen3 4B Self Thinking 16bit | 256K / 8.1 GB | 1310 | 0
...Instruct 2507 Unsloth Bnb 4bit | 256K / 3.5 GB | 95554 | 14
...wen3 4B Thinking 2507 MLX 4bit | 256K / 2.3 GB | 69266 | 11
...wen3 4B Thinking 2507 MLX 8bit | 256K / 4.3 GB | 66476 | 7
...Instruct Haiku 4.5 Merged FP16 | 256K / 8.1 GB | 30 | 0
Qwen3 4B ARC MLX 4bit | 256K / 2 GB | 280 | 2
Qwen3 4B Instruct 2507 4bit | 256K / 2.3 GB | 17910 | 9
...Thinking 2507 Unsloth Bnb 4bit | 256K / 3.5 GB | 15067 | 2
Jan V3 4B Base Instruct 4bit | 256K / 2.3 GB | 541 | 2
Note: a green score (e.g. "73.2") indicates that the model outperforms bibihhehe/qwen3_4B_finetune_merged_4bit.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a