Qwen3 4B Thinking Full Pretrain Mix High Tweet 1M En GPT by AmberYifan


Tags: Autotrain compatible · Base model:finetune:qwen/qwen3... · Base model:qwen/qwen3-4b-think... · Conversational · Endpoints compatible · Full · Generated from trainer · Llama-factory · Qwen3 · Region:us · Safetensors · Sharded · Tensorflow

Qwen3 4B Thinking Full Pretrain Mix High Tweet 1M En GPT Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen3 4B Thinking Full Pretrain Mix High Tweet 1M En GPT (AmberYifan/qwen3-4b-thinking-full-pretrain-mix-high-tweet-1m-en-gpt)

Qwen3 4B Thinking Full Pretrain Mix High Tweet 1M En GPT Parameters and Internals

LLM Name: Qwen3 4B Thinking Full Pretrain Mix High Tweet 1M En GPT
Repository 🤗: https://huggingface.co/AmberYifan/qwen3-4b-thinking-full-pretrain-mix-high-tweet-1m-en-gpt
Base Model(s): Qwen3 4B Thinking 2507 (Qwen/Qwen3-4B-Thinking-2507)
Model Size: 4b
Required VRAM: 8.1 GB
Updated: 2025-09-01
Maintainer: AmberYifan
Model Type: qwen3
Model Files: 5.0 GB (1-of-2), 3.1 GB (2-of-2), 0.0 GB
Model Architecture: Qwen3ForCausalLM
License: apache-2.0
Context Length: 262144
Model Max Length: 262144
Transformers Version: 4.52.4
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
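The Required VRAM figure above is consistent with simple back-of-envelope arithmetic: weights stored in bfloat16 take 2 bytes per parameter, and the two safetensors shards (5.0 GB + 3.1 GB) total 8.1 GB. A minimal sketch of that estimate, assuming a parameter count of about 4.05 billion (an assumption chosen to match the listed shard sizes; the exact count is not stated on this page):

```python
# Rough weight-memory estimate for a bfloat16 checkpoint.
# Assumption: ~4.05e9 parameters, picked to match the 8.1 GB shard total
# listed above; the page does not state the exact parameter count.
def weight_memory_gb(n_params: float, bytes_per_param: float = 2.0) -> float:
    """Approximate checkpoint size in GB (using 1 GB = 1e9 bytes,
    the convention the file sizes above appear to use)."""
    return n_params * bytes_per_param / 1e9

print(round(weight_memory_gb(4.05e9), 1))  # bf16 = 2 bytes per parameter
```

Note that this covers the weights only; at inference time the KV cache, which grows with context length (up to 262144 tokens here), adds further memory on top.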

Best Alternatives to Qwen3 4B Thinking Full Pretrain Mix High Tweet 1M En GPT

Best Alternatives | Context / RAM | Downloads | Likes
Qwen3 4B Instruct 2507 | 256K / 8.1 GB | 588760 | 253
Qwen3 4B Thinking 2507 | 256K / 8.1 GB | 164844 | 336
Jan V1 4B | 256K / 8.1 GB | 11108 | 320
Qwen3 4B Thinking 2507 FP8 | 256K / 5.2 GB | 136317 | 26
Qwen3 4B Instruct 2507 FP8 | 256K / 5.2 GB | 21586 | 26
Qwen3 4B Instruct 2507 | 256K / 8.1 GB | 19984 | 8
...4B Thinking 2507 DAG Reasoning | 256K / 16.1 GB | 8990 | 4
Qwen3 4B Thinking 2507 | 256K / 8.1 GB | 5558 | 3
Luna | 256K / 8.1 GB | 35 | 3
Test7 | 256K / 8.1 GB | 479 | 0
Note: a green score (e.g. "73.2") means that the model is better than AmberYifan/qwen3-4b-thinking-full-pretrain-mix-high-tweet-1m-en-gpt.

Rank the Qwen3 4B Thinking Full Pretrain Mix High Tweet 1M En GPT Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you looking for? 51043 models in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124