Qwen2.5 Taiwan 3B Reason SFT by benchang1110


Qwen2.5 Taiwan 3B Reason SFT is an open-source language model by benchang1110. Features: 3B parameters, 6.8 GB VRAM required, 32K context, "other" license, instruction-based. LLM Explorer Score: 0.19.
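The listed 6.8 GB VRAM figure is consistent with storing the weights in bfloat16. A back-of-the-envelope check (my arithmetic, not an official figure from the model card; note it covers weights only, with KV cache and activations needing extra memory):

```python
# Rough sanity check: the checkpoint's two safetensors shards total
# 5.0 + 1.8 = 6.8 GB, and weights are stored in bfloat16 (2 bytes per
# parameter), so the file holds roughly 6.8e9 / 2 = 3.4e9 parameters,
# consistent with the listed "3B" model size.

BYTES_PER_PARAM_BF16 = 2  # bfloat16 = 16 bits per value

def approx_param_count(total_weight_bytes: float,
                       bytes_per_param: int = BYTES_PER_PARAM_BF16) -> float:
    """Estimate parameter count from on-disk weight size."""
    return total_weight_bytes / bytes_per_param

shard_sizes_gb = [5.0, 1.8]  # shard sizes from the "Model Files" field below
params = approx_param_count(sum(shard_sizes_gb) * 1e9)
print(f"~{params / 1e9:.1f}B parameters")  # ~3.4B
```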

Tags: Arxiv:2501.12948 · Base model: benchang1110/qwen2... · Base model (finetune): benchang11... · Conversational · Dataset: congliu/chinese-deepse... · Endpoints compatible · Instruct · Qwen2 · Region: us · Safetensors · Sharded · Tensorflow · Zh

Qwen2.5 Taiwan 3B Reason SFT Benchmarks

Benchmark scores are reported as percentages relative to reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Qwen2.5 Taiwan 3B Reason SFT Parameters and Internals

LLM Name: Qwen2.5 Taiwan 3B Reason SFT
Repository: 🤗 https://huggingface.co/benchang1110/Qwen2.5-Taiwan-3B-Reason-SFT
Base Model(s): benchang1110/Qwen2.5-Taiwan-3B-Instruct
Model Size: 3B
Required VRAM: 6.8 GB
Updated: 2026-03-29
Maintainer: benchang1110
Model Type: qwen2
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-2), 1.8 GB (2-of-2)
Supported Languages: zh
Model Architecture: Qwen2ForCausalLM
License: other
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.49.0
Tokenizer Class: LlamaTokenizerFast
Beginning of Sentence Token: <|begin▁of▁sentence|>
End of Sentence Token: <|end▁of▁sentence|>
Vocabulary Size: 151936
Torch Data Type: bfloat16

Best Alternatives to Qwen2.5 Taiwan 3B Reason SFT

Best Alternatives               Context / RAM    Downloads  Likes
Saba2 3B                        128K / 6.2 GB            7      0
Tessa T1 3B                     117K / 6.2 GB            9      5
UIGEN T1.5 3B                   117K / 6.2 GB            5      1
Qwen2.5 3B Instruct              32K / 6.2 GB      4746260    319
SmallThinker 3B Preview          32K / 6.8 GB        30529    412
Chirp 01                         32K / 6.2 GB            7     14
Menda 3B 500                     32K / 6.2 GB            6      0
Menda 3B 750                     32K / 6.2 GB            3      1
Qwen2.5 3B Model Stock V3.1      32K / 6.8 GB            6      3
Qwen2.5 3B Model Stock V4.1      32K / 6.8 GB            3      2



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a