Qwen3 Swallow 8B SFT V0.2 by tokyotech-llm


Qwen3 Swallow 8B SFT V0.2 is an open-source language model by tokyotech-llm. Features: 8B parameters, 16.4 GB required VRAM, 40K context length, apache-2.0 license, LLM Explorer Score: 0.28.

Tags: Arxiv:2404.17733, Arxiv:2412.02595, Arxiv:2505.09388, Base model:finetune:tokyotech-..., Base model:tokyotech-llm/qwen3..., Conversational, Dataset:tokyotech-llm/lmsys-ch..., Dataset:tokyotech-llm/swallow-..., Dataset:tokyotech-llm/swallow-..., Dataset:tokyotech-llm/swallow-..., En, Endpoints compatible, Ja, Qwen3, Region:us, Safetensors, Sharded, Tensorflow

Qwen3 Swallow 8B SFT V0.2 Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen3 Swallow 8B SFT V0.2 (tokyotech-llm/Qwen3-Swallow-8B-SFT-v0.2)

Qwen3 Swallow 8B SFT V0.2 Parameters and Internals

LLM Name: Qwen3 Swallow 8B SFT V0.2
Repository: https://huggingface.co/tokyotech-llm/Qwen3-Swallow-8B-SFT-v0.2
Base Model(s): tokyotech-llm/Qwen3-Swallow-8B-CPT-v0.2
Model Size: 8B
Required VRAM: 16.4 GB
Updated: 2026-04-01
Maintainer: tokyotech-llm
Model Type: qwen3
Model Files: 1-of-5 (4.0 GB), 2-of-5 (4.0 GB), 3-of-5 (4.0 GB), 4-of-5 (3.2 GB), 5-of-5 (1.2 GB)
Supported Languages: en, ja
Model Architecture: Qwen3ForCausalLM
License: apache-2.0
Context Length: 40960
Model Max Length: 40960
Transformers Version: 4.57.3
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Errors: replace
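
As a quick illustration of the metadata above, the sketch below loads the checkpoint with the standard Hugging Face transformers AutoModel API. The prompt, generation settings, and the use of bfloat16 with device_map="auto" are assumptions for the example, not values published by tokyotech-llm; it also assumes the accelerate package is installed and that the repository ships a chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tokyotech-llm/Qwen3-Swallow-8B-SFT-v0.2"

# The Qwen2Tokenizer listed above is resolved automatically from the repo config.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# bfloat16 keeps the 8B model roughly within the 16.4 GB VRAM figure listed above;
# device_map="auto" (requires accelerate) is an assumption about the local setup.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)

# The model supports en/ja; a Japanese prompt is used here purely as an example.
messages = [{"role": "user", "content": "富士山の高さを教えてください。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# max_new_tokens=256 is an arbitrary illustrative choice, well under the
# 40,960-token context length listed above.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```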

Best Alternatives to Qwen3 Swallow 8B SFT V0.2

Best Alternatives | Context / RAM | Downloads / Likes
...n3 8B 320K Context 10X Massive | 320K / 16.4 GB | 200
...r Of Horror Jan V1 256K Ctx 8B | 256K / 16.1 GB | 333
... BIG Jan Horror V1 256K Ctx 8B | 256K / 16.1 GB | 440
Qwen3 8B 256K Context 8X Grand | 256K / 16.4 GB | 950
...wen3 8B 192K Context 6X Larger | 192K / 16.4 GB | 550
DeepSeek R1 0528 Qwen3 8B | 128K / 16.4 GB | 124167959
DeepSeek R1 0528 Qwen3 8B | 128K / 16.4 GB | 1249016
...1 0528 Qwen3 8B Abliterated V1 | 128K / 16.4 GB | 108529
...1 Qwen3 8B ArliAI RpR V4 Small | 128K / 16.4 GB | 111217
...8 Qwen3 8B Abliterated V1 Bf16 | 128K / 16.3 GB | 3991


Original data from Hugging Face, OpenCompass, and various public git repositories.
Release v20260328a