Qwen3 30B A3B Thinking 2507 Deepseek V3.1 Distill FP32 by BasedBase


Base model (finetune): qwen/qwen3...   Base model: qwen/qwen3-30b-a3b-...
Tags: code-generation, distillation, lora, lora-merged, mixture-of-experts, moe, qwen, qwen3-moe, region:us, safetensors, sharded, svd, tensorflow

Qwen3 30B A3B Thinking 2507 Deepseek V3.1 Distill FP32 Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen3 30B A3B Thinking 2507 Deepseek V3.1 Distill FP32 (BasedBase/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-FP32)

Qwen3 30B A3B Thinking 2507 Deepseek V3.1 Distill FP32 Parameters and Internals

LLM Name: Qwen3 30B A3B Thinking 2507 Deepseek V3.1 Distill FP32
Repository 🤗: https://huggingface.co/BasedBase/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-FP32
Base Model(s): Qwen3 30B A3B Thinking 2507 (Qwen/Qwen3-30B-A3B-Thinking-2507)
Model Size: 30B
Required VRAM: 122.2 GB
Updated: 2025-08-31
Maintainer: BasedBase
Model Type: qwen3_moe
Model Files: 15 shards of 8.0 GB each (1-of-16 through 15-of-16) plus one 2.2 GB shard (16-of-16)
Model Architecture: Qwen3MoeForCausalLM
License: apache-2.0
Context Length: 262144
Model Max Length: 262144
Transformers Version: 4.55.4
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
LoRA Model: Yes
Torch Data Type: bfloat16
Errors: replace
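As a quick sanity check, the "Required VRAM" figure matches the sum of the listed safetensors shards (15 × 8.0 GB + 2.2 GB). A minimal sketch of that arithmetic, using only the shard sizes from the listing above (the variable names are illustrative, not from the model card):

```python
# Shard sizes copied from the model-file listing: 15 shards of 8.0 GB
# plus one final 2.2 GB shard (16-of-16).
shard_sizes_gb = [8.0] * 15 + [2.2]

# Total on-disk checkpoint size, rounded to one decimal place.
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 122.2 — matches the listed "Required VRAM: 122.2 GB"
```

Note that actual VRAM usage at inference time also depends on the KV cache, which grows with context length (up to 262144 tokens here), so treat 122.2 GB as the weight footprint rather than a hard runtime total.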

Best Alternatives to Qwen3 30B A3B Thinking 2507 Deepseek V3.1 Distill FP32

Best Alternatives | Context / RAM | Downloads / Likes
...07 YOYO2 TOTAL RECALL Instruct | 986K / 84.8 GB | 220
Qwen3 30B A3B YOYO V2 | 986K / 61.1 GB | 121
Qwen3 30B A3B Instruct 2507 | 256K / 61.1 GB | 906142519
Qwen3 Coder 30B A3B Instruct | 256K / 61.1 GB | 364250526
Qwen3 30B A3B Thinking 2507 | 256K / 61.1 GB | 203915250
...wen3 30B A3B Instruct 2507 FP8 | 256K / 31.2 GB | 11507667
...en3 Coder 30B A3B Instruct FP8 | 256K / 31.2 GB | 8529764
...wen3 30B A3B Thinking 2507 FP8 | 256K / 31.2 GB | 2917132
Qwen3 Coder 30B A3B Instruct | 256K / 61.1 GB | 668714
Qwen3 30B A3B Instruct 2507 | 256K / 61.1 GB | 727611
Note: green Score (e.g. "73.2") means that the model is better than BasedBase/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-FP32.

Rank the Qwen3 30B A3B Thinking 2507 Deepseek V3.1 Distill FP32 Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 51,022 models are listed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124