Qwen3 30B A6B 16 Extreme 128K Context by DavidAU


Tags: 128K context, 16 experts, AutoTrain compatible, Base model (finetune): qwen/qwen3..., Base model: qwen/qwen3-30b-a3b-..., Conversational, Endpoints compatible, Qwen3, Qwen3 MoE, Reasoning, Region: US, Safetensors, Sharded, TensorFlow, Thinking

Qwen3 30B A6B 16 Extreme 128K Context Benchmarks

nn.n%: how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Qwen3 30B A6B 16 Extreme 128K Context (DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context)

Qwen3 30B A6B 16 Extreme 128K Context Parameters and Internals

LLM Name: Qwen3 30B A6B 16 Extreme 128K Context
Repository: https://huggingface.co/DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context
Base Model(s): Qwen3 30B A3B Base (Qwen/Qwen3-30B-A3B-Base)
Model Size: 30B
Required VRAM: 61.1 GB
Updated: 2025-06-17
Maintainer: DavidAU
Model Type: qwen3_moe
Model Files: 16 sharded safetensors files — 4.0 GB each for shards 1-of-16 through 15-of-16, and 1.1 GB for shard 16-of-16
Model Architecture: Qwen3MoeForCausalLM
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.51.0
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
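As a quick arithmetic check on the figures above: the sixteen safetensors shards (fifteen at 4.0 GB plus one at 1.1 GB) add up to the listed 61.1 GB of required VRAM, and the 131072-token limit is the power-of-two reading of "128K". A minimal sketch of that check:

```python
# Sanity-check the figures from the parameters table above.

# 16 sharded safetensors files: shards 1-15 are 4.0 GB, shard 16 is 1.1 GB.
shard_sizes_gb = [4.0] * 15 + [1.1]
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 61.1 -- matches the listed required VRAM

# "128K context" is the power-of-two reading: 131072 = 128 * 1024 tokens.
print(131072 == 128 * 1024)  # True
```

The shard total also lines up with roughly 30B parameters stored in bfloat16 (2 bytes per weight, about 61 GB in decimal gigabytes).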

Best Alternatives to Qwen3 30B A6B 16 Extreme 128K Context

Best Alternatives                  | Context / RAM  | Downloads | Likes
Qwen3 30B A3B                      | 40K / 61.1 GB  | 317297    | 649
Qwen3 30B A3B FP8                  | 40K / 32.5 GB  | 183526    | 63
Qwen3 16B A3B                      | 40K / 32.1 GB  | 1724      | 82
Qwen3 30B A6B 16 Extreme           | 40K / 61.1 GB  | 2271      | 51
Qwen3 30B A3B Abliterated          | 40K / 61.1 GB  | 2177      | 32
...en3 30B A3B Abliterated Erotic  | 40K / 61.1 GB  | 1355      | 12
...en3 30B A3B ArliAI RpR V4 Fast  | 40K / 61.1 GB  | 165       | 8
Qwen3 30B A1.5B High Speed         | 40K / 61.1 GB  | 1302      | 10
Qwen3 30B A3B.w4a16                | 40K / 16.7 GB  | 1717      | 7
Qwen3 30B A3B MLX Bf16             | 40K / 61.2 GB  | 47        | 3
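The RAM column above tracks weight precision directly: the FP8 variant needs roughly half the memory of the bfloat16 original (32.5 GB vs 61.1 GB), and the w4a16 quant roughly a quarter (16.7 GB). A rough illustration of that scaling — the ~30.5B parameter count and the per-weight bit widths are assumptions for the sketch, not figures from this listing:

```python
# Rough weight-memory scaling for a ~30.5B-parameter model at different
# precisions (illustrative estimate only; ignores activations, KV cache,
# and quantization overhead such as scales and unquantized embeddings).
params = 30.5e9  # assumed parameter count; the listing only says "30b"

def weight_gb(bits_per_weight: float) -> float:
    """Decimal gigabytes needed to store the weights alone."""
    return params * bits_per_weight / 8 / 1e9

for name, bits in [("bfloat16", 16), ("fp8", 8), ("int4 (w4a16)", 4)]:
    print(f"{name:>12}: ~{weight_gb(bits):.1f} GB")
```

The estimates (~61 GB, ~30.5 GB, ~15.3 GB) land close to the table's 61.1 / 32.5 / 16.7 GB; the gap at lower precisions comes from the parts of a quantized checkpoint kept at higher precision.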



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124