LLM Name | Qwen3 235B A22B Thinking 2507 FP8 |
Repository 🤗 | https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507-FP8 |
Base Model(s) | |
Model Size | 235B |
Required VRAM | 236.5 GB |
Updated | 2025-09-14 |
Maintainer | Qwen |
Model Type | qwen3_moe |
Model Files | |
Model Architecture | Qwen3MoeForCausalLM |
License | apache-2.0 |
Context Length | 262144 |
Model Max Length | 262144 |
Transformers Version | 4.51.0 |
Tokenizer Class | Qwen2Tokenizer |
Padding Token | <|endoftext|> |
Vocabulary Size | 151936 |
Torch Data Type | bfloat16 |
Tokenizer Errors Handling | replace |
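
Given the config fields above (repository ID, Qwen2Tokenizer, 262144-token context, Transformers 4.51.0), a minimal loading sketch follows. The prompt, generation settings, and device mapping are illustrative assumptions, not values taken from this page.

```python
# Minimal loading sketch based on the metadata table above; assumes
# transformers >= 4.51.0 (per the table) and enough GPU memory for the
# ~236.5 GB FP8 checkpoint. Prompt and generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-235B-A22B-Thinking-2507-FP8"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep checkpoint dtypes (bfloat16 compute, FP8 weights)
    device_map="auto",    # shard across available GPUs
)

messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```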
Best Alternatives | Context / VRAM | Downloads | Likes |
---|---|---|---|
...en3 235B A22B Instruct 2507 1M | 986K / 171.9 GB | 18 | 1 |
Qwen3 235B A22B Instruct 2507 | 256K / 471 GB | 90770 | 673 |
Qwen3 235B A22B Thinking 2507 | 256K / 179.9 GB | 58006 | 348 |
...n3 235B A22B Instruct 2507 FP8 | 256K / 236.5 GB | 73333 | 123 |
Qwen3 235B A22B Instruct 2507 | 256K / 471 GB | 401 | 8 |
...n3 235B A22B Instruct 2507 FP8 | 256K / 236.5 GB | 489 | 3 |
Qwen3 235B A22B Thinking 2507 | 256K / 167.9 GB | 118 | 2 |
...n3 235B A22B Thinking 2507 FP8 | 256K / 236.5 GB | 28 | 2 |
Qwen3 235B A22B | 40K / 175.9 GB | 146631 | 1037 |
Qwen3 235B A22B FP8 | 40K / 220 GB | 96693 | 87 |