| Attribute | Value |
|---|---|
| LLM Name | OpenBuddy R1 0528 Distill Qwen3 32B Preview7 QAT 200K |
| Repository 🤗 | https://huggingface.co/OpenBuddy/OpenBuddy-R1-0528-Distill-Qwen3-32B-Preview7-QAT-200K |
| Base Model(s) | |
| Model Size | 32b | 
| Required VRAM | 65.8 GB | 
| Updated | 2025-10-07 | 
| Maintainer | OpenBuddy | 
| Model Type | qwen3 | 
| Model Files | |
| Supported Languages | zh en fr de ja ko it fi | 
| Model Architecture | Qwen3ForCausalLM | 
| License | apache-2.0 | 
| Context Length | 200000 | 
| Model Max Length | 200000 | 
| Transformers Version | 4.53.0.dev0 | 
| Tokenizer Class | Qwen2Tokenizer | 
| Padding Token | <|endoftext|> | 
| Vocabulary Size | 152064 | 
| Torch Data Type | bfloat16 | 
| Errors | replace | 
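The listed Required VRAM (65.8 GB) is consistent with a weights-only estimate for a ~32.9B-parameter model stored in bfloat16 (2 bytes per parameter). The helper below is a minimal sketch of that back-of-the-envelope calculation; the exact parameter count of 32.9B is an assumption inferred from the table, and the estimate ignores KV cache and activation memory, which grow with context length.

```python
def estimate_vram_gb(n_params: float, bytes_per_param: int) -> float:
    """Weights-only VRAM estimate in decimal GB.

    Excludes KV cache, activations, and framework overhead,
    so real usage at long contexts (e.g. 200K) will be higher.
    """
    return n_params * bytes_per_param / 1e9

# bfloat16 = 2 bytes per parameter; ~32.9B params is an assumed count
weights_gb = estimate_vram_gb(32.9e9, 2)
print(f"{weights_gb:.1f} GB")  # ≈ 65.8, matching the table's Required VRAM
```

The FP8 variant in the alternatives table (34.5 GB) follows the same arithmetic with roughly 1 byte per parameter.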
| Best Alternatives | Context / VRAM | Downloads | Likes |
|---|---|---|---|
| ...ll Qwen3 32B Preview6 QAT 200K | 195K / 65.8 GB | 13 | 2 | 
| ...ll Qwen3 32B Preview4 QAT 200K | 195K / 65.8 GB | 7 | 2 | 
| Qwen3 32B | 40K / 65.6 GB | 5942227 | 549 | 
| UIGEN X 32B 0727 | 40K / 65.8 GB | 1266 | 154 | 
| DeepSWE Preview | 40K / 131.6 GB | 2030 | 181 | 
| Goedel Prover V2 32B | 40K / 65.8 GB | 12711 | 51 | 
| Qwen3 32B FP8 | 40K / 34.5 GB | 67063 | 63 | 
| T Pro It 2.0 | 40K / 65.8 GB | 1716 | 116 | 
| MiroThinker 32B DPO V0.1 | 40K / 65.6 GB | 1142 | 36 | 
| Qwen3 32B | 40K / 65.8 GB | 17408 | 11 | 
Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference!