QwQ 32B Preview AWQ by KirillR


QwQ 32B Preview AWQ is an open-source language model maintained by KirillR: an AWQ-quantized, 32B-parameter LLM requiring 19.4 GB of VRAM, with a 32K context window, an apache-2.0 license, and an LLM Explorer Score of 0.17.

Tags: Arxiv:2407.10671 · 4-bit · Awq · Base model:quantized:qwen/qwq-... · Base model:qwen/qwq-32b-previe... · Conversational · Deploy:azure · En · Endpoints compatible · Quantized · Qwen2 · Region:us · Safetensors · Sharded · Tensorflow


QwQ 32B Preview AWQ Parameters and Internals

Model Type: text generation
Use Cases:
  Areas: research
  Applications: mathematics, coding
  Primary use cases: AI reasoning in mathematics and coding
  Limitations: language mixing and code switching; recursive reasoning loops; weaker performance in common-sense reasoning and nuanced language understanding
Additional Notes: Quantization significantly reduces the model's memory and compute requirements, making it suitable for deployment on hardware with limited resources.
Supported Languages: en (proficient)
Responsible AI Considerations:
  Mitigation strategies: enhanced safety measures are needed to ensure reliable and secure performance.
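
The deployment notes above target resource-constrained hardware; a minimal loading sketch using the Hugging Face transformers API follows. Assumptions not stated on this page: `transformers` >= 4.46 with `autoawq` installed and a GPU with roughly 20 GB of free VRAM; the prompt text is illustrative, and the `RUN_DEMO` flag exists only so the file can be imported without downloading the checkpoint.

```python
# Minimal sketch: loading the AWQ checkpoint with transformers.
# Assumes `transformers` >= 4.46 and `autoawq` are installed, plus a GPU
# with roughly 20 GB of free VRAM.

MODEL_ID = "KirillR/QwQ-32B-Preview-AWQ"
RUN_DEMO = False  # flip to True to actually download and run the model


def build_messages(question: str) -> list:
    """Chat messages in the shape expected by apply_chat_template."""
    return [{"role": "user", "content": question}]


if RUN_DEMO:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="float16",  # matches the checkpoint's listed torch dtype
        device_map="auto",      # place layers on the available GPU(s)
    )

    inputs = tokenizer.apply_chat_template(
        build_messages("How many prime numbers are there below 100?"),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=512)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For batch or server workloads, an OpenAI-compatible inference server such as vLLM can also serve AWQ checkpoints; the transformers path above is simply the lowest-dependency option.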
LLM Name: QwQ 32B Preview AWQ
Repository: https://huggingface.co/KirillR/QwQ-32B-Preview-AWQ
Base Model(s): QwQ 32B Preview (Qwen/QwQ-32B-Preview)
Model Size: 32b
Required VRAM: 19.4 GB
Updated: 2026-04-24
Maintainer: KirillR
Model Type: qwen2
Model Files: 3.9 GB (1-of-5), 4.0 GB (2-of-5), 4.0 GB (3-of-5), 4.0 GB (4-of-5), 3.5 GB (5-of-5)
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.46.3
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 152064
Torch Data Type: float16
Errors: replace
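
As a quick consistency check on the figures above, the five safetensors shards sum to the listed VRAM requirement, and the context length equals the model max length (all numbers taken directly from this page):

```python
# Consistency check on the spec table above.
shard_sizes_gb = [3.9, 4.0, 4.0, 4.0, 3.5]  # the five model files
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 19.4, matching the listed Required VRAM

context_length = 32768    # Context Length
model_max_length = 32768  # Model Max Length
print(context_length == model_max_length)  # True: the full 32K window is usable
```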

Best Alternatives to QwQ 32B Preview AWQ

Best Alternatives | Context / RAM | Downloads / Likes
...epSeek R1 Distill Qwen 32B AWQ | 128K / 19.4 GB | 1188037
...ekR1 QwQ SkyT1 32B Preview AWQ | 128K / 19.4 GB | 108
QwQ 32B AWQ | 40K / 19.4 GB | 134483133
Qwen1.5 32B Chat AWQ | 32K / 21.2 GB | 208818
Qwen2.5 32B Instruct AWQ | 32K / 19.4 GB | 134208098
Qwen2.5 Coder 32B Instruct AWQ | 32K / 19.4 GB | 68887635
Openhands Lm 32B V0.1 AWQ | 32K / 19.4 GB | 219
...k R1 Distill Qwen 32B Bnb 4bit | 128K / 19.2 GB | 807129
Hydraulic Deepseek 16bit | 128K / 65.8 GB | 520
...pSeek R1 Distill Qwen 32B 4bit | 128K / 18.5 GB | 440846


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a