Qwen2.5 32B AGI by AiCloser


Qwen2.5 32B AGI is an open-source language model by AiCloser: a 32B-parameter, instruction-tuned LLM with a 32K context window, requiring 44.6 GB of VRAM, released under the apache-2.0 license. LLM Explorer Score: 0.18.

Tags: Conversational, Instruct, Qwen2, Endpoints compatible, Safetensors, Sharded, Tensorflow, Region:us
Base model: qwen/qwen2.5-32b-in... (finetune: qwen/qwen2...)
Datasets: anthracite-org/kalo-op..., orion-zhen/dpo-toxic-z..., unalignment/toxic-dpo-...
Languages: Ara, Deu, Eng, Fra, Ita, Jpn, Kor, Por, Rus, Spa, Tha, Vie, Zho

Qwen2.5 32B AGI Benchmarks

nn.n% — the model's score relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Qwen2.5 32B AGI Parameters and Internals

Additional Notes
Here, "AGI" stands for Aspirational Grand Illusion. The model is designed to address "Hypercensuritis", i.e. it emphasizes reducing excessive censorship relative to typical instruction-tuned models.
Supported Languages 
zh (Chinese), en (English)
LLM Name: Qwen2.5 32B AGI
Repository: https://huggingface.co/AiCloser/Qwen2.5-32B-AGI
Base Model(s): Qwen/Qwen2.5-32B-Instruct
Model Size: 32B
Required VRAM: 44.6 GB
Updated: 2026-04-13
Maintainer: AiCloser
Model Type: qwen2
Instruction-Based: Yes
Model Files: 66 safetensors shards; 1.6 GB (1-of-66), then 1.0 GB each (2-of-66 through 44-of-66); remaining shards not shown
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.44.2
Vocabulary Size: 152064
Torch Data Type: bfloat16
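
Because the weights are standard sharded safetensors with the Qwen2ForCausalLM architecture, the model should load through the usual Hugging Face transformers workflow; from_pretrained downloads and assembles all 66 shards automatically. A minimal sketch, assuming transformers >= 4.44.2 (the version listed above) and enough GPU memory for the bfloat16 weights; the prompt is illustrative:

    # Minimal loading/generation sketch. Assumes transformers >= 4.44.2 and
    # enough GPU memory for the bfloat16 weights; the prompt is illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "AiCloser/Qwen2.5-32B-AGI"  # repository listed above

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the listed Torch Data Type
        device_map="auto",           # place shards across available GPUs
    )

    # The model is instruction-based, so format prompts with the chat template.
    messages = [{"role": "user", "content": "用一句话介绍一下你自己。"}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=256)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

With device_map="auto", the accelerate backend splits the listed 44.6 GB of weights across whatever GPUs are available rather than requiring a single device.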

Best Alternatives to Qwen2.5 32B AGI

Best Alternatives | Context / RAM | Downloads/Likes
...y Qwen2.5coder 32B V24.1q 200K | 195K / 65.8 GB | 82
...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 3017
...wen2.5 32B Inst BaseMerge TIES | 128K / 65.8 GB | 224
Franqwenstein 35B | 128K / 69.8 GB | 88
ELYZA Thinking 1.0 Qwen 32B | 128K / 65.8 GB | 2039
Hamanasu Magnum QwQ 32B | 128K / 65.8 GB | 2959
Archaeo 32B KTO | 128K / 65.8 GB | 44
Qwen2.5 32B Gokgok Step3 | 128K / 65.7 GB | 60
Qwen2.5 32B Dark Days Stage2 | 128K / 65.8 GB | 70
Qwen2.5 32B YOYO MIX | 128K / 65.7 GB | 52
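A comparable list of alternatives can also be pulled programmatically from the Hugging Face Hub with huggingface_hub. A small sketch; the search string, sort key, and limit are illustrative assumptions, not how the table above was produced:

    # Sketch: list popular Qwen2.5 32B models from the Hugging Face Hub.
    # Search string, sort key, and limit are illustrative assumptions.
    from huggingface_hub import HfApi

    api = HfApi()
    for m in api.list_models(search="Qwen2.5 32B", sort="downloads", direction=-1, limit=10):
        print(f"{m.id}: downloads={m.downloads}, likes={m.likes}")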



Original data from HuggingFace, OpenCompass and various public git repos.