Qwen3.6 27B Claude Opus Reasoning Distilled GGUF by rico03


Qwen3.6 27B Claude Opus Reasoning Distilled GGUF is an open-source language model published by rico03. Key facts: 27B-parameter LLM, 10.7 GB VRAM required, Apache-2.0 license, GGUF-quantized, LLM Explorer Score 0.39.

Tags: base model (quantized): qwen/qwen..., base model: qwen/qwen3.6-27b, claude-opus, conversational, dataset: jackrong/qwen3.5-reaso..., dataset: nohurry/opus-4.6-reaso..., dataset: roman1111111/claude-op..., distillation, en, endpoints compatible, finetuned, gguf, llama-cpp, multilingual, ollama, q2, quantized, qwen3.6, qwen3_5, reasoning, region:us


Qwen3.6 27B Claude Opus Reasoning Distilled GGUF Parameters and Internals

LLM Name: Qwen3.6 27B Claude Opus Reasoning Distilled GGUF
Repository: https://huggingface.co/rico03/Qwen3.6-27B-Claude-Opus-Reasoning-Distilled-GGUF
Model Name: unsloth/Qwen3.6-27B
Base Model(s): Qwen/Qwen3.6-27B
Model Size: 27B
Required VRAM: 10.7 GB
Updated: 2026-04-24
Maintainer: rico03
Model Type: qwen3_5
Model Files: 10.7 GB, 13.3 GB, 16.5 GB, 15.6 GB, 19.2 GB, 18.7 GB, 22.1 GB, 28.6 GB
Supported Languages: en
GGUF Quantization: Yes
Quantization Types: gguf, q2, q4_k, q5_k
Model Architecture: Qwen3_5ForConditionalGeneration
License: apache-2.0
Torch Data Type: bfloat16
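As a rough sanity check on the file sizes listed above, a GGUF file's size is approximately parameter count × bits per weight ÷ 8. The bits-per-weight figures below are rough assumed averages for typical llama.cpp K-quant mixes, not values read from this repository; a minimal sketch:

```python
# Rough GGUF file-size estimate: params * bits_per_weight / 8 bytes.
# The bits-per-weight values are assumed approximate averages for
# llama.cpp K-quant mixes, not metadata from this repository.

def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in GB (decimal, 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

N = 27e9  # 27B parameters, per the listing

# An effective ~3.2 bits/weight reproduces the listed 10.7 GB smallest
# file; higher-precision quants land near the larger listed sizes.
for label, bpw in [("q2-class", 3.17), ("q4_k-class", 4.85), ("q5_k-class", 5.7)]:
    print(f"{label}: ~{gguf_size_gb(N, bpw):.1f} GB")
```

This is why the same 27B model appears as eight files from 10.7 GB to 28.6 GB: each is the same weights stored at a different average bit width.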

Best Alternatives to Qwen3.6 27B Claude Opus Reasoning Distilled GGUF

Model | Context / RAM | Downloads / Likes
... Opus Reasoning Distilled 4bit | 0K / 15.1 GB | 40736
Qwen3.5 27B 4bit DWQ | 0K / 15.2 GB | 14185
MLX Qwopus3.5 27B V3 6bit | 0K / 21.8 GB | 9012
...e 4.6 Opus Reasoning Distilled | 0K / 55.2 GB | 2532591532
Qwen3.5 27B | 0K / 54.7 GB | 3908313
....6 Opus Reasoning Distilled V2 | 0K / 55.2 GB | 483226113
Qwopus3.5 27B V3 | 0K / 54.7 GB | 30121216
Qwen3.6 27B NVFP4 | 0K / 31 GB | 14643
...laude Opus Reasoning Distilled | 0K / 55.4 GB | 3013
Qwopus3.5 27B V3 HLWQ V7 GPTQ | 0K / 19.2 GB | 6675



Original data from HuggingFace, OpenCompass and various public git repos.