Kimi Dev 72B GPTQ 4bit by btbtyler09


Tags: 4-bit, 4bit, Autotrain compatible, Base model:moonshotai/kimi-dev..., Base model:quantized:moonshota..., Code, Conversational, Endpoints compatible, Gptq, Issue-resolving, Quantized, Qwen2, Region:us, Safetensors, Sharded, Software, Swebench, Tensorflow

Kimi Dev 72B GPTQ 4bit Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Kimi Dev 72B GPTQ 4bit (btbtyler09/Kimi-Dev-72B-GPTQ-4bit)

Kimi Dev 72B GPTQ 4bit Parameters and Internals

LLM Name: Kimi Dev 72B GPTQ 4bit
Repository 🤗: https://huggingface.co/btbtyler09/Kimi-Dev-72B-GPTQ-4bit
Base Model(s): Kimi Dev 72B (moonshotai/Kimi-Dev-72B)
Model Size: 72b
Required VRAM: 45.8 GB
Updated: 2025-07-02
Maintainer: btbtyler09
Model Type: qwen2
Model Files: 2.5 GB: 1-of-24, 1.9 GB: 2-of-24, 1.9 GB: 3-of-24, 1.9 GB: 4-of-24, 2.0 GB: 5-of-24, 2.0 GB: 6-of-24, 1.9 GB: 7-of-24, 1.9 GB: 8-of-24, 1.9 GB: 9-of-24, 2.0 GB: 10-of-24, 2.0 GB: 11-of-24, 1.9 GB: 12-of-24, 1.9 GB: 13-of-24, 1.9 GB: 14-of-24, 2.0 GB: 15-of-24, 2.0 GB: 16-of-24, 1.9 GB: 17-of-24, 1.9 GB: 18-of-24, 1.9 GB: 19-of-24, 2.0 GB: 20-of-24, 2.0 GB: 21-of-24, 1.9 GB: 22-of-24, 2.5 GB: 23-of-24, 0.1 GB: 24-of-24
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: Qwen2ForCausalLM
License: mit
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.52.4
Tokenizer Class: Qwen2TokenizerFast
Padding Token: <|fim_pad|>
Vocabulary Size: 152064
Torch Data Type: float16
Errors: replace
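The 24 sharded safetensors files listed above account for the stated "Required VRAM" of 45.8 GB. A quick sketch to check the arithmetic (shard sizes copied from the Model Files list, each rounded to 0.1 GB as shown there):

```python
# Shard sizes in GB, in order 1-of-24 through 24-of-24, as listed on this page.
shard_sizes_gb = [
    2.5, 1.9, 1.9, 1.9, 2.0, 2.0, 1.9, 1.9, 1.9, 2.0, 2.0, 1.9,
    1.9, 1.9, 2.0, 2.0, 1.9, 1.9, 1.9, 2.0, 2.0, 1.9, 2.5, 0.1,
]

# Sum and round to one decimal, matching the precision of the listed sizes.
total_gb = round(sum(shard_sizes_gb), 1)
print(total_gb)  # 45.8
```

The total matches the "Required VRAM: 45.8 GB" row, so the weights alone fill that budget; actual serving needs additional headroom for the KV cache, especially at the full 131072-token context.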

Best Alternatives to Kimi Dev 72B GPTQ 4bit

Best Alternatives | Context / RAM | Downloads | Likes
Qwen2.5 72B Instruct GPTQ Int4 | 32K / 41.6 GB | 23380 | 37
Qwen2.5 72B Instruct GPTQ Int8 | 32K / 77 GB | 2542 | 27
Qwen2 72B Instruct GPTQ Int4 | 32K / 41.6 GB | 3549 | 32
Qwen1.5 72B Chat GPTQ Int4 | 32K / 41.3 GB | 4357 | 37
Qwen2 72B Instruct GPTQ Int8 | 32K / 77 GB | 308 | 15
Qwen1.5 72B Chat GPTQ | 32K / 45.4 GB | 12 | 2
Qwen1.5 72B Chat GPTQ Int8 | 32K / 77 GB | 67 | 7
Qwen2 72B Bnb 4bit | 128K / 41.2 GB | 175 | 14
...in 2.9.2 Qwen2 72B 6.0bpw EXL2 | 128K / 56.1 GB | 13 | 1
...Qwen2 72B 4 0bpw H6 EXL2 Pippa | 128K / 38.6 GB | 10 | 1

Rank the Kimi Dev 72B GPTQ 4bit Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124