Field | Value
---|---
LLM Name | Kimi Dev 72B GPTQ 4bit
Repository 🤗 | https://huggingface.co/btbtyler09/Kimi-Dev-72B-GPTQ-4bit
Base Model(s) | 
Model Size | 72B
Required VRAM | 45.8 GB
Updated | 2025-07-02
Maintainer | btbtyler09
Model Type | qwen2
Model Files | 
GPTQ Quantization | Yes
Quantization Type | gptq, 4bit
Model Architecture | Qwen2ForCausalLM
License | MIT
Context Length | 131072
Model Max Length | 131072
Transformers Version | 4.52.4
Tokenizer Class | Qwen2TokenizerFast
Padding Token | <\|fim_pad\|>
Vocabulary Size | 152064
Torch Data Type | float16
Tokenizer Decode Errors | replace
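
Given the Qwen2ForCausalLM architecture, fast Qwen2 tokenizer, and float16 weights listed above, the checkpoint can be loaded with the standard Transformers APIs. Below is a minimal sketch, assuming a GPTQ-capable backend is installed (for example via `pip install optimum gptqmodel`) and roughly 46 GB of free GPU memory, per the card; the prompt and generation settings are placeholders.

```python
# Minimal loading sketch for the GPTQ 4-bit checkpoint (assumption: a GPTQ
# backend such as optimum + gptqmodel is installed alongside transformers).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "btbtyler09/Kimi-Dev-72B-GPTQ-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # Qwen2TokenizerFast
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # shard layers across available GPUs
    torch_dtype=torch.float16,  # matches the card's torch data type
)

# Placeholder prompt, just to exercise generation end to end.
inputs = tokenizer("Write a Python function that parses a URL.",
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
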
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
Qwen2.5 72B Instruct GPTQ Int4 | 32K / 41.6 GB | 23380 | 37 |
Qwen2.5 72B Instruct GPTQ Int8 | 32K / 77 GB | 2542 | 27 |
Qwen2 72B Instruct GPTQ Int4 | 32K / 41.6 GB | 3549 | 32 |
Qwen1.5 72B Chat GPTQ Int4 | 32K / 41.3 GB | 4357 | 37 |
Qwen2 72B Instruct GPTQ Int8 | 32K / 77 GB | 308 | 15 |
Qwen1.5 72B Chat GPTQ | 32K / 45.4 GB | 12 | 2 |
Qwen1.5 72B Chat GPTQ Int8 | 32K / 77 GB | 67 | 7 |
Qwen2 72B Bnb 4bit | 128K / 41.2 GB | 1751 | 4 |
...in 2.9.2 Qwen2 72B 6.0bpw EXL2 | 128K / 56.1 GB | 13 | 1 |
...Qwen2 72B 4 0bpw H6 EXL2 Pippa | 128K / 38.6 GB | 10 | 1 |
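
The Context / RAM column can be sanity-checked with rough arithmetic: 4-bit weights for a ~72B-parameter model occupy about 36 GB before quantization metadata and runtime overhead, consistent with the 41-46 GB figures listed for the Int4 variants. A small sketch (the 4.25 bits-per-weight figure is an assumed approximation for group-wise scales and zeros, not a value from this card):

```python
# Back-of-envelope VRAM estimate for a 4-bit GPTQ 72B model.
params = 72e9
bits_per_weight = 4.25  # assumption: ~4 bits + group-wise scale/zero metadata
weight_gb = params * bits_per_weight / 8 / 1e9
print(f"~{weight_gb:.1f} GB for quantized weights alone")  # ~38.2 GB
# Real usage (e.g. the 45.8 GB on this card) is higher: some tensors such as
# embeddings typically stay in fp16, and the KV cache grows with context.
```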