LLM Name | AVA Qwen1.5 7B Chat Gptq 4bit
---|---
Repository | https://huggingface.co/MehdiHosseiniMoghadam/AVA-Qwen1.5-7B-Chat-gptq-4bit
Model Size | 7B
Required VRAM | 5.8 GB
Updated | 2025-06-09
Maintainer | MehdiHosseiniMoghadam
Model Type | qwen2
Model Files |
GPTQ Quantization | Yes
Quantization Type | gptq\|4bit
Model Architecture | Qwen2ForCausalLM
Context Length | 32768
Model Max Length | 32768
Transformers Version | 4.40.2
Tokenizer Class | Qwen2Tokenizer
Padding Token | <\|im_end\|>
Vocabulary Size | 151646
Torch Data Type | float16
Errors | replace
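The fields above map directly onto a standard `transformers` loading call. Below is a minimal sketch, not taken from the model card itself: the model id, dtype, and chat template follow from the fields above, while the prompt text is illustrative. It assumes a CUDA GPU with roughly 5.8 GB of free VRAM and the `optimum` + `auto-gptq` packages installed alongside `transformers` (>= the card's 4.40.2), which is what lets `from_pretrained` pick up the GPTQ quantization config automatically.

```python
# Minimal loading sketch for this GPTQ 4-bit checkpoint (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MehdiHosseiniMoghadam/AVA-Qwen1.5-7B-Chat-gptq-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # resolves to Qwen2Tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # GPTQ kernels run on GPU; ~5.8 GB VRAM needed
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
)

# Qwen2 chat models use the ChatML-style <|im_start|>/<|im_end|> template;
# apply_chat_template builds and tokenizes the prompt (example message only).
messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```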
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---|
Qwen2 7B Int4 GPTQ Wikitext2 | 128K / 5.6 GB | 13 | 0 |
CodeQwen1.5 7B Chat GPTQ Int4 | 64K / 4.9 GB | 17 | 0 |
....5 Coder 7B Instruct GPTQ Int4 | 32K / 5.6 GB | 361273 | 8 |
Qwen2.5 7B Instruct GPTQ Int4 | 32K / 5.6 GB | 30741 | 23 |
Qwen2.5 7B Instruct GPTQ Int8 | 32K / 8.9 GB | 6798 | 15 |
....5 Coder 7B Instruct GPTQ Int8 | 32K / 8.9 GB | 856 | 4 |
Qwen2 7B Instruct GPTQ Int4 | 32K / 5.6 GB | 2248 | 27 |
Qwen2 7B Instruct GPTQ Int8 | 32K / 8.9 GB | 480 | 17 |
Qwen1.5 7B Int3 GPTQ Wikitext2 | 32K / 5 GB | 10 | 0 |
Qwen1.5 7B Int4 GPTQ Wikitext2 | 32K / 5.8 GB | 10 | 0 |