| LLM Name | Phi 2 Quantize Gptq |
|---|---|
| Repository 🤗 | https://huggingface.co/styalai/phi-2_quantize_gptq |
| Model Size | 600.5M params |
| Required VRAM | 1.8 GB |
| Updated | 2025-10-06 |
| Maintainer | styalai |
| Model Type | phi |
| Model Files |  |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | PhiForCausalLM |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.38.2 |
| Tokenizer Class | CodeGenTokenizer |
| Vocabulary Size | 51200 |
| Torch Data Type | float16 |
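The card above gives everything needed to load the checkpoint. Below is a minimal loading sketch, assuming a recent transformers (the card lists 4.38.2) with the optimum and auto-gptq packages installed so the GPTQ weights can be decoded; the repo id comes from the card, while the prompt and generation settings are purely illustrative.

```python
# Minimal sketch for loading styalai/phi-2_quantize_gptq.
# Assumes: pip install transformers optimum auto-gptq (GPTQ support).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "styalai/phi-2_quantize_gptq"

# The GPTQ quantization config is read from the repo automatically;
# device_map="auto" places the ~1.8 GB of packed weights on the GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)  # CodeGenTokenizer per the card
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Phi-2 style instruct prompt (illustrative, not prescribed by the repo).
prompt = "Instruct: Explain GPTQ quantization in one sentence.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt + completion within the 2048-token context window.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since PhiForCausalLM has been a native transformers architecture since v4.37, no trust_remote_code flag should be needed; the float16 dtype and 2048-token limit follow from the card above.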
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ... 2 Electrical Engineering GPTQ | 2K / 1.8 GB | 10 | 7 |
| Quantised Phi 2 | 2K / 1.8 GB | 8 | 0 |