| LLM Name | Qwen2 1.5B Gguf Q4 K M |
|---|---|
| Repository 🤗 | https://huggingface.co/inflaton/Qwen2-1.5B-gguf-q4_k_m |
| Model Size | 1.5B |
| Required VRAM | 0.7 GB | 
| Updated | 2024-11-13 | 
| Maintainer | inflaton | 
| Model Type | qwen2 | 
| Model Files | |
| GGUF Quantization | Yes | 
| Quantization Type | gguf, q4, q4_k |
| Model Architecture | AutoModel | 
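Since the repository ships GGUF weights, the model can be run locally with a GGUF-compatible runtime such as llama-cpp-python. Below is a minimal sketch; the tooling choice and the `*q4_k_m.gguf` filename glob are assumptions, so verify the exact file name against the repository's file list.

```python
from llama_cpp import Llama

# Download the GGUF file from the Hugging Face Hub and load it.
# The "*q4_k_m.gguf" glob is an assumption -- check the repo's file list.
llm = Llama.from_pretrained(
    repo_id="inflaton/Qwen2-1.5B-gguf-q4_k_m",
    filename="*q4_k_m.gguf",
    n_ctx=2048,          # context window; adjust as needed
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence greeting."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

At Q4_K_M the weights occupy roughly 0.7 GB (per the Required VRAM row above), so the model should run comfortably on CPU or a small GPU.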
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| MedScholar 1.5B F32 GGUF | 0K / 0.7 GB | 297 | 3 | 
| Qwen2 1.5B Instruct GGUF | 0K / 0.4 GB | 303 | 9 | 
| Qwen2 1.5B Ita V2 | 0K / 0.1 GB | 14 | 0 | 
| Jenna V3 Qwen2 1.5 GGUF Q4 | 0K / 1 GB | 15 | 0 | 
| Jenna V3 Qwen2 1.5 GGUF | 0K / 3.1 GB | 6 | 1 | 
| Qwen2 1.5B | 0K / 1.1 GB | 6 | 0 | 
| Qwen2 1.5B MAC Gguf Q8 0 | 0K / 0.4 GB | 23 | 0 | 
| Qwen2 1.5B MAC Gguf Q4 K M | 0K / 0.3 GB | 22 | 0 | 
| Qwen2 1.5B MAC Gguf F16 | 0K / 3.1 GB | 17 | 0 | 
| Qwen2 1.5B Gguf F16 | 0K / 3.1 GB | 16 | 0 | 