| LLM Name | LLaMa 30B GGML |
|---|---|
| Repository 🤗 | https://huggingface.co/TheBloke/LLaMa-30B-GGML |
| Model Size | 30B |
| Required VRAM | 13.6 GB |
| Updated | 2025-09-23 |
| Maintainer | TheBloke |
| Model Type | llama |
| Model Files | |
| GGML Quantization | Yes |
| Quantization Type | ggml |
| Model Architecture | AutoModel |
| License | other |
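Because this repository ships GGML-format weights, it is normally loaded through a GGML-compatible runtime (such as `ctransformers` or an older llama.cpp build) rather than standard `transformers`. Below is a minimal sketch using `ctransformers`; the quantized filename shown is a placeholder, not confirmed from the repo's file list, so check the repository and adjust `model_file` and `gpu_layers` for your hardware.

```python
# Minimal sketch: loading a GGML LLaMA model via ctransformers (pip install ctransformers).
# Assumption: the model_file name below is hypothetical -- pick an actual quantized
# .bin file from the repository's file list.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/LLaMa-30B-GGML",
    model_file="llama-30b.ggmlv3.q4_0.bin",  # placeholder filename
    model_type="llama",                      # matches the Model Type field above
    gpu_layers=0,                            # raise to offload layers to GPU (~13.6 GB VRAM for full offload)
)

print(llm("Q: What is the capital of France? A:", max_new_tokens=32))
```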
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...e Llama 30B Instruct 2048 GGML | 0K / 13.7 GB | 9 | 20 |
| 30B Epsilon GGML | 0K / 13.7 GB | 13 | 9 |
| Mpt 30B Chat GGML | 0K / 16.9 GB | 19 | 73 |
| Mpt 30B Instruct GGML | 0K / 16.9 GB | 13 | 43 |
| Mpt 30B GGML | 0K / 16.9 GB | 14 | 17 |
| Medalpaca Lora 30B 8bit | 0K / 0.2 GB | 0 | 15 |
| Yayi2 30B Llama GGUF | 0K / 12.9 GB | 219 | 10 |
| LLaMA 30B GGUF | 0K / 13.5 GB | 665 | 5 |
| ...e Llama 30B Instruct 2048 GGUF | 0K / 13.5 GB | 572 | 4 |
| 30B Epsilon GGUF | 0K / 13.5 GB | 512 | 5 |