LLM Name | LLaMa 30B GGML
---|---
Repository 🤗 | https://huggingface.co/TheBloke/LLaMa-30B-GGML
Model Size | 30B
Required VRAM | 13.6 GB
Updated | 2025-08-18
Maintainer | TheBloke
Model Type | llama
Model Files |
GGML Quantization | Yes
Quantization Type | ggml
Model Architecture | AutoModel
License | other
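The 13.6 GB file size listed above can be sanity-checked with a back-of-envelope formula: a quantized model's size is roughly the parameter count times the effective bits per weight, divided by 8. This is a rough sketch; the ~32.5B parameter count for LLaMA-30B and the bits-per-weight values below are assumptions for illustration, not figures taken from this page.

```python
def est_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough quantized-model file size: params * bits / 8, in GB (10^9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# LLaMA-30B actually has ~32.5B parameters (assumed here).
# At ~3.35 effective bits per weight (a q3-class quant), the
# estimate lands close to the 13.6 GB listed in the table:
print(round(est_size_gb(32.5e9, 3.35), 1))  # ~13.6

# A plain 4-bit quant would be noticeably larger:
print(round(est_size_gb(32.5e9, 4.0), 1))   # ~16.2
```

Real GGML files add a small overhead for metadata and unquantized tensors (e.g. embeddings), so the formula gives a floor, not an exact size.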
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
...e Llama 30B Instruct 2048 GGML | 0K / 13.7 GB | 4 | 20 |
30B Epsilon GGML | 0K / 13.7 GB | 2 | 9 |
Mpt 30B Chat GGML | 0K / 16.9 GB | 4 | 73 |
Mpt 30B Instruct GGML | 0K / 16.9 GB | 5 | 43 |
Mpt 30B GGML | 0K / 16.9 GB | 2 | 17 |
Medalpaca Lora 30B 8bit | 0K / 0.2 GB | 0 | 15 |
Yayi2 30B Llama GGUF | 0K / 12.9 GB | 157 | 10 |
LLaMA 30B GGUF | 0K / 13.5 GB | 705 | 5 |
...e Llama 30B Instruct 2048 GGUF | 0K / 13.5 GB | 176 | 4 |
Llama 30B Supercot GGUF | 0K / 13.5 GB | 152 | 0 |