| LLM Name | Llama 2 70B Guanaco QLoRA GGUF |
|---|---|
| Repository 🤗 | https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGUF |
| Model Name | Llama2 70B Guanaco QLoRA |
| Model Creator | Mikael110 |
| Base Model(s) | |
| Model Size | 70B |
| Required VRAM | 29.3 GB |
| Updated | 2025-10-14 |
| Maintainer | TheBloke |
| Model Type | llama |
| Model Files | |
| Supported Languages | en |
| GGUF Quantization | Yes |
| Quantization Type | gguf |
| Model Architecture | AutoModel |
| License | other |
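Since the repository ships GGUF quantizations, a minimal way to try the model locally is with `llama-cpp-python`. The sketch below is an assumption-laden example, not the repo's documented usage: the `.gguf` filename is assumed (the Q2_K file roughly matches the ~29.3 GB figure above), and the `### Human / ### Assistant` prompt format is the one commonly used with Guanaco fine-tunes. Check the repository's file list for exact filenames and other quantization levels.

```python
# Minimal sketch: fetch one quantized file from the repo and run it locally
# with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed filename -- verify against the repo's file list before running.
model_path = hf_hub_download(
    repo_id="TheBloke/llama-2-70b-Guanaco-QLoRA-GGUF",
    filename="llama-2-70b-guanaco-qlora.Q2_K.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # Llama 2 context window
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows; 0 for CPU-only
)

# Guanaco models are commonly prompted in this format.
prompt = "### Human: What is QLoRA?\n### Assistant:"
out = llm(prompt, max_tokens=256, stop=["### Human:"])
print(out["choices"][0]["text"])
```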
| Best Alternatives | Context / RAM | Downloads | Likes | 
|---|---|---|---|
| ...us Qwen3 R1 Llama Distill GGUF | 0K / 0.8 GB | 236 | 2 | 
| KafkaLM 70B German V0.1 GGUF | 0K / 25.5 GB | 3623 | 51 | 
| ...gekit Passthrough Yqhuxcv GGUF | 0K / 16.9 GB | 110 | 0 | 
| CodeLlama 70B Instruct GGUF | 0K / 25.5 GB | 2159 | 60 | 
| CodeLlama 70B Python GGUF | 0K / 25.5 GB | 2351 | 44 | 
| Meta Llama 3 70B Instruct GGUF | 0K / 26.4 GB | 133 | 4 | 
| DAD Model V2 70B Q4 | 0K / 42.5 GB | 10 | 0 | 
| CodeLlama 70B Hf GGUF | 0K / 25.5 GB | 702 | 42 | 
| Llama 2 70B Guanaco QLoRA GGUF | 0K / 29.3 GB | 27 | 0 | 
| Meditron 70B GGUF | 0K / 29.3 GB | 1772 | 21 | 