Hermes 2 Pro 11B GGUF is an open-source language model maintained by MaziyarPanahi. Key figures: 11B parameters, 4.2 GB required VRAM, GGUF-quantized, LLM Explorer Score 0.13.
| Attribute | Value |
|---|---|
| LLM Name | Hermes 2 Pro 11B GGUF |
| Repository 🤗 | https://huggingface.co/MaziyarPanahi/Hermes-2-Pro-11B-GGUF |
| Model Name | Hermes-2-Pro-11B-GGUF |
| Model Creator | mattshumer |
| Base Model(s) | |
| Model Size | 11b |
| Required VRAM | 4.2 GB |
| Updated | 2026-03-29 |
| Maintainer | MaziyarPanahi |
| Model Type | mistral |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | gguf |
| Model Architecture | AutoModel |
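The card lists an 11B-parameter model needing only 4.2 GB of VRAM, which follows from the quantization level. As a rough sketch (the bits-per-weight formula is a common back-of-the-envelope estimate, not something stated on this card), the file size of a GGUF quant is approximately parameters × bits-per-weight ÷ 8:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGUF file/memory size in GB for a given quantization level.

    This ignores per-model overhead (embeddings, metadata, KV cache), so it is
    only a ballpark figure.
    """
    return n_params * bits_per_weight / 8 / 1e9

# The card lists 11B parameters and 4.2 GB required VRAM, which implies
# roughly 3 bits per weight -- consistent with an aggressive low-bit quant.
implied_bits = 4.2e9 * 8 / 11e9
print(f"implied quantization: {implied_bits:.2f} bits/weight")

# For comparison, a hypothetical ~4.5 bits/weight quant of the same model:
print(f"at 4.5 bpw: about {gguf_size_gb(11e9, 4.5):.1f} GB")
```

This kind of estimate is useful for checking whether a given quant of a model will fit in your available VRAM before downloading it.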
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Synaptica GGUF | 0K / 4 GB | 134 | 0 |
| YorkShire11 GGUF | 0K / 2.7 GB | 124 | 0 |
| Mergekit Slerp Oskyrzi GGUF | 0K / 2.7 GB | 66 | 0 |
| Fimburs11V3 GGUF | 0K / 4 GB | 22 | 0 |
| Llama 3 11B Instruct V0.1 GGUF | 0K / 4.5 GB | 291 | 7 |
| Velara 11B V2 GGUF | 0K / 4.8 GB | 281 | 9 |
| ...al 7B Instruct V0.2 Slerp GGUF | 0K / 2.7 GB | 147 | 1 |
| ...al 7B Instruct V0.2 Slerp GGUF | 0K / 2.7 GB | 67 | 2 |
| ... Mistral 7B Instruct V0.1 GGUF | 0K / 2.7 GB | 33 | 1 |
| ...al 7B Instruct V0.2 Slerp GGUF | 0K / 2.7 GB | 59 | 1 |