| LLM Name | Nous Hermes 13B GGUF |
|---|---|
| Repository 🤗 | https://huggingface.co/sirovub/Nous-Hermes-13b-GGUF |
| Base Model(s) | |
| Model Size | 13b |
| Required VRAM | 13.8 GB |
| Updated | 2025-09-23 |
| Maintainer | sirovub |
| Model Type | llama |
| Model Files | |
| Supported Languages | en |
| GGUF Quantization | Yes |
| Quantization Type | gguf\|q8 |
| Model Architecture | LlamaForCausalLM |
| License | gpl |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.29.2 |
| Tokenizer Class | LlamaTokenizer |
| Beginning of Sentence Token | `<s>` |
| End of Sentence Token | `</s>` |
| Unk Token | `<unk>` |
| Vocabulary Size | 32001 |
| Torch Data Type | bfloat16 |
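Because this repository ships GGUF-quantized weights (q8) rather than standard `transformers` checkpoints, a GGUF runtime such as `llama-cpp-python` is the natural way to run it. The sketch below is a minimal, unverified example: the exact `.gguf` filename is an assumption (the repo's file list is not reproduced above), so check the repository before running. The 2048-token context and `</s>` stop token mirror the values in the table.

```python
# Minimal sketch: download and run this GGUF model with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quantized file from the repository.
# NOTE: the filename below is hypothetical -- check the repo's file list.
model_path = hf_hub_download(
    repo_id="sirovub/Nous-Hermes-13b-GGUF",
    filename="nous-hermes-13b.Q8_0.gguf",  # assumed name; q8 per the table above
)

# Context length matches the model's 2048-token maximum from the table.
llm = Llama(model_path=model_path, n_ctx=2048)

# Nous Hermes models are commonly prompted in Alpaca-style format.
output = llm(
    "### Instruction:\nExplain GGUF quantization in one sentence.\n\n### Response:\n",
    max_tokens=128,
    stop=["</s>"],  # EOS token from the table
)
print(output["choices"][0]["text"])
```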
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Llm Compiler 13B Ftd GGUF | 16K / 4.8 GB | 207 | 0 |
| Llm Compiler 13B GGUF | 16K / 4.8 GB | 45 | 0 |
| Llm Compiler 13B GGUF | 16K / 4.8 GB | 27 | 0 |
| Llm Compiler 13B Ftd GGUF | 16K / 4.8 GB | 6 | 0 |
| CodeLlama 13B Instruct GGUF | 16K / 5.4 GB | 1389 | 2 |
| Luminia 13B V3 | 4K / 26 GB | 207 | 6 |
| Mythomax L2 13B Q4 K M GGUF | 4K / 8.1 GB | 4006 | 8 |
| DiarizationLM 13B Fisher V1 | 4K / 26 GB | 134 | 11 |
| HyperLlama2Test | 4K / 26 GB | 5 | 0 |
| ...V2 13B L2 BetaTest Q4 K M GGUF | 4K / 7.9 GB | 5 | 0 |