HuginnV5.5 12.6B is an open-source language model by The-Face-Of-Goonery. It is a 12.6B-parameter LLM requiring 25.8 GB of VRAM, with a 32K context window, released under the cc-by-4.0 license and available in quantized form. Benchmark scores: HF Score 72.9, LLM Explorer Score 0.26, ARC 72, HellaSwag 86.7, MMLU 64.5, TruthfulQA 70.5, WinoGrande 81.3, GSM8K 62.6.
| Property | Value |
|---|---|
| LLM Name | HuginnV5.5 12.6B |
| Repository 🤗 | https://huggingface.co/The-Face-Of-Goonery/HuginnV5.5-12.6B |
| Model Size | 12.6b |
| Required VRAM | 25.8 GB |
| Updated | 2024-07-04 |
| Maintainer | The-Face-Of-Goonery |
| Model Type | mistral |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | gguf |
| Model Architecture | MistralForCausalLM |
| License | cc-by-4.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.36.2 |
| Tokenizer Class | LlamaTokenizer |
| Vocabulary Size | 32000 |
| Torch Data Type | float16 |
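The listed VRAM requirement follows directly from the parameter count and data type: at float16, each of the 12.6B parameters takes 2 bytes. A minimal sketch of that arithmetic (activations and KV cache are extra, so actual usage is somewhat higher than the weights alone):

```python
# Rough weight-memory estimate, assuming the listed 12.6B parameters
# and float16 storage (2 bytes per parameter). This covers weights only;
# activations and the KV cache add to real-world VRAM usage.
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight memory in decimal gigabytes."""
    return n_params * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(12.6e9, 2)  # float16: 2 bytes/parameter
print(f"float16 weights: ~{fp16_gb:.1f} GB")  # ~25.2 GB, near the listed 25.8 GB
```

The small gap between the ~25.2 GB estimate and the listed 25.8 GB is plausibly file and metadata overhead.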
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...al Nemo Instruct 2407 Bnb 4bit | 128K / 8.3 GB | 17568 | 31 |
| ...istral Nemo Base 2407 Bnb 4bit | 128K / 8.3 GB | 3607 | 15 |
| ...uginnV5.5 12.6B 8.0bpw H8 EXL2 | 32K / 13.1 GB | 3 | 1 |
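The EXL2 alternative's "8.0bpw" (bits per weight) explains its smaller footprint: size scales with bits per weight rather than bytes per parameter. A hedged sketch of that estimate, again assuming 12.6B parameters:

```python
# Approximate quantized weight size from a bits-per-weight figure,
# assuming 12.6B parameters. Weights only; the listed 13.1 GB is
# slightly larger, presumably due to embeddings/metadata overhead.
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate quantized weight size in decimal gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

print(f"8.0bpw EXL2: ~{quantized_size_gb(12.6e9, 8.0):.1f} GB")  # ~12.6 GB
```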