| Property | Value |
|---|---|
| LLM Name | Gemma 2 27B It GGUF |
| Repository 🤗 | https://huggingface.co/second-state/gemma-2-27b-it-GGUF |
| Model Name | gemma-2-27b-it |
| Model Creator | |
| Base Model(s) | |
| Model Size | 27b |
| Required VRAM | 10.4 GB |
| Updated | 2025-09-25 |
| Maintainer | second-state |
| Model Type | gemma2 |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | fp16, gguf, q2, q4_k, q5_k |
| Model Architecture | Gemma2ForCausalLM |
| License | gemma |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.42.0.dev0 |
| Vocabulary Size | 256000 |
| Torch Data Type | bfloat16 |
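The listing above shows GGUF quantizations (q2, q4_k, q5_k, fp16) with an 8192-token context window and roughly 10.4 GB of VRAM for the quoted quant. As a minimal sketch of local inference with llama-cpp-python (not the only runtime for this repository), the model could be loaded as follows; the quant filename is an assumption, since the Model Files field above is empty, so check the second-state/gemma-2-27b-it-GGUF repository for the exact names.

```python
from llama_cpp import Llama

# Assumed filename for a q4_k quant from the second-state repo; verify before use.
llm = Llama(
    model_path="gemma-2-27b-it-Q4_K_M.gguf",
    n_ctx=8192,       # matches the model's 8192-token context length
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows (~10.4 GB for this quant)
)

# Chat-style completion; the GGUF metadata supplies the Gemma 2 chat template.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants (q2) trade output quality for lower memory use, while fp16 requires far more than the listed 10.4 GB; pick the file that fits your hardware.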
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...Horizon AI Korean Advanced 27B | 8K / 54.7 GB | 1107 | 0 |
| Gemma 2 27B It GGUF | 8K / 10.4 GB | 69 | 0 |
| Gemma2 Mixed Therapy Ft | 8K / 15.8 GB | 3 | 2 |
| Gemma 2 27B It Bnb 4bit | 8K / 15.8 GB | 1203 | 12 |
| Gemma 2 27B Bnb 4bit | 8K / 15.8 GB | 1338 | 13 |
| Gemma 2 27B It 4bit | 8K / 15.3 GB | 1101 | 8 |
| GEMMA2 27B NLI 16bit | 8K / 54.7 GB | 5 | 0 |
| Gemma 2 27B It 8bit | 8K / 28.8 GB | 91 | 10 |
| Gemma 2 27B It 4bit | 8K / 15.8 GB | 6 | 3 |
| Gemma 2 27B 8bit | 8K / 28.8 GB | 15 | 2 |