| LLM Name | Gemma 2B GGUF |
|---|---|
| Repository 🤗 | https://huggingface.co/hoangton/gemma-2b-GGUF |
| Model Size | 2B |
| Required VRAM | 0.9 GB |
| Updated | 2025-08-18 |
| Maintainer | hoangton |
| Model Type | gemma |
| GGUF Quantization | Yes |
| Quantization Type | gguf |
| Model Architecture | GemmaForCausalLM |
| License | other |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.38.0.dev0 |
| Vocabulary Size | 256000 |
| Torch Data Type | bfloat16 |
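
A minimal sketch of how one might download a quantized file from this repository and run it locally with `llama-cpp-python`. The filename `gemma-2b.Q4_K_M.gguf` is an assumption for illustration; substitute whichever GGUF file the repository actually provides.

```python
# Sketch: fetch a GGUF file from the repo and run it with llama-cpp-python.
# The filename below is hypothetical; check the repository's file list first.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="hoangton/gemma-2b-GGUF",
    filename="gemma-2b.Q4_K_M.gguf",  # hypothetical quant file name
)

# Context window matches the table above (8192 tokens).
llm = Llama(model_path=model_path, n_ctx=8192)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```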
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Gemma 2B It | 8K / 5.1 GB | 231250 | 786 |
| Gemma 2B | 8K / 5.1 GB | 236135 | 1061 |
| Gemma 2B It | 8K / 1.5 GB | 6 | 0 |
| Gemma 2B It | 8K / 5.1 GB | 35 | 1 |
| Gemma Help Tiny Sft | 8K / 5.1 GB | 373 | 1 |
| Gemma 2B T | 8K / 5.1 GB | 6 | 0 |
| Gemma 2B It Code | 8K / 5.1 GB | 12 | 0 |
| Gemma 2B It Q | 8K / 1.6 GB | 54 | 1 |
| Gemma 2B It Q4 K M GGUF | 8K / 1.6 GB | 2402 | 3 |
| GemSUra Edu | 8K / 5 GB | 169 | 0 |