| LLM Name | CodeGemma 2B GPTQ |
|---|---|
| Repository 🤗 | https://huggingface.co/TechxGenus/CodeGemma-2b-GPTQ |
| Model Size | 2b |
| Required VRAM | 3.1 GB |
| Updated | 2025-10-27 |
| Maintainer | TechxGenus |
| Model Type | gemma |
| Model Files | |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | GemmaForCausalLM |
| License | other |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.39.0.dev0 |
| Tokenizer Class | GemmaTokenizer |
| Padding Token | `<eos>` |
| Vocabulary Size | 256000 |
| Torch Data Type | float16 |
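Since the card lists GPTQ quantization with an 8192-token context and roughly 3.1 GB of VRAM required, a loading sketch may be useful. This is an illustration only, assuming a recent `transformers` with GPTQ support (e.g. `optimum` plus `auto-gptq`) and a CUDA GPU; the repository's own README should take precedence over these assumptions.

```python
# Minimal sketch: loading the GPTQ checkpoint with Hugging Face transformers.
# Assumes `transformers`, `torch`, and GPTQ support (`optimum` + `auto-gptq`)
# are installed; exact package requirements are not stated on this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TechxGenus/CodeGemma-2b-GPTQ"  # repository from the table above

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    device_map="auto",  # place the ~3.1 GB of quantized weights on the GPU
)

# Quick smoke test with a code-completion prompt.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the 8192-token limit given under Context Length / Model Max Length covers the prompt and generated tokens together.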
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Gemma 1.1 2B It GPTQ | 8K / 3.1 GB | 15212 | 1 |
| Gemma 2B Gptq 4bit | 8K / 2.1 GB | 11 | 0 |
| Gemma 2B Gptq 8bit | 8K / 3.1 GB | 8 | 0 |
| Gemma 2B GPTQ | 8K / 2.1 GB | 1157 | 1 |
| ... 2B It Hermes Function Calling | 8K / 5.1 GB | 5 | 0 |
| Gemma 1.1 2B It Bnb 4bit | 8K / 2.1 GB | 2321 | 5 |
| Vi Gemma 2B RAG | 8K / 5.1 GB | 112 | 13 |
| Gemma 2B Bnb 4bit | 8K / 2.1 GB | 3962 | 16 |
| STEMerald 2B 4bit | 8K / 2.2 GB | 136 | 1 |
| My AwesomeFinance Model | 8K / 2.1 GB | 5 | 0 |