| Detail | Value |
|---|---|
| LLM Name | CodeGemma 7B GPTQ |
| Repository 🤗 | https://huggingface.co/TechxGenus/CodeGemma-7b-GPTQ |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 7.2 GB |
| Updated | 2025-09-23 |
| Maintainer | TechxGenus |
| Model Type | gemma |
| Model Files | |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | GemmaForCausalLM |
| License | other |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.39.0.dev0 |
| Tokenizer Class | GemmaTokenizer |
| Padding Token | <eos> |
| Vocabulary Size | 256000 |
| Torch Data Type | float16 |
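
Since this checkpoint is a GPTQ-quantized `GemmaForCausalLM`, it can typically be loaded through the standard `transformers` API, which picks up the quantization config stored in the repository when `optimum` and `auto-gptq` are installed. The snippet below is a minimal sketch of that loading path under those assumptions; the prompt text and generation settings are illustrative and not part of the model card.

```python
# Minimal sketch: loading the GPTQ checkpoint with transformers.
# Assumes `transformers`, `accelerate`, `optimum`, and `auto-gptq` are installed,
# and a CUDA GPU with roughly 7.2 GB of free VRAM (see "Required VRAM" above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TechxGenus/CodeGemma-7b-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)  # GemmaTokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the quantized weights on the available GPU
)

# Illustrative prompt; CodeGemma targets code completion/generation.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Keep prompt plus generated tokens well inside the 8192-token context limit.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```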
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Codegemma 7B It GPTQ | 8K / 7.2 GB | 0 | 1 |
| Gemma 1.1 7B It GPTQ | 8K / 7.2 GB | 6 | 1 |
| Google Gemma 7B 8 Bit Gptq | 8K / 9.5 GB | 8 | 0 |
| Google Gemma 7B 4 Bit Gptq | 8K / 5.6 GB | 6 | 1 |
| Gemma 7B Instruct GPTQ 4bit | 8K / 5.6 GB | 6 | 0 |
| Codegemma 1.1 7B It GPTQ | 8K / 7.2 GB | 4 | 1 |
| SeaLLM 7B V2.5 4bit | 8K / 7.2 GB | 6 | 2 |
| Gemma 7B It GPTQ | 8K / 7.2 GB | 7 | 0 |
| Gemma 7B GPTQ | 8K / 7.2 GB | 6 | 0 |
| ...t Cleaner Gemma 32k Merged 16b | 31K / 17.1 GB | 5 | 0 |