| Property | Value |
|---|---|
| LLM Name | Gemma 3 12B It 4bit DWQ |
| Repository 🤗 | https://huggingface.co/mlx-community/gemma-3-12b-it-4bit-DWQ |
| Base Model(s) | |
| Model Size | 12B |
| Required VRAM | 7.2 GB |
| Updated | 2025-09-13 |
| Maintainer | mlx-community |
| Model Type | gemma3 |
| Model Files | |
| Quantization Type | 4bit |
| Model Architecture | Gemma3ForConditionalGeneration |
| License | gemma |
| Transformers Version | 4.50.0.dev0 |
| Tokenizer Class | GemmaTokenizer |
| Padding Token | <pad> |
| Torch Data Type | bfloat16 |
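Since this is an MLX-format checkpoint, it can typically be run with the `mlx-lm` package on Apple silicon. The sketch below follows mlx-lm's documented `load`/`generate` API; it is an illustration, not part of this listing. Note that the first call downloads the ~7.2 GB of quantized weights, and the prompt text is a made-up example.

```python
# Minimal sketch: loading the 4-bit DWQ checkpoint with mlx-lm.
# Assumes `pip install mlx-lm` and an Apple-silicon Mac; the first
# call to load() downloads the weights from the Hugging Face Hub.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gemma-3-12b-it-4bit-DWQ")

# Gemma instruction-tuned models expect the chat template to be applied.
prompt = "Explain 4-bit quantization in one sentence."
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```

The 7.2 GB VRAM figure above corresponds to holding these 4-bit weights in unified memory, so machines with 16 GB or more should run the model comfortably.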
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Fallen Gemma3 12B V1 | 128K / 24.3 GB | 1232 | 20 |
| Gemma 3 R1 12B V1 | 0K / 24.3 GB | 127 | 12 |
| Safeword Casual V1 12B | 0K / 24.3 GB | 26 | 3 |
| Amoral Fallen Omega Gemma3 12B | 0K / 24.2 GB | 9 | 17 |
| ...ga Abomination Gemma3 12B V1.0 | 0K / 24.2 GB | 58 | 9 |
| Oni Mitsubishi 12B | 0K / 23.4 GB | 15 | 35 |
| ...mega Directive Gemma3 12B V1.0 | 0K / 24.3 GB | 22 | 4 |