| LLM Name | VolareQuantized |
| Repository 🤗 | https://huggingface.co/MoxoffSrL/VolareQuantized |
| Model Size | 7b |
| Required VRAM | 5.3 GB |
| Updated | 2025-10-07 |
| Maintainer | MoxoffSrL |
| Model Type | gemma |
| Model Files | |
| Supported Languages | it, en |
| GGML Quantization | Yes |
| GGUF Quantization | Yes |
| Quantization Type | gguf, ggml, q4, q4_k |
| Model Architecture | GemmaForCausalLM |
| License | mit |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.38.0 |
| Vocabulary Size | 256000 |
| Torch Data Type | float16 |
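The card lists GGUF/Q4_K quantization, an 8192-token context window, and roughly 5.3 GB of required VRAM. A minimal loading sketch with llama-cpp-python follows, assuming a Q4_K GGUF artifact in the repository; the filename below is a guess (the Model Files field above is empty), so verify the actual file name on the repository page before running.

```python
# Minimal sketch: download the GGUF file from the Hub and load it with
# llama-cpp-python. Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename -- the card does not list model files, so check
# https://huggingface.co/MoxoffSrL/VolareQuantized for the real artifact name.
model_path = hf_hub_download(
    repo_id="MoxoffSrL/VolareQuantized",
    filename="volare-q4_k.gguf",  # assumption, not confirmed by the card
)

llm = Llama(
    model_path=model_path,
    n_ctx=8192,  # matches the card's 8192 context length / model max length
)

# The model supports Italian and English, per the card.
out = llm("Qual è la capitale d'Italia?", max_tokens=64)
print(out["choices"][0]["text"])
```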
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Gemma 7B Uawiki 1B | 8K / 17.1 GB | 24 | 0 |
| CNCF | 8K / 5.3 GB | 19 | 0 |
| DeepCNCFQuantized | 8K / 5.3 GB | 14 | 1 |
| VolareQuantized | 8K / 5.3 GB | 226 | 1 |
| Gemma 7B It | 8K / 17.1 GB | 160844 | 1213 |
| Gemma 7B | 8K / 17.1 GB | 29740 | 3228 |
| Gemma 1.1 7B It GGUF | 8K / 5.3 GB | 128 | 1 |
| Train06 | 8K / 9.1 GB | 7 | 0 |
| Llama2 Kazakh 7B GGUF | 8K / 4.1 GB | 15 | 0 |
| Gemma 7B | 8K / 17.1 GB | 15 | 0 |
Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!