Gemma3 Konkani 4B is an open-source 4B-parameter language model maintained by Reubencf (LLM Explorer score: 0.22).
| Property | Value |
|---|---|
| LLM Name | Gemma3 Konkani 4B |
| Repository 🤗 | https://huggingface.co/konkani/gemma3-konkani-4B |
| Model Name | gemma3-konkani |
| Base Model(s) | |
| Model Size | 4b |
| Required VRAM | 0 GB |
| Updated | 2025-10-05 |
| Maintainer | Reubencf |
| Model Files | |
| Model Architecture | AutoModel |
| Is Biased | none |
| Tokenizer Class | GemmaTokenizer |
| Padding Token | `<eos>` |
| PEFT Type | LORA |
| LoRA Model | Yes |
| PEFT Target Modules | k_proj, v_proj, o_proj, q_proj |
| LoRA Alpha | 32 |
| LoRA Dropout | 0.1 |
| R Param | 16 |
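The PEFT settings above describe a LoRA adapter: each targeted projection weight is adjusted by a low-rank update scaled by alpha/r. A minimal NumPy sketch of that arithmetic, using the card's hyperparameters (r=16, alpha=32) and illustrative matrix shapes (the real q/k/v/o projection dimensions are not listed here):

```python
import numpy as np

# LoRA replaces a frozen weight W with W' = W + (alpha / r) * B @ A,
# where A (r x d_in) and B (d_out x r) are the small trainable factors.
# Hyperparameters from the model card: r = 16, lora_alpha = 32.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 16, 32   # d_out/d_in are illustrative only

W = rng.normal(size=(d_out, d_in))       # frozen base projection weight
A = rng.normal(size=(r, d_in)) * 0.01    # low-rank factor A (trainable)
B = np.zeros((d_out, r))                 # B starts at zero, so the update is zero

scaling = alpha / r                      # 32 / 16 = 2.0
W_adapted = W + scaling * (B @ A)        # effective weight at inference time

# Before any training, the adapter is a no-op: W' == W.
assert np.allclose(W_adapted, W)
```

This also explains the "Required VRAM: 0 GB" and small file sizes in listings like this: only A and B (rank 16) are stored per target module, not full-size weight deltas.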
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Qwen3 4B Chunky | 0K / 0.3 GB | 19 | 0 |
| Translategemma Tok | 0K / 0.2 GB | 55 | 0 |
| Gemma3 Konkani | 0K / 0 GB | 119 | 5 |
| AYA Mistral7B Instruct TR 4B | 0K / 0.3 GB | 0 | 6 |
| ...emeter LongCoT Qwen3 1.7B GGUF | 0K / 0.8 GB | 839 | 2 |
| II Search 4B GGUF | 0K / 1.7 GB | 751 | 5 |
| ...upyter Agent Qwen3 4B AIO GGUF | 0K / 1.7 GB | 327 | 4 |
| Qwen3 4B Abliterated F32 GGUFs | 0K / 1.7 GB | 397 | 2 |
| Basically Human 4B F32 GGUF | 0K / 1.7 GB | 45 | 2 |
| Chinda Qwen3 4B F32 GGUF | 0K / 1.7 GB | 170 | 2 |