Translategemma Tok by zhoucantd


Translategemma Tok is an open-source language model by zhoucantd: a 4B-parameter LLM (LoRA adapter) requiring 0.2 GB of VRAM, with an LLM Explorer Score of 0.26.

Tags: Arxiv:1910.09700 · Base model:adapter:google/tran... · Base model:google/translategem... · Conversational · Llama-factory · Lora · Peft · Region:us · Safetensors

Translategemma Tok Benchmarks

Benchmark scores show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Translategemma Tok Parameters and Internals

LLM Name: Translategemma Tok
Repository: https://huggingface.co/zhoucantd/translategemma-tok
Base Model(s): google/translategemma-4b-it
Model Size: 4b
Required VRAM: 0.2 GB
Updated: 2026-03-29
Maintainer: zhoucantd
Model Files: 0.2 GB, 0.0 GB
Model Architecture: AutoModel
Is Biased: none
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: self_attn.q_proj, self_attn.k_proj, and self_attn.v_proj on language-model layers 0–33, plus gate_proj, up_proj, down_proj, and o_proj
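The target-module metadata above corresponds roughly to a `peft` LoraConfig like the following. This is a hedged reconstruction from the card, not the adapter's actual configuration; the `adapter_config.json` in the repository is the source of truth.

```python
from peft import LoraConfig

# Approximate LoraConfig implied by the card's PEFT metadata.
# Module names are the short suffixes PEFT matches against; the
# real config may instead list fully-qualified per-layer paths.
config = LoraConfig(
    r=32,            # "R Param" from the card
    lora_alpha=64,   # "LoRA Alpha"
    lora_dropout=0.0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
        "gate_proj", "up_proj", "down_proj",      # MLP projections
    ],
)
```

When `target_modules` is given as suffixes, PEFT applies the adapter to every module whose name ends with one of them, which matches the per-layer expansion listed above.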
LoRA Alpha: 64
LoRA Dropout: 0
R Param (rank): 32
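The hyperparameters above (rank 32, alpha 64, dropout 0) determine both the adapter's scaling factor and why the downloadable files are only ~0.2 GB. A minimal sketch of the arithmetic, using a hypothetical hidden size since the real Gemma projection shapes are not given on the card:

```python
# LoRA scales its low-rank update by alpha / r.
r, alpha = 32, 64
scaling = alpha / r
print(scaling)  # 2.0

# A LoRA adapter for a d_out x d_in projection stores two small
# factors, A (r x d_in) and B (d_out x r), so it adds
# r * (d_in + d_out) parameters instead of d_in * d_out.
d_in = d_out = 2560  # hypothetical hidden size, for illustration only
full_params = d_in * d_out
lora_params = r * (d_in + d_out)
print(f"full: {full_params:,}, lora: {lora_params:,}")
```

Per projection, the adapter here is 2.5% of the full weight matrix, which is consistent with adapter-only files a few hundred MB in size for a 4B-parameter base model.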

Best Alternatives to Translategemma Tok

Best Alternatives                        Context  RAM      Downloads  Likes
Qwen3 4B Chunky                          0K       0.3 GB   19         0
Gemma3 Konkani                           0K       0 GB     119        5
Gemma3 Konkani 4B                        0K       0 GB     119        5
AYA Mistral7B Instruct TR 4B             0K       0.3 GB   0          6
...emeter LongCoT Qwen3 1.7B GGUF        0K       0.8 GB   839        2
II Search 4B GGUF                        0K       1.7 GB   751        5
...upyter Agent Qwen3 4B AIO GGUF        0K       1.7 GB   327        4
Qwen3 4B Abliterated F32 GGUFs           0K       1.7 GB   397        2
Basically Human 4B F32 GGUF              0K       1.7 GB   45         2
Chinda Qwen3 4B F32 GGUF                 0K       1.7 GB   170        2


Original data from HuggingFace, OpenCompass and various public git repos.