EpistemeAI Codegemma 2 9B by EpistemeAI


EpistemeAI Codegemma 2 9B is an open-source language model by EpistemeAI. Key specs: 9B-parameter LLM, 18.6 GB required VRAM, 8K context, gemma license, quantized (4-bit), LLM Explorer Score: 0.15.

Tags: 4bit · autotrain compatible · base model (finetune): unsloth/ge... · base model: unsloth/gemma-2-9b-... · en · endpoints compatible · gemma2 · pytorch · quantized · region: us · safetensors · sharded · tensorflow · trl · unsloth

EpistemeAI Codegemma 2 9B Benchmarks

Benchmark score (percentage): how the model compares to the reference models Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4"). No score is currently listed for this model.

EpistemeAI Codegemma 2 9B Parameters and Internals

Model Type: text generation, text classification

Use Cases
Areas: Content Creation and Communication; Research and Education
Applications: Chatbots and Conversational AI; Text Summarization; Code Generation
Primary Use Cases: Text Generation; Language Learning Tools
Limitations: biases due to training data; handling of open-ended tasks

Additional Notes: trained 2x faster using Unsloth and Hugging Face's TRL library.

Supported Languages: English (primary)

Training Details
Data Sources: web documents, code, mathematics
Data Volume: 8 trillion tokens
Methodology: fine-tuning with specific code instructions
Hardware Used: TPUv5p

Responsible AI Considerations
Fairness: models underwent data pre-processing and post-hoc evaluations for bias.
Transparency: details are summarized in the model card.
Mitigation Strategies: continuous monitoring and de-biasing techniques during model development.

Input / Output
Input Format: a text string, such as a question, a prompt, or a document to be summarized
Accepted Modalities: text
Output Format: generated English-language text in response to the input
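Gemma-family checkpoints are commonly prompted with `<start_of_turn>`/`<end_of_turn>` control tokens; whether this particular fine-tune expects that chat format (rather than a plain text prompt) is an assumption, so the helper below is an illustrative sketch of how an input string would typically be prepared:

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in the Gemma-style chat-turn format.

    The <start_of_turn>/<end_of_turn> markers are the standard Gemma
    control tokens; using them with this specific fine-tune is an
    assumption -- a plain text string also works as input.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("Write a Python function that reverses a string.")
print(prompt)
```

The trailing `<start_of_turn>model\n` leaves the model positioned to generate its reply as the next turn.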
LLM Name: EpistemeAI Codegemma 2 9B
Repository: 🤗 https://huggingface.co/EpistemeAI/EpistemeAI-codegemma-2-9b
Base Model(s): Gemma 2 9B Bnb 4bit (unsloth/gemma-2-9b-bnb-4bit)
Model Size: 9B
Required VRAM: 18.6 GB
Updated: 2025-09-23
Maintainer: EpistemeAI
Model Type: gemma2
Model Files: 4.9 GB (1-of-4), 5.0 GB (2-of-4), 5.0 GB (3-of-4), 3.7 GB (4-of-4)
Supported Languages: en
Quantization Type: 4bit
Model Architecture: Gemma2ForCausalLM
License: gemma
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.44.0
Tokenizer Class: GemmaTokenizer
Padding Token: <pad>
Vocabulary Size: 256000
Torch Data Type: float16
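The "Required VRAM" figure above can be cross-checked against the shard sizes and dtype listed in the same table; a quick sketch (the only inputs are numbers taken from this page):

```python
# Sanity-check the listed VRAM figure against the safetensors shard sizes.
shards_gb = [4.9, 5.0, 5.0, 3.7]        # the four "Model Files" shards
total_gb = round(sum(shards_gb), 1)     # matches "Required VRAM: 18.6 GB"

# Rough cross-check: 9B parameters * 2 bytes (float16) ~= 18 GB of
# weights, consistent with the 18.6 GB figure (the remainder is
# embeddings rounding, metadata, and overhead).
approx_weights_gb = 9e9 * 2 / 1e9

print(total_gb, approx_weights_gb)
```

Note that at inference time actual VRAM use is higher than the weight footprint alone, since activations and the KV cache also occupy memory.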

Best Alternatives to EpistemeAI Codegemma 2 9B

Best Alternatives | Context / RAM | Downloads / Likes
GWQ 9B Preview | 8K / 18.6 GB | 163
Gemma 2 9B 4bit | 8K / 5.2 GB | 2242
Gemma 2 9B It Bnb 4bit | 8K / 6.1 GB | 1809731
SASTRI 1 9B | 8K / 6.1 GB | 60
Gemma 2 9B Bnb 4bit | 8K / 6.1 GB | 1054331
Gemma 2 9B It Finance | 8K / 18.6 GB | 60
Athena Gemma 2 2B It | 8K / 23.8 GB | 02
Gemma 2 9B It 4bit | 8K / 5.2 GB | 152372
Gemma 2 9B It Ko RAG Bnb 4bit | 8K / 6.1 GB | 1550
...ma 2 9B It Ko ChatRAG Bnb 4bit | 8K / 6.1 GB | 50
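A practical way to use the alternatives table is to filter it by the VRAM you have available. The sizes below are copied from the "Context / RAM" column; the helper function itself is illustrative and not part of the site:

```python
# RAM requirements (GB) copied from the alternatives table above.
alternatives = {
    "GWQ 9B Preview": 18.6,
    "Gemma 2 9B 4bit": 5.2,
    "Gemma 2 9B It Bnb 4bit": 6.1,
    "SASTRI 1 9B": 6.1,
    "Gemma 2 9B Bnb 4bit": 6.1,
    "Gemma 2 9B It Finance": 18.6,
    "Athena Gemma 2 2B It": 23.8,
    "Gemma 2 9B It 4bit": 5.2,
}

def fits(budget_gb: float) -> list[str]:
    """Return the models whose listed RAM requirement is within budget."""
    return sorted(name for name, gb in alternatives.items() if gb <= budget_gb)

print(fits(8.0))
```

For example, on an 8 GB GPU only the 4-bit quantized variants (5.2-6.1 GB) fit, while the 18.6 GB float16 models do not.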



Original data from HuggingFace, OpenCompass and various public git repos.