Apollo 1 4B by apexion-ai



Apollo 1 4B Benchmarks

nn.n% — how the model's score compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Apollo 1 4B (NoemaResearch/Apollo-1-4B)

Apollo 1 4B Parameters and Internals

LLM Name: Apollo 1 4B
Repository: https://huggingface.co/NoemaResearch/Apollo-1-4B
Base Model(s): Qwen/Qwen3-4B
Model Size: 4B
Required VRAM: 8.1 GB
Updated: 2025-09-23
Maintainer: apexion-ai
Model Type: qwen3
Model Files: 5.0 GB (1-of-2), 3.1 GB (2-of-2)
Supported Languages: en fr pt de ro sv da bg ru cs el uk es nl sk hr pl lt nb nn fa sl gu lv it oc ne mr be sr lb as cy sd ga fo hi pa bn or tg yi sc gl ca is sq li af mk si ur bs hy zh my ar he mt id ms tl jv su ta te kn ml tr az uz kk ba tt th lo fi et hu vi km ja ko ka eu ht sw
Model Architecture: Qwen3ForCausalLM
License: other
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.51.3
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: float16
Errors: replace
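The 8.1 GB VRAM figure above matches the sum of the two model files (5.0 GB + 3.1 GB) and follows from the float16 data type: roughly two bytes per parameter. A minimal sketch of that estimate (the ~4.02B parameter count is an assumption taken from the Qwen3-4B base model, not stated on this page):

```python
def estimate_model_size_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough weight size in decimal GB.

    bytes_per_param defaults to 2 for float16, the dtype listed above.
    This counts weights only; activations and the KV cache add to
    real VRAM use at inference time.
    """
    return n_params * bytes_per_param / 1e9

# Assumed parameter count (~4.02B, from the Qwen3-4B base model card).
print(round(estimate_model_size_gb(4.02e9), 1))  # ~8.0, close to the 8.1 GB listed
```

The same arithmetic explains why the FP8 variants in the table below need only about 5.2 GB: one byte per parameter instead of two.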

Best Alternatives to Apollo 1 4B

Best Alternatives | Context / RAM | Downloads / Likes
Qwen3 4B Instruct 2507 | 256K / 8.1 GB | 1473674320
Qwen3 4B Thinking 2507 | 256K / 8.1 GB | 348927396
Jan V1 4B | 256K / 8.1 GB | 5605336
Qwen3 4B Thinking 2507 FP8 | 256K / 5.2 GB | 20887035
WEBGEN 4B Preview | 256K / 8.1 GB | 147680
Qwen3 4B Instruct 2507 FP8 | 256K / 5.2 GB | 4154231
Qwen3 4B Instruct 2507 | 256K / 8.1 GB | 4428910
Jan V1.2509 | 256K / 8.1 GB | 40528
Qwen3 4B Thinking 2507 | 256K / 8.1 GB | 75035
DistilGPT OSS Qwen3 4B | 256K / 8.1 GB | 22417
Note: a green score (e.g. "73.2") means that the model is better than NoemaResearch/Apollo-1-4B.

Rank the Apollo 1 4B Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs.

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124