Apollo 1 4B by apexion-ai



Apollo 1 4B Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Apollo 1 4B (NoemaResearch/Apollo-1-4B)

Apollo 1 4B Parameters and Internals

LLM Name: Apollo 1 4B
Repository: 🤗 https://huggingface.co/NoemaResearch/Apollo-1-4B
Base Model(s): Qwen/Qwen3-4B
Model Size: 4B
Required VRAM: 8.1 GB
Updated: 2025-08-30
Maintainer: apexion-ai
Model Type: qwen3
Model Files: 5.0 GB (1-of-2), 3.1 GB (2-of-2)
Supported Languages: en
Model Architecture: Qwen3ForCausalLM
License: cc-by-nc-sa-4.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.51.3
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: float16
Errors: replace
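The listed figures are mutually consistent, and a quick back-of-the-envelope sketch can confirm it: a 4B-parameter model stored in float16 (2 bytes per weight) needs roughly 8 GB for weights alone, the two safetensors shards (5.0 GB + 3.1 GB) sum to the stated 8.1 GB "Required VRAM", and the 131072-token context length is exactly 128K. The constants below are taken from the listing above; this is an illustrative check, not part of the model's own tooling.

```python
# Sanity-check the spec-sheet numbers above (values copied from the listing).

SHARD_SIZES_GB = [5.0, 3.1]   # "Model Files": 1-of-2 and 2-of-2
BYTES_PER_PARAM = 2           # float16 = 2 bytes per weight
PARAMS_BILLIONS = 4           # "Model Size: 4B"
CONTEXT_TOKENS = 131072       # "Context Length"

# Raw weight footprint: parameters (in billions) x bytes per parameter ~ GB.
weights_gb = PARAMS_BILLIONS * BYTES_PER_PARAM

# Total on-disk size of the sharded checkpoint.
total_shards_gb = sum(SHARD_SIZES_GB)

print(f"weights ~ {weights_gb} GB, shards = {total_shards_gb:.1f} GB")
print(f"context = {CONTEXT_TOKENS} tokens = {CONTEXT_TOKENS // 1024}K")
```

Note that the 8.1 GB figure covers weights only; serving at long context lengths needs additional VRAM for the KV cache and activations.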

Best Alternatives to Apollo 1 4B

Best Alternatives                        Context / RAM      Downloads / Likes
Qwen3 4B Instruct 2507                   256K / 8.1 GB      562208249
Qwen3 4B Thinking 2507                   256K / 8.1 GB      155952335
Jan V1 4B                                256K / 8.1 GB      10426317
Qwen3 4B Thinking 2507 FP8               256K / 5.2 GB      12954026
Qwen3 4B Instruct 2507 FP8               256K / 5.2 GB      2096626
Qwen3 4B Instruct 2507                   256K / 8.1 GB      184358
...4B Thinking 2507 DAG Reasoning        256K / 16.1 GB     89374
Qwen3 4B Thinking 2507                   256K / 8.1 GB      51083
Test7                                    256K / 8.1 GB      4730
... Instruct 2507 Gabliterated V1        256K / 8.1 GB      3947
Note: a green score (e.g. "73.2") means that the model is better than NoemaResearch/Apollo-1-4B.

Rank the Apollo 1 4B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Looking for other open-source LLMs or SLMs? The catalog lists 50998 models in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124