Vanta Research Apollo V1 4B by unmodeled-tyler


Tags: Apollo, Autotrain compatible, Base model (adapter): microsoft/p..., Base model: microsoft/phi-4-min..., Conversational, Custom code, Dataset: anthropic/hh-rlhf, Dataset: eleutherai/hendrycks m..., Dataset: nvidia/helpsteer2, Dataset: nvidia/opensciencereas..., Dataset: openai/collective-alig..., En, Endpoints compatible, Instruct, Lora, Parameter-efficient, Phi3, Phi4, Reasoning, Region: us, Safetensors, Sharded, Tensorflow, Vanta-research
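The Lora / Parameter-efficient / adapter tags indicate that Apollo was trained as a LoRA adaptation of microsoft/Phi-4-mini-instruct. If the repository also publishes standalone adapter weights (an assumption; the file listing further down shows a full merged checkpoint, which can be loaded directly as in the later example), a peft-based load could look like this sketch:

```python
# Hypothetical sketch: overlaying this repo as a LoRA adapter on its base
# model with peft. Assumes adapter weights are published separately; if only
# the merged checkpoint exists, load the repo ID directly instead (see below).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-4-mini-instruct"
adapter_id = "unmodeled-tyler/vanta-research-apollo-v1-4b"

base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # applies the LoRA weights
tokenizer = AutoTokenizer.from_pretrained(base_id)
```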

Vanta Research Apollo V1 4B Benchmarks

Benchmark scores are reported as a percentage showing how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Model: Vanta Research Apollo V1 4B (unmodeled-tyler/vanta-research-apollo-v1-4b)

Vanta Research Apollo V1 4B Parameters and Internals

LLM Name: Vanta Research Apollo V1 4B
Repository (🤗): https://huggingface.co/unmodeled-tyler/vanta-research-apollo-v1-4b
Base Model(s): Phi 4 Mini Instruct (microsoft/Phi-4-mini-instruct)
Model Size: 4B
Required VRAM: 7.7 GB
Updated: 2025-09-14
Maintainer: unmodeled-tyler
Model Type: phi3
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-2), 2.8 GB (2-of-2)
Supported Languages: en
Model Architecture: Phi3ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.49.0
Tokenizer Class: GPT2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 200064
Torch Data Type: float16
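At float16 (2 bytes per parameter), a 4B-parameter model takes roughly 4 × 10⁹ × 2 bytes ≈ 8 GB of weights, which lines up with the 7.7 GB of sharded safetensors and the stated VRAM requirement. A minimal loading sketch with Hugging Face transformers follows; the repo ID, dtype, and architecture come from the listing above, while the prompt, generation settings, and the trust_remote_code flag (suggested by the Custom code tag) are assumptions.

```python
# Minimal sketch: loading Vanta Research Apollo V1 4B with Hugging Face
# transformers. Repo ID, dtype, and architecture come from the listing above;
# the prompt and generation parameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "unmodeled-tyler/vanta-research-apollo-v1-4b"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,   # matches the float16 checkpoint
    device_map="auto",           # place shards across available devices
    trust_remote_code=True,      # the listing carries a "Custom code" tag
)

# Phi-style instruct models typically ship a chat template in the tokenizer.
messages = [{"role": "user", "content": "Summarize what a LoRA adapter is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that the 131072-token context length covers prompt plus completion, and long-context use needs substantially more memory than the weights alone because of the KV cache.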

Best Alternatives to Vanta Research Apollo V1 4B

Best Alternatives                      Context / RAM    Downloads  Likes
Calme 2.1 Phi3.5 4B                    128K / 7.7 GB    18         4
AURORAV0.3 4B                          128K / 7.7 GB    25         3
Lunar 4B                               128K / 7.7 GB    5          1
Phi3.5 Gutenberg 4B                    128K / 7.7 GB    9          5
Calme 2.1 Phi3 4B                      4K / 7.7 GB      5          1
Calme 2.2 Phi3 4B                      4K / 7.7 GB      9          2
Calme 2.3 Phi3 4B                      4K / 7.7 GB      8          9
Ruphi 4B                               128K / 7.6 GB    7          0
Phi 3.5 Mini Hyper                     128K / 7.7 GB    5          0
...Instruct V0.3 HQQ 1bit Smashed      4K / 0.9 GB      5          0

Rank the Vanta Research Apollo V1 4B Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124