Phi 2 MoE Test by AbstractPerspective


Tags: Merged Model, Arxiv:1910.09700, Autotrain compatible, Custom code, Endpoints compatible, MoE, phi-msft, Region: US, Safetensors, Sharded, TensorFlow

Phi 2 MoE Test Benchmarks

nn.n% scores show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Phi 2 MoE Test (AbstractPerspective/Phi-2_MoE_test)

Phi 2 MoE Test Parameters and Internals

LLM Name: Phi 2 MoE Test
Repository 🤗: https://huggingface.co/AbstractPerspective/Phi-2_MoE_test
Merged Model: Yes
Model Size: 4.5B
Required VRAM: 8.9 GB
Updated: 2025-09-14
Maintainer: AbstractPerspective
Model Type: phi-msft
Model Files: 5.0 GB (shard 1 of 2), 3.9 GB (shard 2 of 2)
Model Architecture: PhiForCausalLM
Model Max Length: 2048
Transformers Version: 4.37.0
Tokenizer Class: CodeGenTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 51200
Torch Data Type: float16
Activation Function: gelu_new
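
Because the card lists a custom phi-msft model type, custom code, and a float16 checkpoint, loading it requires remote code support and roughly 9 GB of memory (4.5B parameters at 2 bytes each is consistent with the listed 8.9 GB of required VRAM). The snippet below is a minimal loading sketch, assuming the repository follows the usual Transformers remote-code convention; the "Instruct/Output" prompt format is borrowed from standard Phi-2 usage and is an assumption, not something stated on this card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AbstractPerspective/Phi-2_MoE_test"

# The CodeGenTokenizer and custom phi-msft modeling code ship with the repo,
# so trust_remote_code=True is needed (assumption based on the "Custom code" tag).
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the card's Torch Data Type
    trust_remote_code=True,
    device_map="auto",
)

# Hypothetical prompt in the common Phi-2 instruct style.
prompt = "Instruct: Explain mixture-of-experts routing in one sentence.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)  # model max length is 2048 tokens
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Keep the combined prompt and generated text under the 2048-token maximum length listed above; the Transformers version shown (4.37.0) is presumably the release the checkpoint was saved with.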

Best Alternatives to Phi 2 MoE Test

Best Alternatives            Context / RAM    Downloads   Likes
Phixtral 2x2 8               0K / 8.9 GB      1314        9
Samantha Phixtral 2x2 8      0K / 8.9 GB      3           1
Phixtral 4x2 8odd            0K / 8.9 GB      4           3
Note: a green score (e.g. "73.2") means that the alternative model performs better than AbstractPerspective/Phi-2_MoE_test.

Rank the Phi 2 MoE Test Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

What open-source LLMs or SLMs are you in search of? 51,368 models are indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124