Phixtral 4x2 8 GPTQ by TheBloke


Tags: 4-bit · GPTQ · quantized · MoE · conversational · custom code · autotrain compatible · phi-msft · safetensors · en · region:us
Base model (quantized): mlabonne/phixtral-4x2_8

Phixtral 4x2 8 GPTQ Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
Phixtral 4x2 8 GPTQ (TheBloke/phixtral-4x2_8-GPTQ)

Phixtral 4x2 8 GPTQ Parameters and Internals

Model Type 
phi-msft
Additional Notes 
The base model was assembled with mergekit (Mixtral branch). This GPTQ repository provides multiple quantisation parameter sets so users can pick the best trade-off between VRAM usage and inference quality for their hardware.
Supported Languages 
en (unknown)
Training Details 
Data Sources:
cognitivecomputations/dolphin-2_6-phi-2, lxuechen/phi-2-dpo, Yhyu13/phi-2-sft-dpo-gpt4_en-ep1, mrm8488/phi-2-coder
Methodology:
Mixture of Experts (MoE) built from four microsoft/phi-2 fine-tunes, inspired by the mistralai/Mixtral-8x7B-v0.1 architecture (a routing sketch follows below)
Hardware Used:
Massed Compute
Model Architecture:
Mixture of Experts (MoE)
Input Output 
Input Format:
'{prompt}'
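In a Mixtral-style MoE of this kind, the attention stack is shared and each token is routed through a small number of MLP experts whose outputs are mixed by a learned router. Below is a minimal, hypothetical PyTorch sketch of that top-k dispatch-and-mix arithmetic; the hidden size (2560) and gelu_new activation match phi-2, but the class name, top_k value, and expert layout are illustrative assumptions, not phixtral's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEMLP(nn.Module):
    """Toy top-k routed mixture of MLP experts, in the spirit of
    phixtral/Mixtral: attention is shared elsewhere; here a learned
    router mixes the outputs of the selected MLP experts per token."""

    def __init__(self, hidden_size: int = 2560, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.GELU(approximate="tanh"),  # "gelu_new", as in phi-2
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_size)
        scores = self.router(x)                        # (b, s, num_experts)
        weights, chosen = scores.topk(self.top_k, -1)  # both (b, s, top_k)
        weights = F.softmax(weights, dim=-1)           # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[..., slot] == e          # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out


# Smoke test on random activations.
moe = MoEMLP()
h = torch.randn(1, 8, 2560)
print(moe(h).shape)  # torch.Size([1, 8, 2560])
```

In the real model the expert MLP weights come from the four fine-tunes and the router is initialized by mergekit; the sketch only illustrates the routing arithmetic.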
LLM Name: Phixtral 4x2 8 GPTQ
Repository 🤗: https://huggingface.co/TheBloke/phixtral-4x2_8-GPTQ
Model Name: Phixtral 4x2 8
Model Creator: Maxime Labonne
Base Model(s): Phixtral 4x2 8 (mlabonne/phixtral-4x2_8)
Model Size: 1.3b
Required VRAM: 4.5 GB
Updated: 2025-09-17
Maintainer: TheBloke
Model Type: phi-msft
Model Files: 4.5 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: PhiForCausalLM
License: mit
Model Max Length: 2048
Transformers Version: 4.37.0.dev0
Tokenizer Class: CodeGenTokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 51200
Torch Data Type: float16
Activation Function: gelu_new
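For reference, here is a minimal, hypothetical loading-and-generation sketch against the repository above using the 🤗 Transformers API. It assumes a GPTQ backend such as auto-gptq is installed; trust_remote_code=True is needed because the phi-msft model type ships custom modeling code. The prompt string is illustrative.

```python
# Hypothetical usage sketch, not official documentation.
# Assumes transformers >= 4.37 and a GPTQ backend (e.g. auto-gptq).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/phixtral-4x2_8-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # place the ~4.5 GB of weights automatically
    trust_remote_code=True,  # phi-msft uses custom modeling code
)

# Per the card, the input format is the raw '{prompt}' string, no chat template.
prompt = "Explain GPTQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

TheBloke's GPTQ repositories typically expose alternative quantisation parameter sets as git branches; if so here, passing revision="..." to from_pretrained selects one.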


Original data from Hugging Face, OpenCompass, and various public Git repositories.
Release v20241124