Deepseek V3.1 MLX 5.5bit by inferencerlabs


Tags: 5-bit · Base model: deepseek-ai/DeepSeek-V3.1 · Base model (quantized): deepseek-ai/DeepSeek-V3.1 · Conversational · Custom code · deepseek_v3 · MLX · Region: US · Safetensors · Sharded · TensorFlow

Deepseek V3.1 MLX 5.5bit Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Deepseek V3.1 MLX 5.5bit (inferencerlabs/deepseek-v3.1-MLX-5.5bit)

Deepseek V3.1 MLX 5.5bit Parameters and Internals

LLM Name: Deepseek V3.1 MLX 5.5bit
Repository: 🤗 https://huggingface.co/inferencerlabs/deepseek-v3.1-MLX-5.5bit
Base Model(s): deepseek-ai/DeepSeek-V3.1
Model Size: 671b
Required VRAM: 231 GB
Updated: 2025-08-24
Maintainer: inferencerlabs
Model Type: deepseek_v3
Model Files: 4.5 GB: 1-of-88, 5.3 GB: 2-of-88, 5.2 GB: 3-of-88, 5.3 GB: 4-of-88, 5.3 GB: 5-of-88, 5.2 GB: 6-of-88, 5.3 GB: 7-of-88, 5.3 GB: 8-of-88, 5.2 GB: 9-of-88, 5.3 GB: 10-of-88, 5.3 GB: 11-of-88, 5.2 GB: 12-of-88, 5.3 GB: 13-of-88, 5.3 GB: 14-of-88, 5.2 GB: 15-of-88, 5.3 GB: 16-of-88, 5.3 GB: 17-of-88, 5.2 GB: 18-of-88, 5.3 GB: 19-of-88, 5.3 GB: 20-of-88, 5.2 GB: 21-of-88, 5.3 GB: 22-of-88, 5.3 GB: 23-of-88, 5.2 GB: 24-of-88, 5.3 GB: 25-of-88, 5.3 GB: 26-of-88, 5.2 GB: 27-of-88, 5.3 GB: 28-of-88, 5.3 GB: 29-of-88, 5.2 GB: 30-of-88, 5.3 GB: 31-of-88, 5.3 GB: 32-of-88, 5.2 GB: 33-of-88, 5.3 GB: 34-of-88, 5.3 GB: 35-of-88, 5.2 GB: 36-of-88, 5.3 GB: 37-of-88, 5.3 GB: 38-of-88, 5.2 GB: 39-of-88, 5.3 GB: 40-of-88, 5.3 GB: 41-of-88, 5.2 GB: 42-of-88, 5.3 GB: 43-of-88, 5.3 GB: 44-of-88
Model Architecture: DeepseekV3ForCausalLM
License: mit
Context Length: 163840
Model Max Length: 163840
Transformers Version: 4.44.2
Vocabulary Size: 129280
Torch Data Type: bfloat16
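For readers who want to try this quantization, here is a minimal usage sketch with the mlx-lm package. This is an illustrative assumption, not maintainer-provided instructions: it assumes `pip install mlx-lm` on an Apple Silicon Mac with enough unified memory for the listed shards, and the expensive calls are guarded behind a flag so the snippet is safe to import anywhere.

```python
# Hypothetical sketch: loading this MLX quantization with the mlx-lm package.
# REPO_ID comes from the Repository row above; RUN_MODEL is a guard so the
# snippet does nothing expensive unless you flip it on capable hardware.
REPO_ID = "inferencerlabs/deepseek-v3.1-MLX-5.5bit"

RUN_MODEL = False  # set to True on an Apple Silicon machine with sufficient memory

if RUN_MODEL:
    from mlx_lm import load, generate

    model, tokenizer = load(REPO_ID)
    reply = generate(model, tokenizer, prompt="Explain MoE routing briefly.", max_tokens=128)
    print(reply)
```

The `load`/`generate` pair is mlx-lm's standard high-level API; the prompt and `max_tokens` value are arbitrary placeholders.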

Best Alternatives to Deepseek V3.1 MLX 5.5bit

Best Alternatives | Context / RAM | Downloads | Likes
R1 1776 | 160K / 243.4 GB | 73342 | 2303
DeepSeek Prover V2 671B | 160K / 189 GB | 1803 | 810
...o V2 Preview Deepseek 671B MoE | 160K / 219.9 GB | 3933 | 2
MAI DS R1 | 160K / 1346.5 GB | 1044 | 284
DeepSeek Prover V2 671B | 160K / 184.7 GB | 10 | 3
DeepSeek V3.1 8bit | 160K / 177.8 GB | 678 | 3
DeepSeek V3.1 4bit | 160K / 190.1 GB | 656 | 4
DeepSeek V3.1 Base 4bit | 160K / 190.1 GB | 1224 | 2
DeepSeek R1 0528 5bit | 160K / 231 GB | 183 | 3
DeepSeek Prover V2 671B GGUF | 160K / 250 GB | 228 | 28
Note: a green score (e.g. "73.2") means that model is better than inferencerlabs/deepseek-v3.1-MLX-5.5bit.

Rank the Deepseek V3.1 MLX 5.5bit Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124