Ling 2.6 Flash MLX 9bit by inferencerlabs


Ling 2.6 Flash MLX 9bit is an open-source language model by inferencerlabs. It requires 117.2 GB of VRAM, supports a 128K context window, and is quantized to 9 bits.

Tags: 9bit · bailing_hybrid · Base model: inclusionai/ling-2.... · Base model (quantized): inclusion... · conversational · custom code · en · MLX · quantized · region: us · sharded · tensorflow

Ling 2.6 Flash MLX 9bit Parameters and Internals

LLM Name: Ling 2.6 Flash MLX 9bit
Repository: https://huggingface.co/inferencerlabs/Ling-2.6-flash-MLX-9bit
Base Model(s): inclusionAI/Ling-2.6-flash
Required VRAM: 117.2 GB
Updated: 2026-05-03
Maintainer: inferencerlabs
Model Type: bailing_hybrid
Model Files: 9.7 GB (1-of-12), 10.0 GB (2-of-12), 9.8 GB (3-of-12), 10.0 GB (4-of-12), 10.0 GB (5-of-12), 9.8 GB (6-of-12), 10.0 GB (7-of-12), 10.0 GB (8-of-12), 9.8 GB (9-of-12), 10.0 GB (10-of-12), 10.0 GB (11-of-12), 8.1 GB (12-of-12)
Supported Languages: en
Quantization Type: 9bit
Model Architecture: BailingMoeV2_5ForCausalLM
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.56.2
Tokenizer Class: TokenizersBackend
Padding Token: <|endoftext|>
Vocabulary Size: 157184
Torch Data Type: bfloat16
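As a sanity check, the twelve shard sizes listed above sum to the stated VRAM requirement, and the 9-bit quantization allows a back-of-the-envelope parameter estimate. The estimate below is an assumption on our part: it treats the entire file size as 9-bit weights and ignores quantization scales and other overhead, so read it as a rough figure rather than the model's official parameter count.

```python
# Shard sizes in GB, copied from the Model Files listing above.
shards = [9.7, 10.0, 9.8, 10.0, 10.0, 9.8,
          10.0, 10.0, 9.8, 10.0, 10.0, 8.1]

total_gb = sum(shards)
print(f"Total on-disk size: {total_gb:.1f} GB")  # → 117.2 GB, matching Required VRAM

# Rough parameter estimate: GB * 8 bits/byte / ~9 bits per weight.
# Ignores per-group scales/biases, so this is only an approximation.
approx_params_b = total_gb * 8 / 9
print(f"~{approx_params_b:.0f}B parameters")

# The listed context length of 131072 tokens is exactly 128K.
assert 131072 == 128 * 1024
```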

Rank the Ling 2.6 Flash MLX 9bit Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a