Karakuri Lm 8x7b Chat V0.1 8bit by mlx-community


Karakuri Lm 8x7b Chat V0.1 8bit is an open-source language model published by mlx-community. Key specs: 13.1b parameters, required VRAM 49.8 GB, 32K context length, apache-2.0 license, Mixture-of-Experts (MoE), 8-bit quantized, LLM Explorer Score 0.13.

Tags: 8bit, base model (finetune): tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1, base model: tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1, conversational, dataset: nvidia/helpsteer, dataset: openassistant/oasst2, en, endpoints compatible, ja, mixtral, mlx, model-index, moe, quantized, region: us, safetensors, sharded, steerlm, tensorflow

Karakuri Lm 8x7b Chat V0.1 8bit Benchmarks

Benchmark scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Karakuri Lm 8x7b Chat V0.1 8bit (mlx-community/karakuri-lm-8x7b-chat-v0.1-8bit)

Karakuri Lm 8x7b Chat V0.1 8bit Parameters and Internals

Model Type: text generation
Additional Notes: This model was converted to MLX format using mlx-lm version 0.12.1 (a conversion sketch follows below).
Supported Languages: en (unknown proficiency), ja (unknown proficiency)
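For readers who want to reproduce that conversion step, the following is a minimal sketch of an 8-bit conversion with mlx-lm's Python API. The upstream source repo (karakuri-ai/karakuri-lm-8x7b-chat-v0.1) and the output path are assumptions for illustration; this page only documents the resulting MLX repo.

```python
# Hedged sketch: 8-bit MLX conversion with mlx-lm (~0.12.x API).
from mlx_lm.convert import convert

convert(
    hf_path="karakuri-ai/karakuri-lm-8x7b-chat-v0.1",  # assumed upstream repo
    mlx_path="karakuri-lm-8x7b-chat-v0.1-8bit",        # local output directory
    quantize=True,    # quantize weights instead of keeping bfloat16
    q_bits=8,         # 8-bit quantization, matching this model's tag
    q_group_size=64,  # mlx-lm's default quantization group size
)
```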
LLM Name: Karakuri Lm 8x7b Chat V0.1 8bit
Repository: https://huggingface.co/mlx-community/karakuri-lm-8x7b-chat-v0.1-8bit
Base Model(s): tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1
Model Size: 13.1b
Required VRAM: 49.8 GB
Updated: 2026-04-11
Maintainer: mlx-community
Model Type: mixtral
Model Files: 5.4 GB (1-of-10), 5.3 GB (2-of-10), 5.4 GB (3-of-10), 5.3 GB (4-of-10), 5.4 GB (5-of-10), 5.3 GB (6-of-10), 5.4 GB (7-of-10), 5.3 GB (8-of-10), 5.4 GB (9-of-10), 1.6 GB (10-of-10)
Supported Languages: en, ja
Quantization Type: 8bit
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.39.3
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
Torch Data Type: bfloat16
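As a usage illustration tying the settings above together, here is a minimal sketch of loading and prompting the model with mlx-lm's load/generate API. The prompt text and max_tokens value are illustrative assumptions; the SteerLM attribute controls of the upstream Karakuri chat model are not documented on this page and are left out.

```python
# Hedged usage sketch for the 8-bit MLX build; requires a machine with
# enough unified memory for the listed 49.8 GB of weights.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/karakuri-lm-8x7b-chat-v0.1-8bit")

# Build a chat prompt with the bundled tokenizer's chat template.
messages = [{"role": "user", "content": "Introduce yourself in Japanese."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# max_tokens is an arbitrary choice; the model accepts up to 32768 tokens
# of context (Context Length above).
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```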

Best Alternatives to Karakuri Lm 8x7b Chat V0.1 8bit

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ...oE V0.1 DPO F16 4.0bpw H6 EXL2 | 195K / 31.3 GB | 7 | 0 |
| ...oE V0.1 DPO F16 5.0bpw H6 EXL2 | 195K / 38.8 GB | 6 | 0 |
| ...2 Mixtral 8x22b 6.0bpw H8 EXL2 | 64K / 105.8 GB | 1 | 1 |
| WizardLM 2 8x22 EXL2 4.0bpw | 64K / 70.9 GB | 5 | 1 |
| ...M 2 8x22B Beige 5.0bpw H6 EXL2 | 64K / 88.5 GB | 11 | 0 |
| ...M 2 8x22B Beige 2.4bpw H6 EXL2 | 64K / 42.7 GB | 6 | 0 |
| ...M 2 8x22B Beige 3.0bpw H6 EXL2 | 64K / 53.2 GB | 6 | 0 |
| ...rdLM 2 8x22B Beige EXL2 5.0bpw | 64K / 88.4 GB | 6 | 0 |
| ...M 2 8x22B Beige 4.0bpw H6 EXL2 | 64K / 70.8 GB | 5 | 0 |
| ...B Instruct V0.1 8.0bpw H8 EXL2 | 64K / 120.2 GB | 4 | 1 |
Note: a score shown in green (e.g. "73.2") means the alternative outperforms mlx-community/karakuri-lm-8x7b-chat-v0.1-8bit.
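The RAM column tracks quantization bit-width (bpw) closely. As a sanity check, not this site's methodology, a common rule of thumb estimates quantized weight storage as parameters × bpw / 8 bytes; the parameter counts below are assumptions on my part, and runtime overhead (KV cache, activations) is ignored.

```python
# Rule-of-thumb weight memory for quantized models: params * bpw / 8 bytes.
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Estimated weight storage in decimal GB, ignoring runtime overhead."""
    return n_params * bits_per_weight / 8 / 1e9

# Assuming ~141B total params for a Mixtral-style 8x22B model:
print(f"{weight_memory_gb(141e9, 4.0):.1f} GB")  # ~70.5 GB vs 70.9 GB listed
# Assuming ~46.7B total params for a Mixtral-style 8x7B model at 8-bit:
print(f"{weight_memory_gb(46.7e9, 8.0):.1f} GB")  # ~46.7 GB vs 49.8 GB listed
```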

Rank the Karakuri Lm 8x7b Chat V0.1 8bit Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable models for their needs.

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Looking for a specific open-source LLM or SLM? 52,721 models are indexed in total.

Original data from HuggingFace, OpenCompass, and various public Git repos.
Release v20260328a