Lumosia MoE 4x10.7 GGUF by TheBloke


Lumosia MoE 4x10.7 GGUF is an open-source language model quantized by TheBloke. Features: 10.7B LLM, 12 GB VRAM required, Apache-2.0 license, MoE, quantized (GGUF), LLM Explorer Score: 0.12.


Lumosia MoE 4x10.7 GGUF Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Lumosia MoE 4x10.7 GGUF Parameters and Internals

Model Type: mixture of experts, text generation
Use Cases: research
Additional Notes: This is a highly experimental model built with the Mixture of Experts (MoE) technique.
Training Details:
- Data Sources: DopeorNope/SOLARC-M-10.7B, maywell/PiVoT-10.7B-Mistral-v0.2-RP, kyujinpy/Sakura-SOLAR-Instruct, jeonsworld/CarbonVillain-en-10.7B-v1
- Methodology: Mixture of Experts ensemble built from multiple SOLAR models
- Context Length: 16000
- Model Architecture: Mixture of Experts (MoE)
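The MoE architecture noted above can be illustrated with a toy sketch of top-k expert routing in Python. This is an illustration of the general technique only; the function names, the linear router, and the toy experts are invented for this sketch and do not reflect the actual Mixtral/SOLAR implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of routing scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, router_weights, top_k=2):
    """Route a scalar input to the top_k experts and mix their outputs.

    experts: callables standing in for the four SOLAR-based experts.
    router_weights: one toy linear routing weight per expert (hypothetical).
    """
    gates = softmax([w * x for w in router_weights])
    top = sorted(range(len(experts)), key=lambda i: gates[i], reverse=True)[:top_k]
    norm = sum(gates[i] for i in top)  # renormalize over the selected experts
    return sum((gates[i] / norm) * experts[i](x) for i in top)

# Four toy experts stand in for the four merged SOLAR models.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x - 1, lambda x: x * x]
print(moe_forward(1.0, experts, router_weights=[1.0, 0.0, -1.0, 0.5]))
```

Only the top-k experts run per token, which is why a 4x10.7B merge can be cheaper at inference than a dense model of the same total size.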
Input/Output:
- Input Format: ### System: ### USER:{prompt} ### Assistant:
- Accepted Modalities: text
- Output Format: text generation
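The input format above can be assembled programmatically. A minimal sketch, assuming the three markers are separated by newlines; the listing flattens the template onto one line, so the exact whitespace is a guess:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the '### System / ### USER / ### Assistant' format.

    Note: newline placement is an assumption; the source listing shows the
    markers on a single line without explicit line breaks.
    """
    return (
        f"### System:\n{system}\n"
        f"### USER:{user}\n"
        f"### Assistant:"
    )

print(build_prompt("You are a helpful assistant.", "Summarize MoE in one sentence."))
```

The string ends at `### Assistant:` so the model continues generation from that marker.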
LLM Name: Lumosia MoE 4x10.7 GGUF
Repository 🤗: https://huggingface.co/TheBloke/Lumosia-MoE-4x10.7-GGUF
Model Name: Lumosia MoE 4x10.7
Model Creator: Steel
Base Model(s): Steelskull/Lumosia-MoE-4x10.7
Model Size: 10.7b
Required VRAM: 12 GB
Updated: 2026-04-04
Maintainer: TheBloke
Model Type: mixtral
Model Files: 12.0 GB, 15.8 GB, 15.7 GB, 15.6 GB, 20.3 GB, 20.4 GB, 20.3 GB, 24.8 GB, 24.9 GB, 24.8 GB, 29.6 GB, 38.4 GB
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: apache-2.0
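With twelve quantization files ranging from 12.0 GB to 38.4 GB, a simple way to choose one is to take the largest file that fits your memory budget. A sketch using only the sizes from the listing above (the quant-name-to-size mapping is not shown here, so sizes alone are used):

```python
# File sizes (GB) listed for this repo's GGUF quantizations.
FILE_SIZES_GB = [12.0, 15.8, 15.7, 15.6, 20.3, 20.4, 20.3, 24.8, 24.9, 24.8, 29.6, 38.4]

def largest_fitting(sizes, budget_gb):
    """Return the largest file size that fits within budget_gb, or None."""
    fitting = [s for s in sizes if s <= budget_gb]
    return max(fitting) if fitting else None

# The listing quotes 12 GB required VRAM, which matches the smallest file.
print(largest_fitting(FILE_SIZES_GB, 12.0))  # -> 12.0
print(largest_fitting(FILE_SIZES_GB, 24.0))  # -> 20.4
```

In practice leave some headroom for the KV cache, which grows with the 16000-token context, so a budget a few GB below total VRAM is safer.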

Best Alternatives to Lumosia MoE 4x10.7 GGUF

| Best Alternatives | Context / RAM | Downloads / Likes |
|---|---|---|
| Nous Hermes 2 SOLAR 10.7B GGUF | 0K / 4.5 GB | 2326113 |
| SOLAR 10.7B Instruct V1.0 GGUF | 0K / 4.5 GB | 205682 |
| OPEN SOLAR KO 10.7B GGUF | 0K / 4.1 GB | 3211 |
| Tess 10.7B V1.5B GGUF | 0K / 4 GB | 3527 |
| Sensualize Solar 10.7B GGUF | 0K / 4.5 GB | 27810 |
| CarbonVillain En 10.7B V4 GGUF | 0K / 4.5 GB | 3466 |
| Frostwind 10.7B V1 GGUF | 0K / 4.5 GB | 2724 |
| SOLAR 10.7B V1.0 GGUF | 0K / 4.5 GB | 30214 |
| PiVoT 10.7B Mistral V0.2 GGUF | 0K / 4.5 GB | 3055 |
| LMCocktail 10.7B V1 GGUF | 0K / 4.5 GB | 1095 |



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a