Umbra MoE 4x10.7 4.0bpw H6 EXL2 by LoneStriker


Tags: Autotrain compatible, Conversational, Endpoints compatible, Exl2, Fblgit/una-solar-10.7b-instruc..., Instruct, Kodonho/solarm-sakurasolar-sle..., Merge, Mergekit, Mixtral, MoE, Nousresearch/nous-hermes-2-sol..., Quantized, Region:us, Safetensors, Sao10k/sensualize-solar-10.7b, Sharded, Tensorflow

Umbra MoE 4x10.7 4.0bpw H6 EXL2 Benchmarks

Scores (nn.n%) indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Umbra MoE 4x10.7 4.0bpw H6 EXL2 (LoneStriker/Umbra-MoE-4x10.7-4.0bpw-h6-exl2)

Umbra MoE 4x10.7 4.0bpw H6 EXL2 Parameters and Internals

Model Type:
General Assistant, Storytelling, RP/ERP

Use Cases
Areas: General Knowledge, RP/ERP

Additional Notes:
Built for general assistance and storytelling; coherence holds up to the full 16,000-token context.

Training Details
Methodology: Mixture of Experts (MoE)
Context Length: 16,000
Model Architecture: MoE combination of Solar models

Input / Output
Input Format: ChatML (see the example below)
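The card lists ChatML as the expected input format. The sketch below shows one way to render a conversation into that template; the helper function and the example messages are illustrative and not part of the model card.

```python
# Minimal sketch: render a chat history into the ChatML template the card
# lists as the input format. Helper name and example messages are illustrative.
def to_chatml(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model continues from here.
    return prompt + "<|im_start|>assistant\n"

print(to_chatml([
    {"role": "system", "content": "You are a helpful storytelling assistant."},
    {"role": "user", "content": "Write a short scene set in a rainy harbor town."},
]))
```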
LLM Name: Umbra MoE 4x10.7 4.0bpw H6 EXL2
Repository: https://huggingface.co/LoneStriker/Umbra-MoE-4x10.7-4.0bpw-h6-exl2
Model Size: 10.7b
Required VRAM: 18.3 GB
Updated: 2025-09-22
Maintainer: LoneStriker
Model Type: mixtral
Instruction-Based: Yes
Model Files: 8.6 GB (1-of-3), 8.6 GB (2-of-3), 1.1 GB (3-of-3)
Quantization Type: exl2
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 4096
Model Max Length: 4096
Transformers Version: 4.36.2
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32000
Torch Data Type: float32
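As an EXL2 quantization, this repository is intended for ExLlamaV2-based backends (e.g. text-generation-webui or TabbyAPI) rather than plain Transformers. Below is a minimal loading sketch assuming the exllamav2 Python package; the class and method names follow its 0.0.x API and may differ in other releases, and the sampler settings and prompt are illustrative.

```python
# Minimal sketch, assuming the exllamav2 package (pip install exllamav2).
# API names follow the 0.0.x releases and may change; prompt/settings are examples.
from huggingface_hub import snapshot_download
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Downloads the ~18.3 GB of EXL2 shards listed above and returns the local path.
model_dir = snapshot_download("LoneStriker/Umbra-MoE-4x10.7-4.0bpw-h6-exl2")

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate cache as layers load
model.load_autosplit(cache)                # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# ChatML-formatted prompt, per the input format noted on the card.
prompt = (
    "<|im_start|>user\n"
    "Write a short scene set in a rainy harbor town.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, 200))
```

As a rough sanity check on the VRAM figure: a 4x10.7B Solar MoE has on the order of 36B total parameters, and at 4.0 bits per weight that comes to roughly 18 GB of weights, in line with the 18.3 GB listed above.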

Best Alternatives to Umbra MoE 4x10.7 4.0bpw H6 EXL2

Best Alternatives | Context / RAM | Downloads | Likes
...mbra MoE 4x10.7 2.4bpw H6 EXL2 | 4K / 11.2 GB | 7 | 1
...Monarch EroSumika 2x10.7B 128K | 32K / 38.4 GB | 8 | 6
...EroSumika 2x10.7B 128K Bpw 4.0 | 32K / 9.9 GB | 6 | 3
Solar Merge V1.0 | 4K / 38.4 GB | 7 | 1
MetaModel Moex8 | 4K / 140 GB | 1730 | 5


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124