L3.1 MoE 2x8B V0.2 by moeru-ai

L3.1 MoE 2x8B V0.2 is an open-source language model by moeru-ai. Key specs: 8B-parameter experts in a 2x8B mixture-of-experts (MoE) configuration, 27.3 GB of VRAM required, 128K context window, llama3.1 license, LLM Explorer Score: 0.25.
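Since the repository ships standard sharded safetensors and is tagged as endpoints-compatible, it should load through the usual transformers API. A minimal sketch, assuming transformers >= 4.45.2 (the version listed in the parameters below) and enough GPU memory for the ~27.3 GB bfloat16 checkpoint:

```python
# Minimal sketch: load L3.1-Moe-2x8B-v0.2 and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moeru-ai/L3.1-Moe-2x8B-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint is stored in bfloat16
    device_map="auto",           # spread the six shards across available GPUs
)

# Llama-3.1-style chat formatting via the bundled chat template.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```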

Base models: arliai/llama-3.1-8b..., joseph717171/llama-..., merge:arliai/llama-..., merge:joseph717171/... (full names in the parameters table below)
Tags: Conversational, Endpoints compatible, Frankenmoe, Merge, Mergekit, Mixtral, Model-index, MoE, Region:us, Safetensors, Sharded, Tensorflow
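The Frankenmoe / Mergekit / Mixtral tags indicate a "frankenmoe": dense Llama 3.1 finetunes stitched into a MixtralForCausalLM checkpoint with mergekit's MoE mode. A hypothetical sketch of such a config, using the two base models listed in the parameters table; the actual gate mode and routing prompts used by moeru-ai are not shown on this page:

```python
# Hypothetical mergekit-moe config for a 2x8B frankenmoe like this one.
# gate_mode and positive_prompts are illustrative, not moeru-ai's actual values.
import yaml

moe_config = {
    # Dense model whose non-expert weights (attention, embeddings) are reused.
    "base_model": "Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base",
    "gate_mode": "hidden",  # route on hidden states computed from the prompts
    "dtype": "bfloat16",    # matches the checkpoint's Torch Data Type
    "experts": [
        {
            "source_model": "Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base",
            "positive_prompts": ["reasoning", "general assistance"],
        },
        {
            "source_model": "ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2",
            "positive_prompts": ["roleplay", "creative writing"],
        },
    ],
}

with open("moe-config.yaml", "w") as f:
    yaml.safe_dump(moe_config, f, sort_keys=False)

# The merged checkpoint would then be built with mergekit's MoE entry point:
#   mergekit-moe moe-config.yaml ./L3.1-Moe-2x8B-v0.2
```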

L3.1 MoE 2x8B V0.2 Benchmarks

nn.n% — how the model scores relative to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").
L3.1 MoE 2x8B V0.2 (moeru-ai/L3.1-Moe-2x8B-v0.2)

L3.1 MoE 2x8B V0.2 Parameters and Internals

LLM Name: L3.1 MoE 2x8B V0.2
Repository: https://huggingface.co/moeru-ai/L3.1-Moe-2x8B-v0.2
Base Model(s): Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base, ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
Model Size: 8b
Required VRAM: 27.3 GB
Updated: 2026-04-05
Maintainer: moeru-ai
Model Type: mixtral
Model Files: 5.0 GB (1-of-6), 4.9 GB (2-of-6), 5.0 GB (3-of-6), 5.0 GB (4-of-6), 4.9 GB (5-of-6), 2.5 GB (6-of-6)
Model Architecture: MixtralForCausalLM
License: llama3.1
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.45.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|begin_of_text|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
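As a sanity check, the six shard sizes sum to the listed 27.3 GB of required VRAM, which at bfloat16's 2 bytes per parameter implies roughly 13.7B total parameters (plausible for two 8B experts sharing attention and embedding weights). A quick back-of-the-envelope check in Python:

```python
# Sanity check: the listed shard sizes should sum to the 27.3 GB of
# required VRAM, and imply the total parameter count at bfloat16.
shards_gb = [5.0, 4.9, 5.0, 5.0, 4.9, 2.5]  # the six model files above

total_gb = sum(shards_gb)
print(f"checkpoint size: {total_gb:.1f} GB")  # -> 27.3 GB

bytes_per_param = 2  # bfloat16 = 2 bytes per weight
params = total_gb * 1e9 / bytes_per_param
print(f"approx. parameters: {params / 1e9:.1f}B")  # -> ~13.7B
```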

Best Alternatives to L3.1 MoE 2x8B V0.2

Best Alternatives | Context / RAM | Downloads | Likes
...ama 3 Aplite Instruct 4x8B MoE | 8K / 50 GB | 36 | 39
Lamma3merge3 15B MoE | 8K / 27.5 GB | 11 | 1
Lamma3merge2 15B MoE | 8K / 27.5 GB | 10 | 0
Llama3merge7 15B MoE | 8K / 27.5 GB | 7 | 0
Mergkit 1 | 8K / 22.6 GB | 8 | 0
Llama 3 8B Shisa 2x8B | 8K / 7.4 GB | 4 | 2
Llama3merge8 15B MoE | 8K / 27.5 GB | 5 | 0
Llama3merge6 15B MoE | 8K / 27.5 GB | 5 | 0
...8B Finetune All V6 Epoch2 V0.1 | 2K / 18 GB | 4 | 1
...oE 8B Pretrain 0520 Iter134999 | 2K / 18 GB | 15 | 0

Original data from HuggingFace, OpenCompass and various public git repos.