Llama3merge7 15B MoE by allknowingroger


Llama3merge7 15B MoE is an open-source language model by allknowingroger. Key specifications: 8B model size, 27.5 GB required VRAM, 8K context, apache-2.0 license, HF score 68.1, LLM Explorer score 0.14. Benchmark results: ARC 61.6, HellaSwag 82.9, MMLU 64.5, TruthfulQA 55.3, WinoGrande 77.8, GSM8K 66.7.
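For orientation, here is a minimal loading sketch using Hugging Face Transformers. It assumes the repository id shown in the details below (allknowingroger/Llama3merge7-15B-MoE), Transformers 4.40.0 or newer, the accelerate package for device_map, and enough memory for roughly 27.5 GB of bfloat16 weights; the prompt is purely illustrative.

```python
# Minimal sketch, not an official example: load the checkpoint in bfloat16
# (matching the listed torch data type) and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Llama3merge7-15B-MoE"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # listed torch data type
    device_map="auto",           # spreads the 15 shards across devices (needs accelerate)
)

prompt = "Explain what a Mixture-of-Experts language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```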

Tags: autotrain compatible, conversational, endpoints compatible, frankenMoE, LazyMergekit, merge, mergekit, mixtral, MoE, region:us, safetensors, sharded, TensorFlow. Base models: Kukedlc/NeuralLlamita-3-8B-v0.2, cognitivecomputations/dolphin-2.9-llama3-8b.

Llama3merge7 15B MoE Benchmarks

Llama3merge7 15B MoE (allknowingroger/Llama3merge7-15B-MoE)
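The HF score of 68.1 reported above appears to be the plain average of the six listed benchmark scores; that averaging rule is an assumption, but the arithmetic matches. A quick check in Python:

```python
# Sanity check: the listed HF score of 68.1 matches the simple mean of the
# six benchmark scores reported for this model.
scores = {
    "ARC": 61.6,
    "HellaSwag": 82.9,
    "MMLU": 64.5,
    "TruthfulQA": 55.3,
    "WinoGrande": 77.8,
    "GSM8K": 66.7,
}
average = sum(scores.values()) / len(scores)
print(f"Average of {len(scores)} benchmarks: {average:.1f}")  # -> 68.1
```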

Llama3merge7 15B MoE Parameters and Internals

Model Type: MoE
Additional Notes: Mixture of Experts (MoE) model created using LazyMergekit (a hedged merge-config sketch follows this table).
LLM Name: Llama3merge7 15B MoE
Repository: https://huggingface.co/allknowingroger/Llama3merge7-15B-MoE
Base Model(s): Kukedlc/NeuralLlamita-3-8B-v0.2, cognitivecomputations/dolphin-2.9-llama3-8b
Model Size: 8B
Required VRAM: 27.5 GB
Updated: 2024-12-03
Maintainer: allknowingroger
Model Type: mixtral
Model Files: 15 sharded safetensors files totalling 27.5 GB (1.1 GB for shard 1 of 15, 2.0 GB each for shards 2-14, 0.4 GB for shard 15 of 15)
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 8192
Model Max Length: 8192
Transformers Version: 4.40.0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|begin_of_text|>
Vocabulary Size: 128258
Torch Data Type: bfloat16
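As noted in the table, the model is a frankenMoE built with LazyMergekit, which drives mergekit's MoE mode. Below is a hedged sketch of how such a merge is typically specified and run for the two listed base models; the gate mode, positive prompts, file names, and output directory are illustrative assumptions, not the author's actual recipe.

```python
# Illustrative sketch of a mergekit MoE merge (the kind LazyMergekit automates).
# The config values below are assumptions; only the two base models come from
# the model card data above.
import subprocess
import textwrap

config = textwrap.dedent("""\
    base_model: Kukedlc/NeuralLlamita-3-8B-v0.2
    gate_mode: hidden                 # route tokens using hidden-state gates
    dtype: bfloat16
    experts:
      - source_model: Kukedlc/NeuralLlamita-3-8B-v0.2
        positive_prompts: ["chat", "reasoning"]
      - source_model: cognitivecomputations/dolphin-2.9-llama3-8b
        positive_prompts: ["code", "instructions"]
""")

with open("moe-config.yaml", "w") as f:
    f.write(config)

# mergekit-moe writes a MixtralForCausalLM-style checkpoint to the output directory.
subprocess.run(["mergekit-moe", "moe-config.yaml", "merged-model"], check=True)
```

LazyMergekit itself is typically run as a notebook that generates a config along these lines and uploads the merged model to the Hub.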

Best Alternatives to Llama3merge7 15B MoE

Best Alternatives                    Context / RAM      Downloads / Likes
L3.1 MoE 2x8B V0.2                   128K / 27.3 GB     199
...ama 3 Aplite Instruct 4x8B MoE    8K / 50 GB         3639
Lamma3merge3 15B MoE                 8K / 27.5 GB       111
Lamma3merge2 15B MoE                 8K / 27.5 GB       100
Mergkit 1                            8K / 22.6 GB       80
Llama 3 8B Shisa 2x8B                8K / 7.4 GB        42
Llama3merge8 15B MoE                 8K / 27.5 GB       50
Llama3merge6 15B MoE                 8K / 27.5 GB       50
...8B Finetune All V6 Epoch2 V0.1    2K / 18 GB         41
...oE 8B Pretrain 0520 Iter134999    2K / 18 GB         150

Original data from HuggingFace, OpenCompass and various public git repos.