Mistral 7B Instruct V0.2 2x7B MoE by perlthoughts


Mistral 7B Instruct V0.2 2x7B MoE is an open-source language model by perlthoughts: a 12.9B-parameter Mixture-of-Experts (MoE) model, merged, fine-tuned, and instruction-based. Required VRAM: 25.8 GB; context length: 32K; license: apache-2.0. Scores: HF Score 65.6, LLM Explorer Score 0.14, ARC 63.0, HellaSwag 84.9, MMLU 60.7, TruthfulQA 68.2, WinoGrande 77.4, GSM8K 39.4.
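As a quick sanity check on the listed figures: the HF Score is the arithmetic mean of the six benchmark scores, and the VRAM figure follows from float16 storage at 2 bytes per parameter. A minimal Python sketch, using only the numbers from the listing above:

```python
# Sanity-check the listed figures; all numbers come from the listing above.
scores = {"ARC": 63.0, "HellaSwag": 84.9, "MMLU": 60.7,
          "TruthfulQA": 68.2, "WinoGrande": 77.4, "GSM8K": 39.4}

hf_score = sum(scores.values()) / len(scores)
print(f"HF Score (mean of six benchmarks): {hf_score:.1f}")  # -> 65.6

# float16 weights cost 2 bytes per parameter:
params_b = 12.9  # billions of parameters
print(f"Approx. fp16 weight memory: {params_b * 2:.1f} GB")  # -> 25.8 GB
```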

Tags: Merged Model · Arxiv:2310.06825 · Autotrain compatible · Conversational · Finetuned · Instruct · License:apache-2.0 · Mixtral · Model-index · MoE · Region:us · Safetensors · Sharded · Tensorflow

Mistral 7B Instruct V0.2 2x7B MoE Benchmarks

Mistral 7B Instruct V0.2 2x7B MoE (perlthoughts/Mistral-7B-Instruct-v0.2-2x7B-MoE)

Mistral 7B Instruct V0.2 2x7B MoE Parameters and Internals

LLM Name: Mistral 7B Instruct V0.2 2x7B MoE
Repository: https://huggingface.co/perlthoughts/Mistral-7B-Instruct-v0.2-2x7B-MoE
Merged Model: Yes
Model Size: 12.9B
Required VRAM: 25.8 GB
Updated: 2024-07-15
Maintainer: perlthoughts
Model Type: mixtral
Instruction-Based: Yes
Model Files: 5.0 GB (1-of-6), 4.9 GB (2-of-6), 5.0 GB (3-of-6), 5.0 GB (4-of-6), 4.9 GB (5-of-6), 1.0 GB (6-of-6)
Model Architecture: MixtralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.37.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: <s>
Vocabulary Size: 32000
Torch Data Type: float16
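
Given the parameters above (MixtralForCausalLM architecture, float16 weights, 32K context), the model should load through the standard Transformers auto classes. A minimal sketch; the prompt and generation settings are illustrative assumptions, not taken from the listing, and it assumes the repository ships a chat template (otherwise the usual Mistral [INST] ... [/INST] format applies):

```python
# Minimal loading sketch for the repository listed above; assumes the
# transformers and torch packages are installed and ~25.8 GB of VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "perlthoughts/Mistral-7B-Instruct-v0.2-2x7B-MoE"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches "Torch Data Type: float16"
    device_map="auto",          # place sharded weights across devices
)

# Illustrative instruction prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain mixture-of-experts briefly."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)  # illustrative setting
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With device_map="auto", the six safetensors shards are distributed across available GPUs, spilling to CPU RAM if the ~25.8 GB of float16 weights do not fit.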

Best Alternatives to Mistral 7B Instruct V0.2 2x7B MoE

Best Alternatives                       Context / RAM   Downloads   Likes
Inf Silent Kunoichi V0.1 2x7B           32K / 25.6 GB   5           0
Inf Silent Kunoichi V0.2 2x7B           32K / 25.6 GB   10          1
NearalMistral 2x7B                      32K / 25.8 GB   52          1
Megatron V3 2x7B                        32K / 25.8 GB   108         3
MergedExpert 2x8b                       32K / 25.8 GB   5           0
MergedExperts 2x8b                      32K / 25.8 GB   5           0
MistarlingMaid 2x7B Base                32K / 25.8 GB   87          0
...afted Hermetic Platypus C 2x7B       32K / 25.8 GB   97          0
...tral Instruct MoE Experimental       32K / 25.8 GB   8           2
Orthogonal 2x7B Base                    32K / 25.8 GB   283         0
Note: a green score (e.g., "73.2") indicates that the model outperforms perlthoughts/Mistral-7B-Instruct-v0.2-2x7B-MoE.

Rank the Mistral 7B Instruct V0.2 2x7B MoE Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.