Mpt 7B Instruct by mosaicml


Tags: Arxiv:2010.04245 · Arxiv:2108.12409 · Arxiv:2205.14135 · Autotrain compatible · Composer · Custom code · Dataset:mosaicml/dolly_hhrlhf · Instruct · Llm-foundry · Mosaicml · Mpt · Pytorch · Region:us · Sharded


Mpt 7B Instruct Parameters and Internals

Model Type: Instruction following, text generation

Use Cases
Areas: Research, commercial applications
Primary Use Cases: Short-form instruction following
Limitations: Can produce factually incorrect or offensive outputs
Considerations: Should not be relied on to produce factually accurate information

Additional Notes
Uses the custom MPT architecture, so loading with Hugging Face Transformers requires `trust_remote_code=True`; a minimal loading sketch follows.
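
A minimal loading sketch, assuming the standard Transformers `AutoModelForCausalLM` entry point; the custom MPT code is fetched from the repository at load time:

```python
import torch
import transformers

# The MPT architecture ships as custom code inside the repository,
# so trust_remote_code=True is required for the Auto* classes.
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-instruct',
    torch_dtype=torch.bfloat16,  # matches the checkpoint's stored dtype
    trust_remote_code=True,
)
```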
Training Details
Data Sources: databricks/databricks-dolly-15k, Anthropic/hh-rlhf
Methodology: Decoder-only transformer modified with FlashAttention, ALiBi in place of positional embeddings, and no biases
Context Length: 2048
Training Time: 2.3 hours
Hardware Used: 8x A100-40GB GPUs
Model Architecture: Modified decoder-only transformer with 32 layers, 32 attention heads, d_model 4096, and a vocabulary size of 50432 (see the config sketch below)
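
These hyperparameters can be read back from the repository's config; a sketch follows, noting that the field names belong to the custom MPTConfig and should be verified against the repo:

```python
import transformers

config = transformers.AutoConfig.from_pretrained(
    'mosaicml/mpt-7b-instruct', trust_remote_code=True)

# Field names follow the custom MPTConfig; verify against the repo.
print(config.n_layers)     # expected: 32
print(config.n_heads)      # expected: 32
print(config.d_model)      # expected: 4096
print(config.vocab_size)   # expected: 50432
print(config.max_seq_len)  # expected: 2048
```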
Input Output
Input Format: Prompts built from the instruction and response keys of the dolly-15k template (see the template sketch below)
Accepted Modalities: Text
Output Format: Textual response
Performance Tips: Use torch.autocast for lower-precision runs; ALiBi allows the maximum sequence length to be raised beyond the 2048-token training context (see the generation sketch below)
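
A sketch of that prompt template, using the dolly-15k-style instruction/response keys; the preamble shown is the common Alpaca-style wording and is an assumption to verify against the model card:

```python
# Dolly/Alpaca-style prompt template. The preamble and key strings
# are assumptions; check the official model card for the exact format.
PROMPT_TEMPLATE = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""

prompt = PROMPT_TEMPLATE.format(
    instruction="Explain ALiBi positional encoding in one sentence.")
```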
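
And a sketch of the two performance tips: generating under `torch.autocast`, and raising `max_seq_len` past the 2048-token training context, which ALiBi permits because there are no learned positional embeddings. The 4096 value here is an arbitrary example, not an official setting:

```python
import torch
import transformers

# Raise the maximum sequence length via the config; ALiBi lets the
# model extrapolate beyond the 2048 tokens it was trained on.
config = transformers.AutoConfig.from_pretrained(
    'mosaicml/mpt-7b-instruct', trust_remote_code=True)
config.max_seq_len = 4096  # example value

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-instruct', config=config,
    torch_dtype=torch.bfloat16, trust_remote_code=True).to('cuda')
tokenizer = transformers.AutoTokenizer.from_pretrained('mosaicml/mpt-7b-instruct')

inputs = tokenizer(prompt, return_tensors='pt').to('cuda')  # `prompt` from the sketch above
with torch.autocast('cuda', dtype=torch.bfloat16):  # lower-precision run
    output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```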
LLM Name: Mpt 7B Instruct
Repository: https://huggingface.co/mosaicml/mpt-7b-instruct
Model Size: 7b
Required VRAM: 13.3 GB
Updated: 2025-06-09
Maintainer: mosaicml
Model Type: mpt
Instruction-Based: Yes
Model Files: 9.9 GB (1-of-2), 3.4 GB (2-of-2)
Model Architecture: MPTForCausalLM
License: apache-2.0
Model Max Length: 2048
Transformers Version: 4.28.1
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50432
Torch Data Type: bfloat16
Mpt 7B Instruct (mosaicml/mpt-7b-instruct)
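
As a sanity check on the VRAM figure: roughly 6.7 B parameters at 2 bytes each (bfloat16) comes to about 13.3 to 13.4 GB, matching the sum of the two shards listed above (9.9 GB + 3.4 GB).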

Quantized Models of Mpt 7B Instruct

| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| Mpt 7B Instruct Q8 | 2 | 19 | 6 GB |

Best Alternatives to Mpt 7B Instruct

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Mpt 7B Chat | 0K / 13.3 GB | 84546 | 514 |
| Mpt 7B Int8 Ov | 0K / 0 GB | 20 | 0 |
| Sea Lion 7B Instruct | 0K / 15 GB | 208 | 23 |
| Mpt 7B 8K Instruct | 0K / 13.3 GB | 615 | 26 |
| Sea Lion 7B Instruct Research | 0K / 15 GB | 11 | 14 |
| Results | 0K / 13.3 GB | 21 | 0 |
| ...7B 8K Instruct Peft Compatible | 0K / 13.3 GB | 28 | 1 |
| Mpt 7B 8K Chat Sharded Bf16 | 0K / 13.4 GB | 17 | 1 |
| Vigogne Mpt 7B Instruct | 0K / 13.4 GB | 24 | 0 |
| ...pt 7B Instruct Peft Compatible | 0K / 13.3 GB | 37 | 0 |



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124