Mpt 7B Instruct Q8 by Abzu


Mpt 7B Instruct Q8 is an open-source, instruction-tuned language model published by Abzu. Key figures: 7B parameters, 6.9 GB VRAM required, license cc-by-sa-3.0, 8-bit quantized. Benchmark scores: HF Score 44.8, LLM Explorer Score 0.12, ARC 38.4, HellaSwag 75.8, MMLU 30.6, TruthfulQA 35.1, WinoGrande 70.5, GSM8K 7.8.
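The listed 6.9 GB VRAM figure is consistent with a simple back-of-envelope estimate for 8-bit weights. The sketch below is illustrative only: it counts weight storage and ignores activation and KV-cache memory, which account for the remaining overhead.

```python
# Rough VRAM estimate for a 7B model quantized to 8 bits per weight.
params = 7_000_000_000   # ~7 billion parameters (nominal "7B")
bytes_per_param = 1      # q8: roughly one byte per weight
weights_gib = params * bytes_per_param / 1024**3

print(f"{weights_gib:.1f} GiB")  # ~6.5 GiB for weights alone
```

Adding tokenizer, embedding, and runtime overhead brings the total close to the 6.9 GB file size listed on the card.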

  Arxiv:2010.04245   Arxiv:2108.12409   Arxiv:2205.14135   8-bit   Composer   Custom code   Dataset:mosaicml/dolly_hhrlhf   Instruct   Llm-foundry   Mosaicml   Mpt   Q8   Quantized   Region:us   Safetensors
Model Card on HF 🤗: https://huggingface.co/Abzu/mpt-7b-instruct-q8

Mpt 7B Instruct Q8 Benchmarks

Mpt 7B Instruct Q8 (Abzu/mpt-7b-instruct-q8)

Mpt 7B Instruct Q8 Parameters and Internals

Model Type: Instruction following
Use Cases:
  Primary use case: short-form instruction following
  Limitations: can produce factually incorrect, lewd, biased, or offensive outputs
  Considerations: the model should not be relied on to produce factually accurate information
Training Details:
  Data sources: Databricks Dolly-15k; Anthropic Helpful and Harmless (HH-RLHF)
  Context length: 2048
  Training time: 2.3 hours
  Hardware: 8x A100-40GB GPUs
  Model architecture: modified decoder-only transformer
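One of the modifications behind the 2048-token context is ALiBi (arXiv:2108.12409, listed in the tags above), which replaces learned positional embeddings with a fixed linear bias on attention logits. A minimal sketch of the per-head slope schedule from that paper, valid only for power-of-two head counts (the 32-head figure for MPT-7B is an assumption from the upstream architecture, not from this card):

```python
def alibi_slopes(n_heads: int) -> list:
    # Geometric sequence of per-head slopes 2^(-8i/n), i = 1..n,
    # as described in the ALiBi paper; assumes n_heads is a power of two.
    start = 2 ** (-8 / n_heads)
    return [start ** (i + 1) for i in range(n_heads)]

# Each head adds bias -slope * token_distance to its attention logits,
# so heads with larger slopes attend more locally.
print(alibi_slopes(8))  # 0.5, 0.25, ... down to 2**-8
```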
Input/Output:
  Input format: Dolly-15k format
  Accepted modalities: text
  Output format: text
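The "Dolly-15k format" referenced above is the Alpaca-style instruction template used by the upstream mosaicml/mpt-7b-instruct model. The helper below is hypothetical (not part of this repo), and the exact template wording is assumed from the upstream model card:

```python
# Alpaca/Dolly-style instruction template (wording assumed from the
# upstream mosaicml/mpt-7b-instruct card).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    # Wrap a raw user instruction in the template before generation.
    return PROMPT_TEMPLATE.format(instruction=instruction)

print(build_prompt("Summarize the plot of Hamlet in one sentence."))
```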
LLM Name: Mpt 7B Instruct Q8
Repository 🤗: https://huggingface.co/Abzu/mpt-7b-instruct-q8
Base Model(s): Mpt 7B Instruct (mosaicml/mpt-7b-instruct)
Model Size: 7B
Required VRAM: 6.9 GB
Updated: 2026-04-03
Maintainer: Abzu
Model Type: mpt
Instruction-Based: Yes
Model Files: 6.9 GB
Quantization Type: q8
Model Architecture: MPTForCausalLM
License: cc-by-sa-3.0
Model Max Length: 2048
Transformers Version: 4.30.2
Tokenizer Class: GPTNeoXTokenizer
Vocabulary Size: 50432
Torch Data Type: bfloat16

Best Alternatives to Mpt 7B Instruct Q8

Model | Context / RAM | Downloads | Likes
Mpt 7B Chat Q8 | 0K / 6.9 GB | 21 | 1
Mpt 7B Chat | 0K / 13.3 GB | 80920 | 518
Mpt 7B Instruct | 0K / 13.3 GB | 7946 | 470
Mpt 7B Int8 Ov | 0K / 0 GB | 13 | 0
Mpt 7B 8K Instruct | 0K / 13.3 GB | 2012 | 27
SEA LION V1 7B IT GPTQ | 0K / 5.5 GB | 11 | 0
SEA LION V1 7B IT GBTQ | 0K / 5.5 GB | 7 | 0
Sea Lion 7B Instruct Gptq | 0K / 5.5 GB | 5 | 0
Sea Lion 7B Instruct | 0K / 15 GB | 208 | 23
Sea Lion 7B Instruct Research | 0K / 15 GB | 111 | 4
Note: a green score (e.g. "73.2") indicates that the model performs better than Abzu/mpt-7b-instruct-q8.

Rank the Mpt 7B Instruct Q8 Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Looking for a specific open-source LLM or SLM? 52758 models are indexed in total.

Original data from HuggingFace, OpenCompass and various public git repos.
Check out Ag3ntum, our secure, self-hosted AI agent for server management.
Release v20260328a