ZephRP M7b GPTQ by TheBloke


4-bit · Autotrain compatible · Base model (quantized): royallab/ZephRP-m7b · en · GPTQ · Mistral · Quantized · Region: us · Safetensors


ZephRP M7b GPTQ Parameters and Internals

Model Type: mistral, text-generation
Use Cases:
- Limitations: The model is not intended to supply factual information or advice.
Additional Notes: The model can control response length with the length modifiers learned from LimaRP v3 training (e.g. 'micro', 'medium'); see the prompt sketch below.
Supported Languages: en (English)
Training Details:
- Data Sources: LimaRP dataset
- Methodology: LimaRP v3 instruction training combined with the Zephyr model's capabilities via a PEFT adapter (a merge sketch follows this list)
- Hardware Used: a single L40 GPU
- Model Architecture: Mistral 7B
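
As a rough illustration of that methodology, the sketch below merges a LoRA-style PEFT adapter into a Zephyr base model with the Hugging Face peft library. It is a minimal sketch under stated assumptions: the card does not name the exact Zephyr checkpoint, and "your-org/limarp-v3-peft-adapter" is a hypothetical repo standing in for the unnamed adapter.

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load a Zephyr base model in full precision; merging LoRA weights requires
# unquantized tensors. The exact base checkpoint used for ZephRP is an assumption here.
base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta", torch_dtype="auto")

# Hypothetical adapter repo: the card does not name the actual LimaRP v3 adapter.
model = PeftModel.from_pretrained(base, "your-org/limarp-v3-peft-adapter")

# Fold the adapter into the base weights, producing a standalone merged checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("ZephRP-m7b-merged")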
Input Output:
- Input Format: Alpaca instruction format of LimaRP v3 (see the prompt sketch below)
- Accepted Modalities: text
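
For reference, a LimaRP v3 style prompt might look like the following. This is a sketch based on public LimaRP examples: the persona field names and the exact wording of the instruction may differ from the training data, and the "(length = medium)" suffix shows how the length modifiers mentioned above are applied.

### Instruction:
Character's Persona: {description of the character the model plays}
User's Persona: {description of the user's character}
Scenario: {short summary of the current scene}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplay conversation with User below this line. Do not write dialogue and narration for User.

### Input:
User: {user's message}

### Response: (length = medium)
Character: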
LLM Name: ZephRP M7b GPTQ
Repository: https://huggingface.co/TheBloke/ZephRP-m7b-GPTQ
Model Name: Zephrp m7b
Model Creator: The Royal Lab
Base Model(s): royallab/ZephRP-m7b (ZephRP M7b)
Model Size: 1.2b
Required VRAM: 4.2 GB
Updated: 2025-09-16
Maintainer: TheBloke
Model Type: mistral
Model Files: 4.2 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: MistralForCausalLM
License: cc-by-nc-4.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.34.0
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32000
Torch Data Type: float16
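
A minimal sketch for loading this GPTQ checkpoint with transformers, assuming the optimum and auto-gptq packages are installed (GPTQ support landed in transformers 4.32, before the 4.34.0 pinned above) and accelerate is available for device placement. The generation settings are illustrative, not tuned values from this card.

from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheBloke/ZephRP-m7b-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# Transformers loads GPTQ checkpoints natively when optimum and auto-gptq are installed;
# device_map="auto" places the ~4.2 GB of quantized weights on the available GPU.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "### Instruction:\nPlay the role of Character.\n\n### Response: (length = medium)\nCharacter:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling settings are illustrative; the model accepts up to 32768 tokens of context.
output = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))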

Best Alternatives to ZephRP M7b GPTQ

Model | Context / RAM | Downloads | Likes
... Finetune 16bit Ver9 Main GPTQ | 32K / 4.2 GB | 6 | 0
Dictalm2.0 GPTQ | 32K / 4.2 GB | 33 | 0
Dictalm2.0 Instruct GPTQ | 32K / 4.2 GB | 25 | 0
Multi Verse Model GPTQ | 32K / 4.2 GB | 6 | 1
Turdus GPTQ | 32K / 4.2 GB | 7 | 5
Garrulus GPTQ | 32K / 4.2 GB | 5 | 3
HamSter 0.1 GPTQ | 32K / 4.2 GB | 4 | 2
Phoenix GPTQ | 32K / 4.2 GB | 10 | 1
Mistral Ft Optimized 1227 GPTQ | 32K / 4.2 GB | 5 | 2
...hat 3.5 1210 Seraph Slerp GPTQ | 32K / 4.2 GB | 4 | 2



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124