Dolphin 2.8 Mistral 7B V02 AWQ by solidrust


Dolphin 2.8 Mistral 7B V02 AWQ is an open-source language model published by solidrust. Features: 7B parameters, required VRAM: 4.2 GB, context length: 32K, license: apache-2.0, quantized (AWQ), instruction-based, LLM Explorer Score: 0.13.
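The 4.2 GB VRAM figure is consistent with 4-bit AWQ weights. A rough back-of-envelope check (the parameter count below is an assumption, approximately that of Mistral-7B):

```python
# Rough sanity check of the 4.2 GB figure: AWQ packs weights at 4 bits
# per parameter; group scales, zero points, embeddings and runtime
# buffers account for the remainder up to 4.2 GB.
params = 7.24e9          # approximate Mistral-7B parameter count (assumption)
bytes_per_param = 4 / 8  # 4-bit quantized weights
weights_gb = params * bytes_per_param / 1e9
print(round(weights_gb, 2))  # ~3.62 GB of raw 4-bit weights
```

The gap between ~3.6 GB of packed weights and the listed 4.2 GB file size is expected: AWQ stores per-group scales and zero points alongside the packed weights, and embeddings are typically kept in higher precision.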

Tags: 4-bit, autotrain compatible, AWQ, base model: dphn/dolphin-2.8-mi..., base model (quantized): dphn/dolp..., ChatML, conversational, dataset: cognitivecomputations/... (x3), dataset: jondurbin/airoboros-2...., dataset: m-a-p/code-feedback, dataset: m-a-p/codefeedback-fil..., dataset: teknium/openhermes-2.5, en, endpoints compatible, generated from trainer, instruct, mistral, quantized, region: us, safetensors

Dolphin 2.8 Mistral 7B V02 AWQ Benchmarks

Benchmark percentages indicate how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Dolphin 2.8 Mistral 7B V02 AWQ Parameters and Internals

LLM Name: Dolphin 2.8 Mistral 7B V02 AWQ
Repository: https://huggingface.co/solidrust/dolphin-2.8-mistral-7b-v02-AWQ
Model Name: dolphin-2.8-mistral-7b-v02
Model Creator: cognitivecomputations
Base Model(s): cognitivecomputations/dolphin-2.8-mistral-7b-v02
Model Size: 7b
Required VRAM: 4.2 GB
Updated: 2026-04-22
Maintainer: solidrust
Model Type: mistral
Instruction-Based: Yes
Model Files: 4.2 GB
Supported Languages: en
AWQ Quantization: Yes
Quantization Type: awq
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.38.2
Tokenizer Class: LlamaTokenizer
Padding Token: </s>
Vocabulary Size: 32002
Torch Data Type: float16
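The model is tagged ChatML above, and the vocabulary size of 32002 is consistent with Mistral's base 32000 tokens plus the two ChatML special tokens. A minimal sketch of the prompt format (the helper function is hypothetical, not part of the model repository):

```python
# Hypothetical helper illustrating the ChatML turn format used by
# Dolphin 2.8 models. The <|im_start|> and <|im_end|> special tokens
# are the two additions that bring the vocabulary to 32002.
def build_chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are Dolphin, a helpful assistant.",
    "Explain AWQ quantization in one sentence.",
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template` on the repository's tokenizer should produce the same structure; the sketch above just makes the turn boundaries explicit.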

Best Alternatives to Dolphin 2.8 Mistral 7B V02 AWQ

Best Alternatives                   Context / RAM    Downloads / Likes
Mistral 7B Instruct V0.2 AWQ        32K / 4.2 GB     50
Mistral 7B Instruct V0.2 AWQ        32K / 4.2 GB     37577552
Mistral 7B Instruct V0.3 AWQ        32K / 4.2 GB     35489
...reeze 7B 32K Instruct V1.0 AWQ   32K / 4.7 GB     90
Breeze 7B Instruct V1.0 AWQ         32K / 4.6 GB     60
Mistral 7B Instruct V0.1 AWQ        32K / 4.2 GB     41182
Breeze 7B Instruct V1.0 AWQ         32K / 4.6 GB     71
Mistral 7B Instruct V0.3 AWQ        32K / 4.2 GB     60
...Instruct V0.2 AWQ 4bit Smashed   32K / 4.2 GB     81
Mistral 7B Instruct V0.3 AWQ        32K / 4.2 GB     2691



Original data from HuggingFace, OpenCompass and various public git repos.