| Field | Value |
|---|---|
| LLM Name | Mistral 7B Instruct V0.3 GGUF |
| Repository 🤗 | https://huggingface.co/second-state/Mistral-7B-Instruct-v0.3-GGUF |
| Model Name | Mistral-7B-Instruct-v0.3 |
| Model Creator | mistralai |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 2.7 GB |
| Updated | 2025-10-02 |
| Maintainer | second-state |
| Model Type | mistral |
| Instruction-Based | Yes |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | gguf, q2, q4_k, q5_k |
| Model Architecture | MistralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.42.0.dev0 |
| Vocabulary Size | 32768 |
| Torch Data Type | bfloat16 |
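A minimal sketch of running one of these GGUF files locally, assuming the `llama-cpp-python` and `huggingface_hub` packages are installed. The choice of runtime is an assumption (the maintainer also targets LlamaEdge), and the GGUF filename below is hypothetical, following the repository's naming pattern; check the repo's file list for the quantization you actually want. The 32768-token context length matches the table above.

```python
# Sketch: download a quantized GGUF file from the repository above and run it
# with llama-cpp-python. The filename is an assumption -- verify it on the Hub.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename following the repo's naming convention.
gguf_path = hf_hub_download(
    repo_id="second-state/Mistral-7B-Instruct-v0.3-GGUF",
    filename="Mistral-7B-Instruct-v0.3-Q4_K_M.gguf",
)

# n_ctx=32768 matches the model card; lower it to reduce memory use.
llm = Llama(model_path=gguf_path, n_ctx=32768)

# Instruction-tuned model, so use the chat API; llama-cpp-python applies the
# Mistral [INST] chat template from the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is."}]
)
print(out["choices"][0]["message"]["content"])
```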
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Mistral 7B Instruct V0.2.gguf | 32K / 14.5 GB | 19 | 3 |
| Mistral 7B Instruct V0.2 GGUF | 32K / 4.4 GB | 36 | 2 |
| ...andle Mistral 7B Instruct V0.2 | 32K / 14.4 GB | 63 | 0 |
| Mistral V0.3 7B Cybersecurity | 32K / 14.5 GB | 47 | 1 |
| Tamil Mistral 7B Instruct V0.1 | 32K / 14.8 GB | 130 | 14 |
| BioMistralMerged | 32K / 14.4 GB | 346 | 1 |
| Mistral 7B Instruct V0.3 GGUF | 32K / 2.7 GB | 124 | 1 |
| ...stral 7B V0.3 Chinese Chat F16 | 32K / 14.5 GB | 103 | 1 |
| ...tral 7B V0.3 Chinese Chat 4bit | 32K / 4.1 GB | 84 | 2 |
| ...tral 7B V0.3 Chinese Chat 8bit | 32K / 7.7 GB | 51 | 3 |
Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference!