Mistral Trismegistus 7B Mistral 7B Instruct V0.1 GGUF by MaziyarPanahi


Tags: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 7b, 8-bit, Autotrain compatible, Base model:maziyarpanahi/mistr..., Base model:mistralai/mistral-7..., Base model:quantized:maziyarpa..., Conversational, Distillation, En, Endpoints compatible, Finetuned, Gguf, Gpt4, Instruct, Merge, Mistral, Mistral-7b, Mistralai/mistral-7b-instruct-..., Pytorch, Quantized, Region:us, Safetensors, Synthetic data, Teknium/mistral-trismegistus-7...

Mistral Trismegistus 7B Mistral 7B Instruct V0.1 GGUF Benchmarks

nn.n%: how the model compares to the reference models Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mistral Trismegistus 7B Mistral 7B Instruct V0.1 GGUF (MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF)

Mistral Trismegistus 7B Mistral 7B Instruct V0.1 GGUF Parameters and Internals

Model Type: text generation
Additional Notes: GGUF-format model with multiple quantization options available for diverse deployment environments.
Supported Languages: en (high)
Training Details:
  Methodology: distillation
  Context Length: 32768
  Hardware Used: GPU acceleration
  Model Architecture: transformers
Input Output:
  Input Format: prompt-based input
  Accepted Modalities: text
  Output Format: text completion
  Performance Tips: Use GPU acceleration for improved performance (see the inference sketch below).
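
The input/output notes above describe a plain prompt-in, text-out workflow with GPU acceleration. A minimal sketch of that workflow with llama-cpp-python follows; the local quant filename, the GPU-offload setting, and the [INST] ... [/INST] prompt template are assumptions, based on common Mistral 7B Instruct v0.1 conventions rather than anything stated on this page.

# Minimal sketch (not from the model card): run a local GGUF quant of this model
# with llama-cpp-python. The filename, offload value, and prompt template are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf",  # assumed local filename
    n_ctx=32768,       # context length listed above
    n_gpu_layers=-1,   # offload all layers to the GPU, per the performance tip
)

prompt = "[INST] Explain the main themes of Hermetic philosophy in three sentences. [/INST]"
result = llm(prompt, max_tokens=256, temperature=0.7)
print(result["choices"][0]["text"])

Any of the quant files listed under Model Files below can stand in for the assumed filename; smaller files need less VRAM at some cost in output quality.
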
LLM Name: Mistral Trismegistus 7B Mistral 7B Instruct V0.1 GGUF
Repository: https://huggingface.co/MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF (see the download sketch after this table)
Model Name: Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF
Model Creator: MaziyarPanahi
Base Model(s): MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1
Model Size: 7B
Required VRAM: 2.7 GB
Updated: 2025-09-14
Maintainer: MaziyarPanahi
Model Type: mistral
Instruction-Based: Yes
Model Files: 2.7 GB, 3.8 GB, 3.5 GB, 3.2 GB, 4.4 GB, 4.1 GB, 5.1 GB, 5.0 GB, 5.9 GB, 7.7 GB
GGUF Quantization: Yes
Quantization Type: gguf
Model Architecture: AutoModel
License: apache-2.0
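
The quant files themselves live in the repository linked above. A minimal sketch of fetching one with the huggingface_hub client follows; the exact filename is an assumption, since only file sizes, not file names, are listed here.

# Minimal sketch: download one quant file from the repository listed above.
# The filename is an assumption; check the repo's file listing for exact names.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1-GGUF",
    filename="Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.1.Q4_K_M.gguf",  # assumed quant name
)
print(local_path)  # path to the cached GGUF file, ready to pass to a llama.cpp-based loader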

Best Alternatives to Mistral Trismegistus 7B Mistral 7B Instruct V0.1 GGUF

Best Alternatives | Context / RAM | Downloads | Likes
Pixel | 8K / 4.4 GB | 17 | 0
Mistral 7B Instruct V0.3 GGUF | 0K / 1.6 GB | 147354 | 112
Qwen2 7B Instruct GGUF | 0K / 1.9 GB | 119760 | 11
Mistral 7B Instruct V0.2 GGUF | 0K / 3.1 GB | 108374 | 459
...hemeng Qwen Math 7b 24 1 100 | 10K / 15.2 GB | 30 | 0
Mistral 7B Instruct V0.1 GGUF | 0K / 3.1 GB | 185168 | 602
Qwen2 7B Instruct V0.6 GGUF | 0K / 4.5 GB | 13522 | 0
Mistral 7B Instruct V0.3 GGUF | 0K / 2.7 GB | 16559 | 10
Qwen2 7B Instruct V0.1 GGUF | 0K / 4.5 GB | 9714 | 0
Qwen2 7B Instruct V0.7 GGUF | 0K / 4.5 GB | 9530 | 0

Rank the Mistral Trismegistus 7B Mistral 7B Instruct V0.1 GGUF Capabilities

🆘 Have you tried this model? Rate its performance. This feedback helps the ML community identify the most suitable models for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124