Candle Tiny Mistral by DanielClough


Tags: autotrain-compatible · Dataset: openaccess-ai-collecti... · en · endpoints-compatible · gguf · mistral · q4 · quantized · region:us · sharded · tensorflow

Candle Tiny Mistral Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Candle Tiny Mistral Parameters and Internals

Model Type 
text-generation
Additional Notes 
This repo includes `.gguf` files built for Hugging Face's Candle framework; they will not work with `llama.cpp`.
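Both Candle and llama.cpp consume `.gguf` files, so the container format alone does not guarantee interoperability; a quick sanity check before loading is to read the GGUF header. A minimal sketch using only the standard library (the `header` bytes below are a hand-built stand-in, not taken from this repo; the layout shown is for GGUF version 2 and later, where counts are 64-bit):

```python
import struct

def parse_gguf_header(buf: bytes) -> dict:
    """Parse the fixed-size prefix of a GGUF file (v2+):
    4-byte magic b'GGUF', little-endian u32 version,
    then u64 tensor count and u64 metadata KV count."""
    magic = buf[:4]
    if magic != b"GGUF":
        raise ValueError(f"not a GGUF file (magic={magic!r})")
    (version,) = struct.unpack_from("<I", buf, 4)
    n_tensors, n_kv = struct.unpack_from("<QQ", buf, 8)
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Hand-built example header: version 3, 2 tensors, 5 metadata entries.
header = b"GGUF" + struct.pack("<IQQ", 3, 2, 5)
print(parse_gguf_header(header))
```

To inspect a real file, replace `header` with the first 24 bytes of the `.gguf` file (`open(path, "rb").read(24)`). Incompatibilities between Candle and llama.cpp typically live in the metadata and tensor naming, not in this header.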
Supported Languages 
English (en)
LLM Name: Candle Tiny Mistral
Repository: https://huggingface.co/DanielClough/Candle_tiny-mistral
Required VRAM: 0.4 GB
Updated: 2025-07-31
Maintainer: DanielClough
Model Type: mistral
Model Files: 0.4 GB, 0.1 GB, 0.1 GB, 0.1 GB, 0.1 GB, 0.1 GB, 0.1 GB, 0.2 GB, 0.1 GB, 0.2 GB, 0.2 GB, 0.2 GB, 0.2 GB, 0.4 GB (1-of-1)
Supported Languages: en
GGUF Quantization: Yes
Quantization Type: q4 | gguf
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.34.0.dev0
Vocabulary Size: 32000
Torch Data Type: bfloat16
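The sizes above can be cross-checked with simple arithmetic: at bfloat16 (2 bytes per weight), 0.4 GB of required VRAM implies roughly 0.2 B parameters, and a 4-bit quantization of the same weights needs about a quarter of that. A back-of-the-envelope sketch (the 0.4 GB and bfloat16 figures come from the table above; the ~4.5 bits per weight for q4_0-style quants is an approximation that accounts for block scales):

```python
def params_from_size(size_gb: float, bytes_per_param: float) -> float:
    """Estimate parameter count (in billions) from a weight-file size."""
    return size_gb * 1e9 / bytes_per_param / 1e9

# bfloat16 = 2 bytes per parameter; the table lists 0.4 GB required VRAM.
params_b = params_from_size(0.4, 2.0)
print(f"~{params_b:.2f}B parameters")  # ~0.20B

# q4_0-style quantization averages ~4.5 bits (0.5625 bytes) per weight,
# so the same model quantized to q4 would be roughly:
q4_size_gb = params_b * 0.5625
print(f"~{q4_size_gb:.2f} GB as q4")
```

The same estimate applied in reverse (file size → parameter count) is a quick way to sanity-check any quantized GGUF download against its claimed base model size.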

Best Alternatives to Candle Tiny Mistral

Best Alternatives | Context / RAM | Downloads | Likes
...istral Nemo Instruct 2407 GGUF | 1000K / 4.8 GB | 4912 | 66
...istral Nemo Instruct 2407 GGUF | 1000K / 4.8 GB | 954 | 8
Devstral Small 2505 GGUF | 128K / 0.9 GB | 21830 | 101
...istral C64Wizard Instruct GGUF | 32K / 4.4 GB | 11 | 0
...dle Snorkel Mistral PairRM DPO | 32K / 14.4 GB | 24 | 0
Mistral 7B1.0 GGUF | 32K / 5.1 GB | 5 | 0
Equall Saul Instruct V1 | 32K / 5.1 GB | 6 | 0
Candle MistralTrix V1 | 32K / 17.9 GB | 85 | 0
AI G Expander V5 GGUF | 32K / 4.4 GB | 5 | 0
MHENN4 GGUF | 32K / 4.4 GB | 7 | 0
Note: a green score (e.g. "73.2") means that the model is better than DanielClough/Candle_tiny-mistral.

Rank the Candle Tiny Mistral Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs.

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  

Which open-source LLMs or SLMs are you searching for? 50299 listed in total.

Original data from Hugging Face, OpenCompass, and various public git repos.
Release v20241124