Mistral Small Spellbound StoryWriter 22B Instruct 0.2 Chkpt 200 16 Bit by hf-100


Mistral Small Spellbound StoryWriter 22B Instruct 0.2 Chkpt 200 16 Bit is an open-source language model by hf-100. Features: 22B-parameter LLM, VRAM: 44.7 GB, Context: 128K, License: apache-2.0, Quantized, Instruction-Based, LLM Explorer Score: 0.17.

  4bit   Conversational   En   Endpoints compatible   Instruct   Mistral   Quantized   Region:us   Safetensors   Sharded   Tensorflow   Trl   Unsloth
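Given the repository id and checkpoint details on this card (sharded safetensors, bfloat16, MistralForCausalLM), loading with Hugging Face transformers might look like the sketch below. The repo id is taken from the card; the `device_map` and dtype choices are illustrative assumptions, and the model itself needs roughly 44.7 GB of VRAM, so the loader is wrapped in a function rather than run at import time.

```python
# Repo id as listed on this card.
MODEL_ID = "hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit"

def load_model():
    """Sketch: load the sharded checkpoint with transformers.

    Assumes transformers and torch are installed and ~44.7 GB of VRAM is
    available at bfloat16. Imports are deferred so merely importing this
    module does not require the heavy dependencies.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.bfloat16,  # matches the card's "Torch Data Type"
        device_map="auto",           # spread the 9 shards across available devices
    )
    return tokenizer, model
```

A 4-bit quantized load (the card lists "Quantization Type: 4bit") would swap in `load_in_4bit=True` via bitsandbytes at a correspondingly smaller memory footprint.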

Mistral Small Spellbound StoryWriter 22B Instruct 0.2 Chkpt 200 16 Bit Benchmarks

nn.n% indicates how the model compares to the reference models: Anthropic Sonnet 3.5 ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Mistral Small Spellbound StoryWriter 22B Instruct 0.2 Chkpt 200 16 Bit (hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit)

Mistral Small Spellbound StoryWriter 22B Instruct 0.2 Chkpt 200 16 Bit Parameters and Internals

LLM Name: Mistral Small Spellbound StoryWriter 22B Instruct 0.2 Chkpt 200 16 Bit
Repository 🤗: https://huggingface.co/hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit
Base Model(s): unsloth/mistral-small-instruct-2409-bnb-4bit
Model Size: 22b
Required VRAM: 44.7 GB
Updated: 2026-03-29
Maintainer: hf-100
Model Type: mistral
Instruction-Based: Yes
Model Files: 4.9 GB (1-of-9), 5.0 GB (2-of-9), 5.0 GB (3-of-9), 4.9 GB (4-of-9), 5.0 GB (5-of-9), 5.0 GB (6-of-9), 4.9 GB (7-of-9), 5.0 GB (8-of-9), 5.0 GB (9-of-9)
Supported Languages: en
Quantization Type: 4bit
Model Architecture: MistralForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.46.2
Tokenizer Class: LlamaTokenizer
Padding Token: [control_748]
Vocabulary Size: 32768
Torch Data Type: bfloat16
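The card's numbers are internally consistent: the nine shard sizes sum to the listed 44.7 GB of required VRAM, which is about what ~22B parameters at 2 bytes each (bfloat16) predicts, and the 131072-token context is exactly 128K. A quick sanity check using only figures from the card:

```python
# Shard sizes as listed under "Model Files" (GB).
shard_sizes_gb = [4.9, 5.0, 5.0, 4.9, 5.0, 5.0, 4.9, 5.0, 5.0]
total_gb = round(sum(shard_sizes_gb), 1)  # matches "Required VRAM: 44.7 GB"

# Rough weight-memory estimate: ~22B parameters * 2 bytes (bfloat16).
est_gb = 22e9 * 2 / 1e9  # 44.0, close to the shard total

# Context length from the card: 131072 tokens = 128K.
context_k = 131072 // 1024  # 128
```

Note the estimate covers weights only; actual inference also needs memory for the KV cache and activations, which grows with context length.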

Best Alternatives to Mistral Small Spellbound StoryWriter 22B Instruct 0.2 Chkpt 200 16 Bit

Best Alternatives | Context / RAM | Downloads / Likes
MwM 22B Instruct | 128K / 44.7 GB | 60
MS Schisandra 22B V0.2 | 128K / 44.7 GB | 109
...ntheon RP Pure 1.6.2 22B Small | 128K / 44.7 GB | 1133
MS Meadowlark 22B | 128K / 44.7 GB | 2216
... V4x1.6.2RP Cydonia VXXX 22B 8 | 128K / 44.7 GB | 55
MS Inky 2409 22B | 128K / 44.7 GB | 70
Beeper King 22B | 128K / 44.7 GB | 17
MS Quadrosiac 2409 22B | 128K / 44.7 GB | 90
MS Fujin 2409 22B | 128K / 44.7 GB | 80
... V4x1.6.2RP Cydonia VXXX 22B 6 | 128K / 44.7 GB | 53
Note: green Score (e.g. "73.2") means that the model is better than hf-100/Mistral-Small-Spellbound-StoryWriter-22B-instruct-0.2-chkpt-200-16-bit.

Rank the Mistral Small Spellbound StoryWriter 22B Instruct 0.2 Chkpt 200 16 Bit Capabilities

🆘 Have you tried this model? Rate its performance. This feedback will help the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20260328a