Flammen8 Mistral 7B GGUF Q4 K M is an open-source language model by nbeerbower: a 7B Mistral-based LLM quantized to GGUF Q4_K_M, requiring roughly 4.4 GB of VRAM, with a 32K context window and an Apache 2.0 license (LLM Explorer Score: 0.13). A minimal loading sketch follows the specification table below.
| Field | Value |
|---|---|
| Additional Notes | |
| LLM Name | Flammen8 Mistral 7B GGUF Q4 K M |
| Repository 🤗 | https://huggingface.co/nbeerbower/flammen8-mistral-7B-GGUF-Q4_K_M |
| Base Model(s) | |
| Model Size | 7b |
| Required VRAM | 4.4 GB |
| Updated | 2025-11-10 |
| Maintainer | nbeerbower |
| Model Type | mistral |
| Model Files | |
| GGML Quantization | Yes |
| GGUF Quantization | Yes |
| Quantization Type | gguf|ggml|q4|q4_k |
| Model Architecture | MistralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.37.1 |
| Tokenizer Class | LlamaTokenizer |
| Padding Token | `<unk>` |
| Vocabulary Size | 32000 |
| Torch Data Type | bfloat16 |
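
Because the model ships as a single GGUF file, it can be run locally with llama.cpp-compatible tooling. The snippet below is a minimal sketch using `llama-cpp-python` and `huggingface_hub`; the GGUF filename passed to `hf_hub_download` is an assumption and should be checked against the repository's file listing.

```python
# Minimal sketch: download and run the Q4_K_M GGUF with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="nbeerbower/flammen8-mistral-7B-GGUF-Q4_K_M",
    filename="flammen8-mistral-7B-Q4_K_M.gguf",  # assumed filename; verify in the repo
)

llm = Llama(
    model_path=model_path,
    n_ctx=32768,      # matches the model's 32K context length
    n_gpu_layers=-1,  # offload all layers; ~4.4 GB VRAM at Q4_K_M
)

out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["\n"])
print(out["choices"][0]["text"])
```

The `n_ctx` and VRAM figures above simply mirror the specification table; smaller values can be used on constrained hardware.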
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Mistrilitary 7B | 32K / 7.7 GB | 99 | 22 |
| Mistral Scary Story Finetune | 32K / 14.5 GB | 39 | 0 |
| Mistral 7B V0.2 Csn SFT | 32K / 14.4 GB | 5 | 0 |
| Instruct 16bit | 32K / 0.4 GB | 5 | 0 |
| BioMistral 7B GGUF | 32K / 1 GB | 1085 | 19 |
| Merlinite 7B Ocp4.15 V0.3 | 32K / 14.5 GB | 5 | 0 |
| Shiftdocs 7B Ocp4.15 V0.3 | 32K / 14.5 GB | 13 | 0 |
| AzzurroQuantized | 32K / 4.4 GB | 333 | 4 |
| Merlinite 7B Ocp4.15 V0.1 | 32K / 4.4 GB | 8 | 0 |
| Filiberto 7B Instruct Exp1 | 32K / 14.5 GB | 19 | 0 |