| Attribute | Value |
|---|---|
| LLM Name | Karen TheEditor V2 CREATIVE Mistral 7B |
| Repository 🤗 | https://huggingface.co/FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B |
| Model Size | 7B |
| Required VRAM | 14.4 GB |
| Updated | 2025-10-14 |
| Maintainer | FPHam |
| Model Type | mistral |
| Model Files |  |
| Model Architecture | MistralForCausalLM |
| License | llama2 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.34.1 |
| Tokenizer Class | LlamaTokenizer |
| Vocabulary Size | 32002 |
| Torch Data Type | float16 |
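Given the specs above (MistralForCausalLM architecture, float16 weights, ~14.4 GB required VRAM, 32768-token context), a minimal loading sketch with the `transformers` library might look like the following. The prompt text and generation settings are illustrative assumptions, not values from this card; check the repository README for the model's actual prompt template.

```python
# Minimal sketch for loading the full-precision checkpoint.
# Assumes a GPU with ~14.4 GB of free VRAM (per the "Required VRAM" row)
# and the `accelerate` package installed for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B"  # from the Repository row

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer per the card
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the "Torch Data Type" row
    device_map="auto",
)

# Placeholder prompt; the model's real editing prompt format is not
# documented in this table.
inputs = tokenizer(
    "Please edit this sentence for grammar: ...", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```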
Quantized versions of this model:

| Model | Likes | Downloads | VRAM |
|---|---|---|---|
| ...or V2 CREATIVE Mistral 7B GGUF | 8 | 220 | 3 GB | 
| ...or V2 CREATIVE Mistral 7B GPTQ | 2 | 6 | 4 GB | 
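For the GGUF build listed above (~3 GB, so it fits in CPU RAM or a small GPU), a sketch using the `llama-cpp-python` bindings could look like this. The repository names are truncated in the table, so the repo id and file name below are hypothetical stand-ins; substitute the actual values from the maintainer's GGUF repository.

```python
# Sketch for running a GGUF quantization with llama-cpp-python.
# NOTE: the repo id and filename below are hypothetical; the table above
# truncates the real repository name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B-GGUF",  # hypothetical id
    filename="karen_theeditor_v2_creative_mistral_7b.Q2_K.gguf",  # hypothetical file
)

llm = Llama(model_path=model_path, n_ctx=32768)  # context length from the card
out = llm("Please edit this sentence for grammar: ...", max_tokens=256)
print(out["choices"][0]["text"])
```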
| Best Alternatives | Context / VRAM | Downloads | Likes |
|---|---|---|---|
| ...Nemo Instruct 2407 Abliterated | 1000K / 24.5 GB | 141 | 18 | 
| MegaBeam Mistral 7B 512K | 512K / 14.4 GB | 8908 | 50 | 
| SpydazWeb AI HumanAI RP | 512K / 14.4 GB | 17 | 1 | 
| SpydazWeb AI HumanAI 002 | 512K / 14.4 GB | 18 | 1 | 
| ...daz Web AI ChatML 512K Project | 512K / 14.5 GB | 12 | 0 | 
| MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 3779 | 16 | 
| MegaBeam Mistral 7B 300K | 282K / 14.4 GB | 8082 | 16 | 
| Hebrew Mistral 7B 200K | 256K / 30 GB | 1316 | 15 | 
| Astral 256K 7B V2 | 250K / 14.4 GB | 5 | 0 | 
| Astral 256K 7B | 250K / 14.4 GB | 5 | 0 | 