| Property | Value |
|---|---|
| LLM Name | Mergekit Slerp Oskyrzi GGUF |
| Repository 🤗 | https://huggingface.co/MaziyarPanahi/mergekit-slerp-oskyrzi-GGUF |
| Model Name | mergekit-slerp-oskyrzi-GGUF |
| Model Creator | mergekit-community |
| Base Model(s) | |
| Model Size | 11b |
| Required VRAM | 2.7 GB |
| Updated | 2025-09-23 |
| Maintainer | MaziyarPanahi |
| Model Type | mistral |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | gguf |
| Model Architecture | AutoModel |
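
Since the repository ships GGUF quantizations of a Mistral-type model, it can be loaded locally with `llama-cpp-python`. The sketch below is a minimal, hedged example: the quantization filename glob (`*Q4_K_M.gguf`) and the context size are assumptions, not values taken from the repository; pick whichever quant file actually exists there.

```python
# Minimal sketch: load a GGUF quant of this repo with llama-cpp-python.
# Assumes `llama-cpp-python` is installed and the repo contains a Q4_K_M
# quantization file (the filename pattern below is an assumption).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/mergekit-slerp-oskyrzi-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quant choice; match a file present in the repo
    n_ctx=4096,               # context window; adjust to your available RAM/VRAM
)

output = llm("Q: What is the GGUF format used for? A:", max_tokens=64)
print(output["choices"][0]["text"])
```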
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| Synaptica GGUF | 0K / 4 GB | 134 | 0 | 
| YorkShire11 GGUF | 0K / 2.7 GB | 124 | 0 | 
| Fimburs11V3 GGUF | 0K / 4 GB | 68 | 0 | 
| Llama 3 11B Instruct V0.1 GGUF | 0K / 4.5 GB | 648 | 7 | 
| Hermes 2 Pro 11B GGUF | 0K / 4.2 GB | 52 | 4 | 
| Velara 11B V2 GGUF | 0K / 4.8 GB | 281 | 9 | 
| Synthia V3.0 11B GGUF | 0K / 4.5 GB | 414 | 9 | 
| ... Mistral 7B Instruct V0.1 GGUF | 0K / 2.7 GB | 52 | 1 | 
| ...al 7B Instruct V0.2 Slerp GGUF | 0K / 2.7 GB | 85 | 0 | 
| ...al 7B Instruct V0.2 Slerp GGUF | 0K / 2.7 GB | 84 | 0 | 