| Property | Value |
|---|---|
| LLM Name | LWM 14B Text Chat 1M GGUF |
| Repository 🤗 | https://huggingface.co/MaziyarPanahi/LWM-14b-Text-Chat-1M-GGUF |
| Model Name | LWM-14b-Text-Chat-1M-GGUF |
| Model Creator | mergekit-community |
| Base Model(s) | |
| Model Size | 14b |
| Required VRAM | 2.5 GB |
| Updated | 2025-09-23 |
| Maintainer | MaziyarPanahi |
| Model Type | mistral |
| Model Files | |
| GGUF Quantization | Yes |
| Quantization Type | gguf |
| Model Architecture | AutoModel |
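
The listing only records that the weights are distributed as GGUF quantizations, so the snippet below is a minimal sketch of running one of the files locally with llama-cpp-python. The quantization filename, context size, and GPU offload settings are assumptions for illustration, not values taken from the repository; check the repo's file list for the actual quantization names.

```python
# Minimal sketch: load a GGUF quantization of LWM-14b-Text-Chat-1M with llama-cpp-python.
# The filename below is hypothetical -- replace it with the file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./LWM-14b-Text-Chat-1M.Q4_K_M.gguf",  # hypothetical quantization file
    n_ctx=4096,        # context window; raise if your RAM allows
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Smaller quantizations trade answer quality for lower memory use, which is why the listed VRAM requirement can be far below the full-precision size of a 14B model.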
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ....5 Coder 14B Instruct F16 GGUF | 0K / 29.5 GB | 58 | 2 |
| CausalLM 14B GGUF | 0K / 8.2 GB | 3217 | 189 |
| Phi 4 14B 1M RRP V1 Lora | 0K / 0.3 GB | 0 | 1 |
| ....5 14B Instruct Abliterated V2 | 0K / 0.6 GB | 0 | 3 |
| RWKV V4 Raven 14B One State | 0K / 28.3 GB | 4 | 2 |
| Ct2fast Docsgpt 14B | 0K / 13 GB | 6 | 1 |