LLM Name | DeepSeek R1 GGUF UD |
Repository 🤗 | https://huggingface.co/unsloth/DeepSeek-R1-GGUF-UD |
Base Model(s) | |
Updated | 2025-06-09 |
Maintainer | unsloth |
Model Type | deepseek_v3 |
Supported Languages | en |
GGUF Quantization | Yes |
Quantization Type | gguf |
Model Architecture | DeepseekV3ForCausalLM |
License | mit |
Context Length | 163840 |
Model Max Length | 163840 |
Transformers Version | 4.48.1 |
Vocabulary Size | 129280 |
Torch Data Type | bfloat16 |
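Since the repository ships GGUF quantizations, a quick way to try the model locally is to pull one quantization from the Hugging Face repo and load it with llama-cpp-python. The sketch below is illustrative only: the quantization folder name, file pattern, and shard filename are assumptions — check the repository's file listing for the variants actually published; the context length can go up to the 163,840 tokens listed above, but a smaller `n_ctx` keeps memory use manageable.

```python
# Minimal sketch: download one assumed quantization of the GGUF repo and run it.
from huggingface_hub import snapshot_download
from llama_cpp import Llama

# Fetch only files matching one quant variant (pattern is an assumption --
# adjust it to whichever quantization the repo actually provides).
local_dir = snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF-UD",
    allow_patterns=["*UD-IQ1_S*"],
)

llm = Llama(
    # Hypothetical shard path; llama.cpp loads split GGUFs from the first shard.
    model_path=f"{local_dir}/UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf",
    n_ctx=8192,       # model supports up to 163840 tokens; smaller contexts save RAM
    n_gpu_layers=-1,  # offload as many layers as fit on the GPU
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```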
Best Alternatives | Context / RAM | Downloads | Likes |
---|---|---|---|
DeepSeek R1 0528 GGUF | 160K / GB | 81173 | 138 |
DeepSeek R1 GGUF | 160K / GB | 54347 | 1075 |
DeepSeek V3.0324 GGUF | 160K / GB | 109917 | 186 |
DeepSeek V3.0324 GGUF UD | 160K / GB | 8632 | 9 |
MAI DS R1 GGUF | 160K / GB | 1834 | 6 |
R1 1776 GGUF | 160K / GB | 1293 | 101 |
DeepSeek V3 AWQ | 160K / 351.9 GB | 2888 | 34 |
DeepSeek R1 0528 AWQ | 160K / 205 GB | 6962 | 12 |
DeepSeek R1 FP4 | 160K / 226.6 GB | 45291 | 253 |
DeepSeek R1 0528 FP4 | 160K / 107 GB | 372 | 17 |