LLM Name | Mergekit Passthrough Yqhuxcv GGUF |
Repository 🤗 | https://huggingface.co/MaziyarPanahi/mergekit-passthrough-yqhuxcv-GGUF
Model Name | mergekit-passthrough-yqhuxcv-GGUF |
Model Creator | mergekit-community |
Base Model(s) | |
Model Size | 70b |
Required VRAM | 16.9 GB |
Updated | 2025-08-19 |
Maintainer | MaziyarPanahi |
Model Type | mistral |
Model Files | |
GGUF Quantization | Yes |
Quantization Type | gguf |
Model Architecture | AutoModel |
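
Since the repository ships GGUF quantized files, one way to try the model locally is through `llama-cpp-python`, downloading a quant file with `huggingface_hub` first. The sketch below is a minimal, hedged example: the GGUF filename shown is an assumption and should be checked against the repository's actual file list, and `n_ctx` / `n_gpu_layers` depend on your available RAM/VRAM.

```python
# Minimal sketch: fetch one GGUF file from the repo and run it with llama-cpp-python.
# The filename below is an assumption -- verify it against the repository's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "MaziyarPanahi/mergekit-passthrough-yqhuxcv-GGUF"
filename = "mergekit-passthrough-yqhuxcv.Q4_K_M.gguf"  # hypothetical quant file name

# Download the quantized weights into the local Hugging Face cache
model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# Load the model; adjust context size and GPU offload to fit your hardware
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

output = llm("Explain what a passthrough merge is in one sentence.", max_tokens=128)
print(output["choices"][0]["text"])
```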
Best Alternatives | Context / RAM | Downloads | Likes
---|---|---|---
CodeLlama 70B Instruct GGUF | 0K / 25.5 GB | 2824 | 59 |
KafkaLM 70B German V0.1 GGUF | 0K / 25.5 GB | 1637 | 45 |
CodeLlama 70B Python GGUF | 0K / 25.5 GB | 1446 | 43 |
Meta Llama 3 70B Instruct GGUF | 0K / 26.4 GB | 92 | 4 |
DAD Model V2 70B Q4 | 0K / 42.5 GB | 6 | 0 |
CodeLlama 70B Hf GGUF | 0K / 25.5 GB | 588 | 42 |
Llama 2 70B Guanaco QLoRA GGUF | 0K / 29.3 GB | 12 | 0 |
Swallow 70B Instruct GGUF | 0K / 29.4 GB | 558 | 9 |
Euryale 1.3 L2 70B GGUF | 0K / 29.3 GB | 2090 | 17 |
Aurora Nights 70B V1.0 GGUF | 0K / 29.3 GB | 171 | 8 |
Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference!