Open Gpt4 8x7B GPTQ is an open-source language model maintained by TheBloke. Key specs: 46.7B-parameter Mixture-of-Experts (MoE) LLM, GPTQ-quantized, 23.8 GB required VRAM, 32K context length, apache-2.0 license, LLM Explorer Score: 0.11.
| Property | Value |
|---|---|
| LLM Name | Open Gpt4 8x7B GPTQ |
| Repository 🤗 | https://huggingface.co/TheBloke/Open_Gpt4_8x7B-GPTQ |
| Model Name | Open Gpt4 8X7B |
| Model Creator | rombodawg |
| Base Model(s) | |
| Model Size | 46.7b |
| Required VRAM | 23.8 GB |
| Updated | 2026-01-24 |
| Maintainer | TheBloke |
| Model Type | mixtral |
| Model Files | |
| GPTQ Quantization | Yes |
| Quantization Type | gptq |
| Model Architecture | MixtralForCausalLM |
| License | apache-2.0 |
| Context Length | 32768 |
| Model Max Length | 32768 |
| Transformers Version | 4.37.0.dev0 |
| Tokenizer Class | LlamaTokenizer |
| Vocabulary Size | 32000 |
| Torch Data Type | float16 |
| Best Alternatives | Context / VRAM | Downloads | Likes |
|---|---|---|---|
| ...ixtral 8x7B Instruct V0.1 GPTQ | 32K / 23.8 GB | 306636 | 141 |
| Mixtral 8x7B V0.1 GPTQ | 32K / 23.8 GB | 115 | 127 |
| Dolphin 2.5 Mixtral 8x7b GPTQ | 32K / 23.8 GB | 45 | 112 |
| ...Hermes 2 Mixtral 8x7B DPO GPTQ | 32K / 23.8 GB | 16 | 26 |
| ...Hermes 2 Mixtral 8x7B SFT GPTQ | 32K / 23.8 GB | 9 | 11 |
| Bagel DPO 8x7b V0.2 GPTQ | 32K / 23.8 GB | 5 | 2 |
| ...xtral Instruct 8x7b Zloss GPTQ | 32K / 23.8 GB | 30 | 2 |
| Open Gpt4 8x7B V0.2 GPTQ | 32K / 23.8 GB | 5 | 6 |
| ....1 LimaRP ZLoss DARE TIES GPTQ | 32K / 23.8 GB | 4 | 6 |
| Sensualize Mixtral GPTQ | 32K / 23.8 GB | 5 | 5 |