Meta Llama 3 8B GPTQ Instruct is an open-source, instruction-tuned language model maintained by boadisamson. Key features: 8B parameters, 5.7 GB required VRAM, 8K context, GPTQ-quantized. Benchmarks: HF Score 66.3, LLM Explorer Score 0.14, ARC 59.9, HellaSwag 80.0, MMLU 64.8, TruthfulQA 51.9, WinoGrande 75.6, GSM8K 65.7.
| LLM Name | Meta Llama 3 8B GPTQ Instruct |
|---|---|
| Repository 🤗 | https://huggingface.co/boadisamson/Meta-Llama-3-8B-GPTQ-Instruct |
| Base Model(s) | |
| Model Size | 8b |
| Required VRAM | 5.7 GB |
| Updated | 2026-04-12 |
| Maintainer | boadisamson |
| Model Type | llama |
| Instruction-Based | Yes |
| Model Files | |
| GPTQ Quantization | Yes |
| Quantization Type | gptq\|4bit |
| Model Architecture | LlamaForCausalLM |
| Context Length | 8192 |
| Model Max Length | 8192 |
| Transformers Version | 4.39.3 |
| Vocabulary Size | 128256 |
| Torch Data Type | float16 |
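Based on the specs above, here is a minimal loading sketch. It assumes the checkpoint loads through Transformers' built-in GPTQ integration (backed by `optimum`/`auto-gptq`) on a GPU with roughly 6 GB of free VRAM; the repo id and the 8192-token window are taken from the tables on this page, while the helper names (`fits_context`, `load_model`) are illustrative, not part of the model's API.

```python
MODEL_ID = "boadisamson/Meta-Llama-3-8B-GPTQ-Instruct"  # repo id from this card
MAX_CONTEXT = 8192  # context length / model max length from the spec table


def fits_context(n_prompt_tokens: int, n_new_tokens: int) -> bool:
    """True if the prompt plus the generation budget stays within the 8K window."""
    return n_prompt_tokens + n_new_tokens <= MAX_CONTEXT


def load_model():
    """Load the GPTQ checkpoint (assumption: `transformers` with GPTQ support
    installed and enough GPU VRAM; import is deferred so the helper above
    works without the heavy dependency)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    return model, tokenizer
```

Typical usage would be `model, tok = load_model()`, then `model.generate(**tok(prompt, return_tensors="pt").to(model.device), max_new_tokens=128)`, checking `fits_context` first so the request never exceeds the 8K limit.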
| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| ... 8B Instruct 262K 4bit GPTQ 02 | 256K / 5.7 GB | 6 | 0 |
| ...a 3 8B Instruct 262K 4bit GPTQ | 256K / 5.8 GB | 2 | 1 |
| ...lama 3.1 8B Instruct GPTQ INT4 | 128K / 5.8 GB | 25538 | 42 |
| ...Instruct 80K Qlora Merged GPTQ | 80K / 5.8 GB | 6 | 0 |
| ...oLeo Instruct 8B 32K V0.1 GPTQ | 64K / 5.7 GB | 6 | 0 |
| Meta Llama 3 8B Instruct GPTQ | 8K / 5.7 GB | 1114 | 6 |
| Meta Llama 3 8B Instruct GPTQ | 8K / 5.8 GB | 6 | 0 |
| ...ta Llama 3 8B Instruct GPTQ 8B | 8K / 9.2 GB | 7 | 0 |
| Meta Llama 3 8B Instruct GPTQ | 8K / 5.8 GB | 15 | 1 |
| ...truct Abliterated V3 GPTQ 4bit | 8K / 5.8 GB | 18 | 0 |