| LLM Name | Llama 2 70B GPTQ |
| --- | --- |
| Repository 🤗 | https://huggingface.co/localmodels/Llama-2-70B-GPTQ |
| Model Size | 70b |
| Required VRAM | 35.3 GB |
| Updated | 2025-09-23 |
| Maintainer | localmodels |
| Model Type | llama |
| Model Files | |
| GPTQ Quantization | Yes |
| Quantization Type | gptq\|4bit |
| Model Architecture | LlamaForCausalLM |
| Context Length | 2048 |
| Model Max Length | 2048 |
| Transformers Version | 4.32.0.dev0 |
| Tokenizer Class | LlamaTokenizer |
| Beginning of Sentence Token | `<s>` |
| End of Sentence Token | `</s>` |
| Unk Token | `<unk>` |
| Vocabulary Size | 32000 |
| Torch Data Type | float16 |
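
This checkpoint is a 4-bit GPTQ quantization of Llama 2 70B (LlamaForCausalLM, 2048-token context). Below is a minimal sketch of how a GPTQ repo like this one is typically loaded with 🤗 transformers (≥ 4.32) and a GPTQ backend such as auto-gptq/optimum; the device settings, prompt, and generation parameters are illustrative assumptions and are not taken from the model card.

```python
# Minimal loading sketch (assumes transformers >= 4.32, accelerate, and an
# auto-gptq/optimum GPTQ backend are installed; settings below are examples).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "localmodels/Llama-2-70B-GPTQ"

# LlamaTokenizer is resolved from the repo's tokenizer config
# (BOS <s>, EOS </s>, UNK <unk>, 32000-token vocabulary).
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The GPTQ quantization config stored in the repo is picked up automatically;
# device_map="auto" spreads the ~35.3 GB of 4-bit weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)

# Keep prompt plus generated tokens within the 2048-token context window.
inputs = tokenizer("Llama 2 is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```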
| Best Alternatives | Context / RAM | Downloads | Likes |
| --- | --- | --- | --- |
| ...B Instruct AutoRound GPTQ 4bit | 128K / 39.9 GB | 1271 | 6 |
| ...B Instruct AutoRound GPTQ 4bit | 128K / 39.9 GB | 1059 | 0 |
| ...ama 3.1 70B Instruct Gptq 4bit | 128K / 39.9 GB | 24 | 4 |
| Opus V1.2 70B Marlin | 32K / 36.4 GB | 5 | 0 |
| MoMo 70B Lora 1.8.4 DPO GPTQ | 32K / 41.3 GB | 8 | 1 |
| MoMo 70B Lora 1.8.6 DPO GPTQ | 32K / 41.3 GB | 5 | 1 |
| Midnight Miqu 70B V1.5 GPTQ32G | 31K / 40.7 GB | 189 | 4 |
| Tess 70B V1.6 Marlin | 31K / 36.3 GB | 7 | 1 |
| ...Midnight Miqu 70B V1.0 GPTQ32G | 31K / 40.7 GB | 7 | 2 |
| Senku 70B GPTQ 4bit | 31K / 36.7 GB | 6 | 1 |