Meta Llama 3.1 70B Instruct Gptq 4bit by ModelCloud


Meta Llama 3.1 70B Instruct Gptq 4bit is an open-source language model by ModelCloud. Key figures: 70B parameters, 39.9 GB required VRAM, 128K context, meta license, GPTQ-quantized, instruction-tuned, LLM Explorer Score: 0.15.

Tags: 4-bit, 4bit, 70b, Conversational, Endpoints compatible, Gptq, Gptqmodel, Instruct, Int4, Llama, Llama-3.1, Modelcloud, Quantized, Region:us, Safetensors, Sharded, Tensorflow

Meta Llama 3.1 70B Instruct Gptq 4bit Benchmarks

Benchmark scores indicate how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), and GPT-4 ("gpt4").

Meta Llama 3.1 70B Instruct Gptq 4bit Parameters and Internals

LLM Name: Meta Llama 3.1 70B Instruct Gptq 4bit
Repository 🤗: https://huggingface.co/ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit
Model Size: 70b
Required VRAM: 39.9 GB
Updated: 2026-04-22
Maintainer: ModelCloud
Model Type: llama
Instruction-Based: Yes
Model Files: 8.0 GB (1-of-5), 8.0 GB (2-of-5), 8.0 GB (3-of-5), 8.0 GB (4-of-5), 7.9 GB (5-of-5)
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: LlamaForCausalLM
License: meta
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.44.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|finetune_right_pad_id|>
Vocabulary Size: 128256
Torch Data Type: bfloat16
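The shard sizes and the "Required VRAM" figure are consistent with 4-bit packed weights plus quantization metadata. A back-of-envelope sketch (assumptions: a round 70e9 parameter count and decimal gigabytes; the gap over the pure 4-bit weight size is attributed to GPTQ group scales/zero-points and higher-precision embedding/output tensors):

```python
def packed_weight_gb(n_params: float, bits: int = 4) -> float:
    """Size of the packed low-bit weights alone (no scales, no embeddings)."""
    return n_params * bits / 8 / 1e9

# Shard sizes from the "Model Files" row above.
shards_gb = [8.0, 8.0, 8.0, 8.0, 7.9]
total_gb = sum(shards_gb)          # 39.9 GB, matching the "Required VRAM" row

# Roughly 4.9 GB beyond the bare 35 GB of packed 4-bit weights.
overhead_gb = total_gb - packed_weight_gb(70e9)
```

This is a rough consistency check, not an exact accounting; actual overhead depends on the GPTQ group size and which tensors are left unquantized.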

Best Alternatives to Meta Llama 3.1 70B Instruct Gptq 4bit

Best Alternatives                   Context / RAM    Downloads / Likes
...B Instruct AutoRound GPTQ 4bit   128K / 39.9 GB   1946
...B Instruct AutoRound GPTQ 4bit   128K / 39.9 GB   470
Meta Llama 3 70B Instruct GPTQ      8K / 39.8 GB     176319
...ama 3 Taiwan 70B Instruct GPTQ   8K / 39.8 GB     62
Meta Llama 3 70B Instruct GPTQ      8K / 39.8 GB     63916
...g Llama 3 70B Instruct GPTQ 4B   8K / 39.8 GB     60
...g Llama 3 70B Instruct GPTQ 8B   8K / 74.4 GB     11
...erkrautLM 70B Instruct GPTQ 8B   8K / 74.4 GB     61
...a Llama 3 70B Instruct GPTQ 8B   8K / 74.4 GB     50
...ta Llama 3 70B Instruct Marlin   8K / 39.5 GB     3046
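As a quick sanity check on the 4-bit vs. 8-bit labels, dividing file size by parameter count gives effective storage bits per weight. A minimal sketch, assuming a round 70e9 parameters and decimal gigabytes (both approximations):

```python
def bits_per_param(file_size_gb: float, n_params: float = 70e9) -> float:
    """Effective storage bits per weight, assuming decimal GB."""
    return file_size_gb * 1e9 * 8 / n_params

# 4-bit GPTQ checkpoints land a bit above 4 bits/weight once group
# scales and zero-points are included; 8-bit ones a bit above 8.
four_bit_est = bits_per_param(39.8)   # roughly 4.5
eight_bit_est = bits_per_param(74.4)  # roughly 8.5
```

The extra half-bit or so per weight is the usual GPTQ metadata overhead, which is why a "4bit" 70B checkpoint is ~40 GB rather than 35 GB.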



Original data from HuggingFace, OpenCompass and various public git repos.