Gpt4 Alpaca Lora Mlp 65B GPTQ by TheBloke


Gpt4 Alpaca Lora Mlp 65B GPTQ is an open-source language model released by TheBloke. Features: 65B parameters, VRAM: 33.5 GB, Context: 2K, License: other, Quantized (GPTQ), HF Score: 63.7, LLM Explorer Score: 0.09, ARC: 65, HellaSwag: 86.1, MMLU: 62.7, TruthfulQA: 59.2, WinoGrande: 80.7, GSM8K: 28.3.
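The listed 33.5 GB VRAM figure is roughly what 4-bit packed weights for a 65B-parameter model imply. A back-of-the-envelope sketch (not the exact GPTQ packing math, and the 5% overhead for per-group scales/zeros is an assumption):

```python
# Rough VRAM estimate for 4-bit quantized weights of a 65B model.
# Back-of-the-envelope sketch only; real GPTQ adds per-group
# scales/zeros and runtime buffers on top of the packed weights.

def estimate_gptq_vram_gb(n_params: float, bits: int = 4,
                          overhead: float = 0.05) -> float:
    """Packed weight bytes plus a fractional overhead for scales/zeros."""
    weight_bytes = n_params * bits / 8          # 4 bits -> 0.5 bytes/param
    return weight_bytes * (1 + overhead) / 1e9  # decimal GB

print(round(estimate_gptq_vram_gb(65e9), 1))   # ~34.1 GB, near the listed 33.5 GB
```

The small gap to the listed 33.5 GB comes down to the actual group size and which layers stay unquantized.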

Tags: 4-bit · Autotrain compatible · Dataset: c-s-ale/alpaca-gpt4-da... · Endpoints compatible · GPTQ · Llama · LoRA · Quantized · Region: us · Safetensors

Gpt4 Alpaca Lora Mlp 65B GPTQ Benchmarks

Gpt4 Alpaca Lora Mlp 65B GPTQ (TheBloke/gpt4-alpaca-lora_mlp-65B-GPTQ)

Gpt4 Alpaca Lora Mlp 65B GPTQ Parameters and Internals

Model Type: text2text-generation
Use Cases:
  Areas: research, commercial applications
Additional Notes: LoRA weights were merged into the original Llama 65B model, then quantized to 4-bit with GPTQ.
Training Details:
  Data Sources: alpaca_data_gpt4
  Methodology: fine-tuning with LoRA
  Context Length: 512
  Hardware Used: 8x A100 (80 GB)
  Model Architecture: LoRA [MLP]
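The "LoRA [MLP]" note means the low-rank adapters were attached to the MLP projections rather than the attention matrices. The core LoRA update, y = W x + (alpha/r) · B A x, can be sketched in pure Python with toy shapes (illustrative only, not the model's real dimensions):

```python
# Minimal LoRA forward pass on a single linear layer, pure Python.
# y = W @ x + (alpha / r) * B @ (A @ x), where A (r x d_in) and
# B (d_out x r) form the rank-r adapter. Toy sizes for illustration.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, alpha=16, r=2):
    base = matvec(W, x)                      # frozen base-layer output
    delta = matvec(B, matvec(A, x))          # B @ (A @ x), rank-r update
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# Toy example: d_in = d_out = 2, rank r = 1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]              # 1 x 2
B = [[0.5], [0.5]]            # 2 x 1
print(lora_forward(W, A, B, [1.0, 2.0], alpha=1, r=1))  # [2.5, 3.5]
```

Merging, as done for this release, folds the scaled B·A product into W once so no adapter math is needed at inference time.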
Input/Output:
  Accepted Modalities: text
  Performance Tips: limit responses to under 1500 tokens to avoid VRAM overflow.
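With the model's 2048-token context window, capping responses at 1500 tokens also caps how long the prompt can be, since prompt plus response must fit in the window. A small budgeting helper (hypothetical names; the page only states the 1500-token tip):

```python
# Prompt/response token budgeting against a fixed context window:
# prompt_tokens + response_tokens <= context length.
# Helper name and structure are illustrative, not from the model card.

CONTEXT_LEN = 2048  # this model's context length

def max_prompt_tokens(max_response: int, context: int = CONTEXT_LEN) -> int:
    """Tokens left for the prompt after reserving room for the response."""
    budget = context - max_response
    if budget <= 0:
        raise ValueError("response cap leaves no room for the prompt")
    return budget

print(max_prompt_tokens(1500))  # 548 tokens left for the prompt
```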
LLM Name: Gpt4 Alpaca Lora Mlp 65B GPTQ
Repository: 🤗 https://huggingface.co/TheBloke/gpt4-alpaca-lora_mlp-65B-GPTQ
Base Model(s): Gpt4 Alpaca Lora Mlp 65B HF (TheBloke/gpt4-alpaca-lora_mlp-65B-HF)
Model Size: 65B
Required VRAM: 33.5 GB
Updated: 2025-10-23
Maintainer: TheBloke
Model Type: llama
Model Files: 33.5 GB
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: other
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.28.1
Tokenizer Class: LlamaTokenizer
Beginning of Sentence Token: <s>
End of Sentence Token: </s>
Unk Token: <unk>
Vocabulary Size: 32000
LoRA Model: Yes
Torch Data Type: float16
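The listing does not spell out a prompt template, but models fine-tuned on the Alpaca-GPT4 dataset conventionally use the Alpaca instruction format. A sketch under that assumption (check the repository's model card before relying on it):

```python
# Alpaca-style instruction prompt builder. The exact template is an
# assumption based on the alpaca_data_gpt4 training data, not something
# stated on this page.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the assumed Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Explain GPTQ quantization in one sentence.")
print(prompt)
```

Generation would then continue from the trailing "### Response:" marker.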

Best Alternatives to Gpt4 Alpaca Lora Mlp 65B GPTQ

Best Alternatives                   | Context / RAM  | Downloads / Likes
...en Instruct Human Mix 65B GPTQ   | 2K / 33.5 GB   | 140
Airoboros 65B Gpt4 1.4 GPTQ         | 2K / 33.5 GB   | 1213
Airoboros 65B Gpt4 1.3 GPTQ         | 2K / 33.5 GB   | 43
Airoboros 65B Gpt4 1.2 GPTQ         | 2K / 33.5 GB   | 1111
Guanaco 65B GPTQ                    | 2K / 33.5 GB   | 669262
Dromedary 65B Lora GPTQ             | 2K / 33.5 GB   | 26
Airoboros 65B GPT4 2.0 GPTQ         | 2K / 33.5 GB   | 103
Airoboros 65B GPT4 M2.0 GPTQ        | 2K / 33.5 GB   | 92
...stage Llama1 65B Instruct GPTQ   | 2K / 34.7 GB   | 183
...Research Oasst1 Llama 65B GPTQ   | 2K / 33.5 GB   | 85
Note: a green score (e.g. "73.2") indicates a model that outperforms TheBloke/gpt4-alpaca-lora_mlp-65B-GPTQ.

Rank the Gpt4 Alpaca Lora Mlp 65B GPTQ Capabilities

🆘 Have you tried this model? Rate its performance. This feedback would greatly assist the ML community in identifying the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.