Galpaca 30B GPTQ by TheBloke


4-bit · Alpaca · AutoTrain compatible · Dataset: tatsu-lab/alpaca · Galactica · GPTQ · OPT · Quantized · Region: us

Galpaca 30B GPTQ Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Galpaca 30B GPTQ Parameters and Internals

Model Type: text generation
Use Cases:
- Areas: scientific research
- Applications: instruction-following tasks
- Primary Use Cases: enhancing instruction-response capabilities
- Limitations: may produce inaccurate information
- Considerations: use caution before production deployment without safeguards
Additional Notes: The model's intended use and limitations align with those outlined for GALACTICA and Alpaca.
Training Details:
- Data Sources: tatsu-lab/alpaca
- Methodology: fine-tuning
- Context Length: 512
- Model Architecture: GPTQ 4-bit model
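Since the model was fine-tuned on the tatsu-lab/alpaca dataset, prompts are typically wrapped in the Alpaca instruction template. A minimal sketch follows; the exact template wording is an assumption based on the standard Alpaca format, not something this card confirms:

```python
# Standard Alpaca prompt template (assumed; verify against the model card).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a raw instruction in the Alpaca-style template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Summarize the Galactica paper in one sentence.")
print(prompt)
```

The generated text after "### Response:" is then taken as the model's answer.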
LLM Name: Galpaca 30B GPTQ
Repository 🤗: https://huggingface.co/TheBloke/galpaca-30B-GPTQ
Model Size: 30B
Required VRAM: 16.1 GB
Updated: 2025-08-19
Maintainer: TheBloke
Model Type: opt
Model Files: 16.1 GB
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: OPTForCausalLM
License: cc-by-nc-4.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.28.0.dev0
Tokenizer Class: PreTrainedTokenizerFast
Vocabulary Size: 50000
Torch Data Type: float16
Activation Function: gelu
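The listed 16.1 GB file size is consistent with 4-bit quantization of a 30B-parameter model. A back-of-the-envelope check (the parameter count and overhead explanation are illustrative assumptions, not figures from this card):

```python
# Rough size estimate for a 4-bit quantized 30B-parameter model.
params = 30e9          # ~30B parameters (illustrative)
bits_per_param = 4     # GPTQ 4-bit weights
weight_gb = params * bits_per_param / 8 / 1e9  # decimal GB

print(f"raw 4-bit weights: {weight_gb:.1f} GB")  # prints "raw 4-bit weights: 15.0 GB"
# GPTQ also stores per-group scales/zero-points, and some tensors
# (e.g. embeddings) typically stay in fp16, which plausibly accounts
# for the gap between 15 GB and the listed 16.1 GB.
```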

Best Alternatives to Galpaca 30B GPTQ

Best Alternatives | Context / RAM | Downloads | Likes
...ica 30B Evol Instruct 70K GPTQ | 2K / 16.3 GB | 16 | 11
OPT 30B Erebus 4bit 128g | 2K / 16.6 GB | 1651 | 17
Galpaca 30B MiniOrca | 2K / 59.6 GB | 166 | 11
Galpaca 30B | 2K / 60.8 GB | 1667 | 55
OPT 30B Erebus | 2K / 36 GB | 1673 | 66
Opt Iml Max 30B | 2K / 60.1 GB | 1765 | 36
...alactica 30B Evol Instruct 70K | 2K / 60.1 GB | 1649 | 23
Opt 30B | 2K / 60.1 GB | 11409 | 135
Opt Iml 30B | 2K / 60.1 GB | 1038 | 74
Galactica 30B | 2K / 60.8 GB | 1728 | 40
Note: a green score (e.g. "73.2") means the model performs better than TheBloke/galpaca-30B-GPTQ.

Rank the Galpaca 30B GPTQ Capabilities

🆘 Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs. Your contribution really does make a difference! 🌟

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124