Alpaca 30B Lora Int4 by elinas


  4bit   Alpaca   Autotrain compatible   Endpoints compatible   Gptq   Llama   Lora   Pytorch   Quantized   Region:us

Alpaca 30B Lora Int4 Benchmarks

Alpaca 30B Lora Int4 (elinas/alpaca-30b-lora-int4)

Alpaca 30B Lora Int4 Parameters and Internals

Model Type 
auto-regressive language model, transformer architecture
Use Cases 
Areas:
research on large language models, question answering, natural language understanding, reading comprehension
Primary Use Cases:
exploring potential applications, understanding capabilities and limitations of language models, developing techniques to improve models, evaluating and mitigating biases
Limitations:
not trained with human feedback, generates potentially toxic or offensive content, generates incorrect information, not intended for downstream applications without risk evaluation
Additional Notes 
Recent evaluations indicate varying performance depending on the quantization and groupsize configurations.
Supported Languages 
English (high proficiency), other languages (lower proficiency due to less training data)
Training Details 
Data Sources:
CCNet, C4, GitHub, Wikipedia, Books, ArXiv, Stack Exchange
Data Volume:
approximately 1-1.4T tokens depending on the model size
Methodology:
not explicitly stated; general transformer-based neural network training
Training Time:
between December 2022 and February 2023
Model Architecture:
transformer architecture with varying sizes ranging from 7B to 65B parameters
Responsible AI Considerations 
Fairness:
Evaluated on RAI datasets to measure biases
Mitigation Strategies:
Training data was filtered for quality based on proximity to Wikipedia, using a Kneser-Ney language model and a fastText linear classifier
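The filtering step above can be sketched as follows. This is a toy illustration only: `wiki_proximity_score` is a hypothetical stand-in for the real pipeline, which scored pages with a Kneser-Ney language model and a fastText linear classifier trained to recognize Wikipedia-like text.

```python
# Toy sketch of Wikipedia-proximity filtering. The scoring function is a
# hypothetical stand-in: the real pipeline used a Kneser-Ney language
# model plus a fastText linear classifier, not a marker-word heuristic.

def wiki_proximity_score(text: str) -> float:
    """Stand-in scorer: fraction of 'encyclopedic' marker words."""
    markers = {"the", "of", "is", "was", "which", "century"}
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in markers for w in words) / len(words)

def filter_corpus(docs, threshold=0.1):
    """Keep documents whose proximity score clears the threshold."""
    return [d for d in docs if wiki_proximity_score(d) >= threshold]

docs = [
    "The battle of Hastings was fought in the 11th century.",
    "buy cheap pills now click here",
]
kept = filter_corpus(docs)  # only the encyclopedic sentence survives
```

The design point is the same as in the real pipeline: a cheap linear scorer over the whole corpus, with a single threshold controlling the quality/volume trade-off.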
Input Output 
Input Format:
Instruction: your-prompt
Accepted Modalities:
text
Output Format:
Response: processed input according to instruction tuning guidelines
Performance Tips:
Load the model with GPTQ support (e.g. via text-generation-webui) and follow its setup guidelines for instruction and chat settings
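The Instruction/Response fields above imply an Alpaca-style prompt template. A minimal sketch, assuming the common Alpaca layout (the exact template the maintainer expects may differ):

```python
# Minimal Alpaca-style prompt builder. The template text is the common
# Alpaca convention, assumed here; verify against the model card.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca instruction format."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Summarize the LLaMA training data sources.")
```

The model's completion is then read from whatever it generates after the `### Response:` marker.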
Release Notes 
Version:
1
Date:
2023-04-05
Notes:
Update due to recent GPTQ commits introducing breaking changes.
Version:
null
Date:
2023-03-29
Notes:
Non-groupsize quantized model offers trade-offs between size and evaluation results.
Version:
null
Date:
2023-03-27
Notes:
New weights added, replacing old .pt version with 128 groupsize safetensors file.
LLM Name: Alpaca 30B Lora Int4
Repository 🤗: https://huggingface.co/elinas/alpaca-30b-lora-int4
Base Model(s): Alpaca 30B Int4 (MetaIX/Alpaca-30B-Int4)
Model Size: 30B
Required VRAM: 16.9 GB
Updated: 2025-09-15
Maintainer: elinas
Model Type: llama
Model Files: 18.1 GB, 17.0 GB, 16.9 GB
GPTQ Quantization: Yes
Quantization Type: gptq|4bit
Model Architecture: LlamaForCausalLM
License: other
Transformers Version: 4.27.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32000
LoRA Model: Yes
Torch Data Type: float16
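The "Required VRAM: 16.9 GB" figure is consistent with a back-of-envelope check. Assuming LLaMA "30B" actually has about 32.5 billion parameters (an assumption; the exact count is not stated here) stored at 4 bits each under GPTQ:

```python
# Back-of-envelope check of the quoted VRAM figure.
# Assumption: ~32.5e9 parameters for the "30B" model, 4 bits per weight.

params = 32.5e9          # approximate LLaMA-30B parameter count (assumed)
bits_per_weight = 4      # GPTQ int4

gb = params * bits_per_weight / 8 / 1e9  # bytes -> decimal gigabytes
# ≈ 16.25 GB for the quantized weights alone
```

The remaining fraction of the 16.9 GB comes from parts typically kept in fp16 (embeddings, norms) plus per-group quantization scales and zero points.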

Best Alternatives to Alpaca 30B Lora Int4

| Best Alternatives | Context / RAM | Downloads | Likes |
|---|---|---|---|
| GPlatty 30B SuperHOT 8K GPTQ | 8K / 16.9 GB | 5 | 7 |
| ... 30B Supercot SuperHOT 8K GPTQ | 8K / 16.9 GB | 6 | 5 |
| Platypus 30B SuperHOT 8K GPTQ | 8K / 16.9 GB | 3 | 4 |
| Tulu 30B SuperHOT 8K GPTQ | 8K / 16.9 GB | 5 | 5 |
| Yayi2 30B Llama GPTQ | 4K / 17 GB | 6 | 2 |
| WizardLM 30B GPTQ | 2K / 16.9 GB | 1821 | 18 |
| Llama 30B FINAL MODEL MINI | 2K / 19.4 GB | 5 | 1 |
| ...2 Llama 30B 7K Steps Gptq 2bit | 2K / 9.5 GB | 5 | 2 |
| ...Assistant SFT 7 Llama 30B GPTQ | 2K / 16.9 GB | 1801 | 35 |
| WizardLM 30B V1.0 GPTQ | 2K / 16.9 GB | 5 | 1 |



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124