Guanaco Unchained 33B Qlora by Neko-Institute-of-Science


Tags: Adapter · Dataset: cheshireai/guanaco-unc... · Finetuned · LoRA · Region: us

Guanaco Unchained 33B Qlora Benchmarks

nn.n% — how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Guanaco Unchained 33B Qlora (Neko-Institute-of-Science/guanaco-unchained-33b-qlora)

Guanaco Unchained 33B Qlora Parameters and Internals

Model Type: Text Generation

Additional Notes: Uses the QLoRA method for efficient fine-tuning in reduced precision.

Training Details:
- Data Sources: CheshireAI/guanaco-unchained
- Methodology: QLoRA with 8-bit training
- Context Length: 2048

Input/Output:
- Accepted Modalities: Text
- Performance Tips: Use the full context length; account for the batch-size limits that 8-bit training imposes.
LLM Name: Guanaco Unchained 33B Qlora
Repository 🤗: https://huggingface.co/Neko-Institute-of-Science/guanaco-unchained-33b-qlora
Model Size: 33b
Required VRAM: 1 GB
Updated: 2025-09-14
Maintainer: Neko-Institute-of-Science
Model Files: 1.0 GB
Model Architecture: Adapter
Is Biased: none
PEFT Type: LORA
LoRA Model: Yes
PEFT Target Modules: k_proj|o_proj|up_proj|gate_proj|q_proj|v_proj|down_proj
LoRA Alpha: 16
LoRA Dropout: 0.05
R Param: 64
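Because the architecture is a LoRA adapter rather than a full checkpoint, inference requires loading it on top of its base model. A minimal sketch with `peft`, assuming the base-model name and a Guanaco-style prompt format (neither is stated on this card):

```python
# Illustrative: attach the ~1 GB adapter to a base model for inference.
# Assumes `transformers` and `peft` are installed; the base model name
# and prompt template are assumptions, not taken from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "huggyllama/llama-30b"  # hypothetical 33B-class base checkpoint

base = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(
    base, "Neko-Institute-of-Science/guanaco-unchained-33b-qlora"
)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Guanaco-style turn format (an assumption for this adapter).
prompt = "### Human: Hello!\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The 1 GB VRAM figure above covers only the adapter weights; the full 33B base model must also fit in memory (optionally quantized) for generation to run.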

Best Alternatives to Guanaco Unchained 33B Qlora

Best Alternatives | Context / RAM | Downloads | Likes
MentaLLaMA 33B Lora | 0K / 0.1 GB | 0 | 5
Airoboros 33B 2.1 Peft | 0K / 1 GB | 0 | 1
Chinese Alpaca Pro Lora 33B | 0K / 2.3 GB | 0 | 7
Chinese Alpaca Plus Lora 33B | 0K / 2.3 GB | 0 | 2
Sorceroboros 33B S2a4 Qlora | 0K / 1.9 GB | 1 | 1
Llama 33B Lxctx PI 16384 LoRA | 0K / 1 GB | 0 | 2
...Gpt4 1.4.1 Lxctx PI 16384 LoRA | 0K / 1 GB | 0 | 1
Enterredaas 33B QLoRA | 0K / 1 GB | 0 | 4
... 33B Gpt4 1.4.1 NTK 16384 LoRA | 0K / 1 GB | 0 | 2
Chinese Alpaca Lora 33B | 0K / 2.9 GB | 0 | 10

Note: a green score (e.g. "73.2") means the model is better than Neko-Institute-of-Science/guanaco-unchained-33b-qlora.


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124