Llama 30B Supercot by ausboss


  Autotrain compatible   Endpoints compatible   Llama   Pytorch   Region:us   Sharded

Llama 30B Supercot Benchmarks

Llama 30B Supercot (ausboss/llama-30b-supercot)

Llama 30B Supercot Parameters and Internals

Model Type: text generation

Use Cases
Areas: research, commercial applications
Applications: language generation, contextual text generation
Primary Use Cases: structured text generation based on instructions

Additional Notes
A merge of the LLaMA base model with the SuperCOT LoRA for enhanced contextual response generation.

Input Output
Input Format: structured prompting format with instruction, input, and response sections.
Accepted Modalities: text
Output Format: textual response based on the provided instruction and context.
Performance Tips: use specific prompt suffixes to improve output quality.
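The structured prompt described above (instruction, input, and response sections) matches the Alpaca-style layout commonly used with SuperCOT merges. As a minimal sketch, here is a prompt builder; the exact header strings and preamble wording are assumptions based on that convention, not taken from this page, so check them against the model card before use.

```python
def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble an Alpaca-style prompt with an instruction, an optional
    input section, and an empty response section for the model to complete.

    The header strings below are assumptions based on the common
    Alpaca/SuperCOT convention; adjust them to match the model card.
    """
    if context:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

Ending the prompt with the response header acts as the "prompt suffix" mentioned above: it cues the model to begin its answer immediately after it.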
LLM Name: Llama 30B Supercot
Repository 🤗: https://huggingface.co/ausboss/llama-30b-supercot
Model Size: 30b
Required VRAM: 11.5 GB
Updated: 2025-08-18
Maintainer: ausboss
Model Type: llama
Model Files  0.0 GB: 1-of-243   0.4 GB: 2-of-243   0.4 GB: 3-of-243   0.2 GB: 4-of-243   0.2 GB: 5-of-243   0.3 GB: 6-of-243   0.3 GB: 7-of-243   0.2 GB: 8-of-243   0.2 GB: 9-of-243   0.3 GB: 10-of-243   0.3 GB: 11-of-243   0.2 GB: 12-of-243   0.2 GB: 13-of-243   0.3 GB: 14-of-243   0.3 GB: 15-of-243   0.2 GB: 16-of-243   0.2 GB: 17-of-243   0.3 GB: 18-of-243   0.3 GB: 19-of-243   0.2 GB: 20-of-243   0.2 GB: 21-of-243   0.3 GB: 22-of-243   0.3 GB: 23-of-243   0.2 GB: 24-of-243   0.2 GB: 25-of-243   0.3 GB: 26-of-243   0.3 GB: 27-of-243   0.2 GB: 28-of-243   0.2 GB: 29-of-243   0.3 GB: 30-of-243   0.3 GB: 31-of-243   0.2 GB: 32-of-243   0.2 GB: 33-of-243   0.3 GB: 34-of-243   0.3 GB: 35-of-243   0.2 GB: 36-of-243   0.2 GB: 37-of-243   0.3 GB: 38-of-243   0.3 GB: 39-of-243   0.2 GB: 40-of-243   0.2 GB: 41-of-243   0.3 GB: 42-of-243   0.3 GB: 43-of-243   0.2 GB: 44-of-243   0.2 GB: 45-of-243   0.3 GB: 46-of-243
Model Architecture: LlamaForCausalLM
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.28.0
Vocabulary Size: 32000
Torch Data Type: float16
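Given the architecture and dtype listed above (LlamaForCausalLM, float16, 2048-token context), a loading sketch with Hugging Face transformers might look like the following. The `device_map="auto"` placement is an illustrative assumption, not a setting from this page, and the checkpoint is large, so treat this as a starting point rather than a tested recipe.

```python
MODEL_ID = "ausboss/llama-30b-supercot"  # repository listed above
MAX_CONTEXT = 2048                       # listed context length

def load_model():
    # Heavy imports are kept inside the function so the constants can be
    # inspected without torch/transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype=torch.float16,  # matches the listed Torch data type
        device_map="auto",          # assumption: accelerate-style placement
    )
    return tokenizer, model
```

When generating, keep prompt plus new tokens within `MAX_CONTEXT`, since both the context length and the model max length are 2048.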

Quantized Models of the Llama 30B Supercot

Model | Likes | Downloads | VRAM
Llama 30B Supercot GGUF | 0 | 152 | 13 GB
Llama 30B Supercot 4bit | 11 | 3 | 16 GB

Best Alternatives to Llama 30B Supercot

Best Alternatives | Context / RAM | Downloads | Likes
Flash Llama 30M 20001 | 32K / 0.1 GB | 1903 | 0
Smaug Slerp 30B V0.1 | 32K / 60.4 GB | 5 | 0
Tenebra 30B Alpha01 | 16K / 65 GB | 18 | 12
Llama33b 16K | 16K / 65.2 GB | 15 | 1
Yayi2 30B Llama | 4K / 121.2 GB | 922 | 22
... Tokens By Perplexity Bottom K | 4K / 5.4 GB | 5 | 0
...via Sample With Temperature2.0 | 4K / 5.4 GB | 5 | 0
...lue Sample With Temperature2.0 | 4K / 5.4 GB | 5 | 0
... Tokens By Writing Style Top K | 4K / 5.4 GB | 5 | 0
Yayi2 30B Llama | 4K / 121.2 GB | 18 | 22

Rank the Llama 30B Supercot Capabilities

Have you tried this model? Rate its performance. Your feedback helps the ML community identify the most suitable model for their needs.

Instruction Following and Task Automation  
Factuality and Completeness of Knowledge  
Censorship and Alignment  
Data Analysis and Insight Generation  
Text Generation  
Text Summarization and Feature Extraction  
Code Generation  
Multi-Language Support and Translation  


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124