Compatible with 7B, 13B, and 30B 4-bit quantized LLaMA models, including GGML-converted quantized bins. Prompt structure becomes more critical at smaller parameter sizes.
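A minimal sketch of loading a GGML-converted 4-bit bin with llama-cpp-python. The file name and generation settings are illustrative assumptions, not values from this card, and it assumes a llama-cpp-python version that still accepts GGML files.

```python
# Minimal sketch: run a GGML 4-bit quantized LLaMA bin via llama-cpp-python.
# The model path and generation parameters below are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-30b-supercot.ggml.q4_0.bin",  # hypothetical path to a 4-bit GGML bin
    n_ctx=2048,                                     # context window; adjust to available RAM
)

output = llm(
    "### Instruction:\nSummarize the plot of Hamlet.\n\n### Response:\n",
    max_tokens=256,
    temperature=0.7,
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])
```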
Training Details
Methodology:
LoRA training for adaptation, combined with LangChain-style prompting (a sketch of a comparable LoRA setup follows).
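An illustrative sketch of a LoRA adapter setup using Hugging Face peft. The base checkpoint, hyperparameters, and target modules are assumptions for illustration, not the recipe actually used to train this model.

```python
# Illustrative LoRA setup with peft; all values below are assumed, not the
# actual training configuration of llama-30b-supercot.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-30b")  # hypothetical base checkpoint

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections commonly adapted in LLaMA
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```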
Input / Output
Input Format:
Instruction-Input-Response format (see the prompt-template sketch after this block)
Accepted Modalities:
text
Output Format:
text
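A minimal sketch of an Instruction-Input-Response prompt template. The exact header wording follows the common Alpaca convention and is an assumption, not something stated on this card.

```python
# Build an Instruction-Input-Response prompt (Alpaca-style headers are assumed).
def build_prompt(instruction: str, context: str = "") -> str:
    if context:
        return (
            "### Instruction:\n" + instruction + "\n\n"
            "### Input:\n" + context + "\n\n"
            "### Response:\n"
        )
    return "### Instruction:\n" + instruction + "\n\n### Response:\n"

print(build_prompt("Translate to French.", "The weather is nice today."))
```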
Performance Tips:
Append suggestion suffixes to the instruction to improve output quality (see the sketch below).
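A usage sketch of appending a reasoning suffix to the instruction before formatting the prompt. The specific suffix wording is an assumed example, not a suffix documented on this card.

```python
# Append an assumed reasoning suffix to the instruction before building the prompt.
SUFFIX = "\nLet's work this out in a step by step way to be sure we have the right answer."

instruction = "Why does ice float on water?" + SUFFIX
prompt = "### Instruction:\n" + instruction + "\n\n### Response:\n"

# The resulting prompt can then be passed to the quantized model,
# e.g. via llama-cpp-python as in the loading sketch above.
print(prompt)
```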
Llama 30B SuperCOT 4bit Capabilities
Instruction Following and Task Automation
Factuality and Completeness of Knowledge
Censorship and Alignment
Data Analysis and Insight Generation
Text Generation
Text Summarization and Feature Extraction
Code Generation
Multi-Language Support and Translation