KULLM3 AWQ is an open-source language model by taeminlee. Features: 10.7B LLM, VRAM: 6 GB, Context: 4K, License: cc-by-nc-4.0, Quantized (AWQ), LLM Explorer Score: 0.13.
The model was quantized in a custom branch of AutoAWQ, and the resulting checkpoint has been verified to work with vLLM. Other inference frameworks were not tested and may not work.
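As a minimal sketch of the verified path, loading the checkpoint with vLLM might look like the following. The model id is taken from this card; the `quantization="awq"` argument follows vLLM's AWQ support and the sampling values are illustrative assumptions, not settings from the model authors.

```python
# Sketch: serving the AWQ-quantized checkpoint with vLLM (assumed setup).
# The model id is from this card; sampling values are placeholders.
MODEL_ID = "taeminlee/KULLM3-awq"


def load_kullm3_awq():
    """Construct a vLLM engine for the AWQ checkpoint.

    vLLM is imported lazily so this sketch stays importable on machines
    without a GPU or without vllm installed.
    """
    from vllm import LLM  # requires the vllm package and a supported GPU

    return LLM(model=MODEL_ID, quantization="awq")


def generate(llm, prompts):
    """Run greedy-ish sampling over a batch of prompts."""
    from vllm import SamplingParams

    params = SamplingParams(temperature=0.7, max_tokens=256)
    return llm.generate(prompts, params)
```

Actual VRAM use and throughput will depend on the GPU and vLLM version; the 6 GB figure above is the card's stated requirement for the quantized weights.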
Supported Languages
Korean (high proficiency), English (high proficiency)
Training Details
Data Sources:
vicgalle/alpaca-gpt4, mixed Korean instruction data (GPT-generated, hand-crafted, etc.)
Data Volume:
66,000+ examples
Performance Tips:
Without the system prompt used during the training phase, KULLM3 may show lower performance than expected.
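Since inference should mirror the training setup, the system prompt needs to be prepended to every request. The exact prompt text is not reproduced on this card, so the sketch below uses a placeholder string; substitute the actual system prompt from the model's repository.

```python
# The card notes KULLM3 underperforms without its training-time system
# prompt. The string below is a PLACEHOLDER, not the real prompt --
# copy the actual system prompt from the model's repository.
SYSTEM_PROMPT = "<system prompt from the KULLM3 repository>"


def build_messages(user_input: str) -> list[dict]:
    """Build a chat-style message list with the system prompt prepended,
    so inference matches the training-phase setup."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]
```

The resulting message list can then be rendered with the tokenizer's chat template (or the serving framework's chat endpoint) before generation.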