Supports multi-line and multi-character conversations and is capable of context-obedient question answering. Configuring the prompt format improves the model's comprehension of instructions.
Supported Languages
English (Full)
Training Details
Data Sources:
Synthetic data from GPT-4, Rosetta Code Dataset
Methodology:
qLoRA fine-tune (a configuration sketch follows this section)
Context Length:
8192
Hardware Used:
None specified
Model Architecture:
Quantised model (qLoRA fine-tune over a 4-bit base model)
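For reference, a minimal sketch of what a qLoRA fine-tune like this typically involves, using the Hugging Face transformers / peft / bitsandbytes stack. The base-model id and LoRA hyperparameters below are illustrative assumptions, not this model's actual settings:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE_MODEL = "base-model-id"  # placeholder, not the actual base model

# qLoRA loads the frozen base model in 4-bit NF4 precision...
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

# ...and trains only small low-rank adapter matrices on top of it.
lora_config = LoraConfig(
    r=16,            # adapter rank (assumed value)
    lora_alpha=32,   # scaling factor (assumed value)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapters are trainable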
Safety Evaluation
Methodologies:
None specified
Input / Output
Input Format:
BEGININPUT blocks containing context metadata, followed by the instruction (see the example after this section)
Accepted Modalities:
Text
Output Format:
Plain text, formatted as specified
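For illustration, a context-obedient prompt in this format might look like the following. The delimiters beyond BEGININPUT follow the airoboros convention this format appears to use, and the metadata keys and values shown are illustrative:

BEGININPUT
BEGINCONTEXT
date: 2023-06-01
url: https://example.com/source-article
ENDCONTEXT
The source text the model should treat as its only ground truth goes here.
ENDINPUT
BEGININSTRUCTION
Using only the context above, summarise what the source claims, and cite the url.
ENDINSTRUCTION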
Release Notes
Version:
1.4
Notes:
Increased context size to 8K by incorporating the SuperHOT 8K technique.
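SuperHOT-style context extension works by linearly scaling down RoPE position indices so a model trained on 2K positions can address 8K. A minimal sketch of the same mechanism via the rope_scaling option in Hugging Face transformers; the model id and scale factor are assumptions, and the original SuperHOT work patched the attention code directly rather than using this config flag:

from transformers import AutoModelForCausalLM

# A factor of 4.0 stretches a 2048-token position range to cover
# 8192 tokens (values assumed for illustration).
model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # placeholder
    rope_scaling={"type": "linear", "factor": 4.0},
)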