Model Type |
Use Cases |
Areas: | Research, Instruction Following |
|
Applications: | General Question Answering, Conversation, Education |
|
Primary Use Cases: | Chatbot, Instruction-following assistant
|
Limitations: | May produce problematic outputs, especially when prompted to do so; not aligned with human preferences using techniques such as RLHF
|
Considerations: | Prompts should be carefully crafted to avoid unintended outputs. |
|
|
Additional Notes | The model is compatible with multiple UIs and libraries, which makes it easier to access and run. It has been quantized by TheBloke for more efficient deployment. The Blink's LLM work is funded by a grant from Andreessen Horowitz (a16z).
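As a minimal illustration of running a quantized build locally, the sketch below uses the llama-cpp-python library; the model file name, context length, and sampling settings are placeholder assumptions, not a specific released artifact.

```python
# Hedged sketch: loading a quantized build with llama-cpp-python.
# The model path is a placeholder; point it at the actual quantized file.
from llama_cpp import Llama

llm = Llama(model_path="./model-q4_K_M.gguf", n_ctx=2048)

# Prompt follows the USER/ASSISTANT chat format described under Input Output.
prompt = "USER: What does quantization do to a language model?\nASSISTANT:"
result = llm(prompt, max_tokens=256, stop=["USER:"])
print(result["choices"][0]["text"])
```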
|
Supported Languages | English (Good proficiency) |
|
Training Details |
Data Sources: | WizardLM, QingyiSi/Alpaca-CoT, GPTeacher-General-Instruct, metaeval/ScienceQA_text_only, openai/summarize_from_feedback, camel-ai/math, camel-ai/physics, camel-ai/chemistry, camel-ai/biology, winglian/evals, ARC-Easy, ARC-Challenge, hellaswag, riddle_sense, gsm8k |
|
Data Volume: |
Methodology: | Instruction fine-tuning on open datasets (see the data formatting sketch below)
|
Context Length: | |
Training Time: | 7.5 hours (1 epoch)

Hardware Used: | 6 x A100 80 GB GPUs
Model Architecture: | Based on the LLaMA architecture
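As a hedged illustration of what instruction fine-tuning on the listed open datasets can look like, the sketch below maps one of the sources (gsm8k) into the USER/ASSISTANT chat format; the formatting function and split choice are assumptions for illustration, not the authors' actual preprocessing pipeline.

```python
# Illustrative only: converting one listed data source (gsm8k) into the
# USER/ASSISTANT chat format ahead of supervised instruction fine-tuning.
# This is an assumption about preprocessing, not the authors' pipeline.
from datasets import load_dataset

def to_chat_text(example):
    # gsm8k rows carry "question" and "answer" fields.
    return {"text": f"USER: {example['question']}\nASSISTANT: {example['answer']}"}

gsm8k = load_dataset("gsm8k", "main", split="train")
chat_data = gsm8k.map(to_chat_text, remove_columns=gsm8k.column_names)
print(chat_data[0]["text"][:200])
```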
|
|
Input Output |
Input Format: | A chat between a USER and ASSISTANT |
|
Accepted Modalities: | |
Output Format: | |
Performance Tips: | Use the correct prompt template for optimal response generation (see the example prompt below).
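A minimal sketch of the USER/ASSISTANT prompt format noted above; the exact spacing, any system preamble, and stop strings should be checked against the upstream model card, so treat them here as assumptions.

```python
# Hedged example of the USER/ASSISTANT prompt template.
# Spacing and the absence of a system preamble are assumptions; verify
# against the upstream model card before relying on them.
def build_prompt(user_message: str) -> str:
    return f"USER: {user_message}\nASSISTANT:"

print(build_prompt("Summarize what instruction fine-tuning is."))
```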
|
|
Release Notes |
Version: | |
Notes: | Fixes an issue where some datasets were dropped during training.
|
|
|