| Model Type |  |
|---|---|

| Use Cases |  |
|---|---|
| Areas | Research, Chatbot development |
| Applications | Multilingual chat applications |
| Primary Use Cases | Multilingual text generation, Conversational agents |
| Limitations | Cannot be used in critical or high-risk situations; may produce undesirable outputs |
| Considerations | Users should avoid using this model in scenarios where errors could lead to significant harm. |

| Additional Notes | The prompt format is defined in tokenizer_config.json. The model can be deployed as an OpenAI-compatible API service using vLLM (see the sketch below). |
|---|---|
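
As a rough illustration of the vLLM note above, the sketch below assumes the OpenAI-compatible server entrypoint shipped with recent vLLM releases and the official openai Python client; `<model-repo-id>`, the port, and the sampling parameters are placeholders rather than values taken from this card.

```python
# Launch the OpenAI-compatible server first (shell), for example:
#   python -m vllm.entrypoints.openai.api_server --model <model-repo-id> --port 8000
# Then query it with the standard OpenAI client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM does not check the key

response = client.chat.completions.create(
    model="<model-repo-id>",  # must match the --model value passed to vLLM
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    temperature=0.7,
    max_tokens=256,
)
print(response.choices[0].message.content)
```

Because the server speaks the OpenAI chat-completions protocol, any existing OpenAI-compatible client or tooling should work against it unchanged.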

| Supported Languages | zh, en, fr, de, ja, ko, it, ru, fi (full proficiency in each) |
|---|---|

| Input Output |  |
|---|---|
| Input Format | Prompt format using special tokens such as <\|role\|>, <\|says\|>, <\|end\|> |
| Accepted Modalities |  |
| Output Format |  |
| Performance Tips | Use the fast tokenizer from transformers; see the local-generation sketch after this table. |
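
The following sketch illustrates the prompt format and the fast-tokenizer tip. `<model-repo-id>` is a placeholder, `device_map="auto"` assumes accelerate is installed, and whether `trust_remote_code` is required depends on the specific repository.

```python
# Minimal local-generation sketch: build the prompt via the chat template from
# tokenizer_config.json and generate with the model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<model-repo-id>"  # placeholder for the actual repository name
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)  # fast tokenizer, per the performance tip
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself."}]
# The chat template from tokenizer_config.json is expected to render the
# <|role|> ... <|says|> ... <|end|> prompt layout described above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```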