| Model Type | text-generation, transformers, lora |
|---|---|

| Use Cases | |
|---|---|
| Areas | Research, commercial applications |
| Primary Use Cases | Text generation in Chinese; AI assistant |
| Limitations | Needs further benchmarks for capability evaluation; limited by the quality and quantity of the Chinese corpora used; improvements needed in role-playing, mathematics, and handling of complex tasks |

| Additional Notes | The model is optimized for Chinese-language use, with LoRA (low-rank adaptation) used for the adaptation; a loading sketch is given below. |
|---|---|
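
The card tags the model as a LoRA adaptation of Meta-Llama-3-8B-Instruct. If the weights are distributed as a LoRA adapter rather than as merged weights, loading could look like the following sketch with `transformers` and `peft`; the adapter repository id is a placeholder, not this model's actual name.

```python
# Minimal loading sketch, assuming the weights ship as a LoRA adapter for
# Meta-Llama-3-8B-Instruct. The adapter repo id below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "your-namespace/llama3-8b-chinese-lora"  # placeholder, not the real repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)
# Optionally fold the adapter into the base weights for standalone inference:
# model = model.merge_and_unload()
```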

| Supported Languages | en (basic support), zh (enhanced support) |
|---|---|

| Training Details | |
|---|---|
| Data Sources | llamafactory/alpaca_zh, llamafactory/alpaca_gpt4_zh, oaast_sft_zh |
| Data Volume | |
| Methodology | Fine-tuning with LoRA (see the illustrative setup below) |
| Training Time | |
| Hardware Used | |
| Model Architecture | Meta-Llama-3-8B-Instruct, adapted for better Chinese handling |
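
The card specifies only that LoRA fine-tuning was applied to the base model on the datasets listed above; the concrete recipe (rank, target modules, schedule) is not given. A minimal sketch of such a setup with `peft`, with every hyperparameter assumed purely for illustration, might look like this:

```python
# Illustrative LoRA setup with peft; every hyperparameter here is an assumption,
# not the recipe actually used for this model.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16
)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                 # rank of the low-rank update matrices (assumed)
    lora_alpha=32,        # scaling factor (assumed)
    lora_dropout=0.05,    # dropout on the LoRA branch (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections (assumed)
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```

Training would then proceed with a standard causal-LM supervised fine-tuning loop (for example, the `transformers` `Trainer` or TRL's `SFTTrainer`) over the Chinese instruction datasets listed above.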

| Safety Evaluation | |
|---|---|
| Risk Categories | |
| Ethical Considerations | Refer to Meta Llama 3's ethical considerations for guidance on bias monitoring, responsible usage, and transparency about model limitations. |

| Responsible AI Considerations | |
|---|---|
| Fairness | Bias monitoring is important. |
| Transparency | Transparency about model limitations is necessary. |

| Input / Output | |
|---|---|
| Input Format | |
| Accepted Modalities | |
| Output Format | |
| Performance Tips | The model offers enhanced Chinese-language performance compared with the original Meta-Llama-3-8B-Instruct (see the generation example below). |
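
The input and output formats are not documented here. Assuming the model uses the standard Llama 3 chat template carried by its tokenizer, a generation call for a Chinese prompt might look like the following, continuing from the loading sketch earlier in this card:

```python
# Hedged usage sketch; "model" and "tokenizer" come from the loading example above,
# and the chat-template assumption follows the base Meta-Llama-3-8B-Instruct tokenizer.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "请用中文简单介绍一下你自己。"},  # "Briefly introduce yourself in Chinese."
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Strip the prompt tokens and decode only the newly generated response.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```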