| Model Type | Multimodal, Language Model |
|
| Use Cases |
| Areas: | Academic Research, Commercial Applications (with authorization) |

| Applications: | Text Generation, Multimodal Inference |

| Primary Use Cases: | Language Processing, Model Deployment on Mobile Devices |

| Limitations: | The model can hallucinate due to its limited size; outputs are significantly influenced by prompts. |

| Considerations: | Users are responsible for verifying outputs, particularly in sensitive use cases. |
|
|
| Additional Notes | MiniCPM is open-source for academic use, with additional requirements for commercial use. |
|
| Supported Languages | Chinese (high proficiency), English (high proficiency) |
|
| Training Details |
| Data Sources: | Open-source corpus, ShareGPT |
|
| Methodology: | |
| Hardware Used: | 1080/2080 GPUs, 3090/4090 GPUs |
|
|
| Safety Evaluation |
| Ethical Considerations: | The model does not understand or express personal opinions. Responsibility for evaluating and verifying generated content lies with the user. |
|
|
| Responsible AI Considerations |
| Fairness: | The model is trained on a large open-source corpus to promote broad adaptability. |

| Transparency: | The developers emphasize that the model does not express opinions of its own. |

| Accountability: | Users are responsible for evaluating and verifying the generated content. |

| Mitigation Strategies: | Iterative improvement plans for the model have been announced. |
|
|
| Input / Output |
| Input Format: | |
| Accepted Modalities: | |
| Output Format: | |
| Performance Tips: | Specify the model's data type explicitly in `from_pretrained` to avoid calculation errors (see the sketch below). |
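
A minimal sketch of that tip, assuming the Hugging Face `transformers` API: passing an explicit `torch_dtype` to `from_pretrained` keeps the weights in the intended precision instead of relying on defaults. The checkpoint id `openbmb/MiniCPM-2B-sft-bf16`, the prompt, and the generation settings below are illustrative assumptions, not values taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id for illustration; substitute the MiniCPM checkpoint you actually use.
path = "openbmb/MiniCPM-2B-sft-bf16"

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,  # state the data type explicitly to avoid calculation errors
    trust_remote_code=True,
).to(device)

# Simple text-generation check with an illustrative prompt.
inputs = tokenizer("Write one sentence about small language models.", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```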
|
|