Model Type | large language model, finetuned, LoRA |
Use Cases |
Areas: | mental health analysis, research |
Applications: | mental health condition analysis, explanation generation |
Primary Use Cases: | Interpretable mental health analysis with instruction-following |
Limitations: | Not for clinical use; potential for biases and incorrect outputs |
Considerations: | Use results for non-clinical research; ensure professional oversight in clinical settings. |
Additional Notes | MentaLLaMA is part of a series of models designed for mental health analysis. |
|
Supported Languages | |
Training Details |
Data Sources: | IMHI instruction tuning data |
Data Volume: | 75K high-quality natural language instructions |
Methodology: | |
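The LoRA adaptation named under Model Type modifies a frozen weight matrix W by adding a low-rank product scaled by alpha / r. The sketch below illustrates that update rule in plain Python with tiny illustrative shapes; the matrix sizes, rank, and scaling values are assumptions for demonstration, not MentaLLaMA's actual adapter configuration.

```python
# Minimal sketch of the LoRA update rule: the frozen weight W is augmented
# by a low-rank product B @ A scaled by alpha / r. All shapes and values
# here are illustrative; real adapters target the attention projections
# of a multi-billion-parameter model.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_forward(W, A, B, x, alpha, r):
    """Compute (W + (alpha / r) * B @ A) @ x for a column vector x."""
    scale = alpha / r
    delta = matmul(B, A)  # low-rank update of rank r
    W_adapted = [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
                 for i in range(len(W))]
    return matmul(W_adapted, x)

# d_out = 2, d_in = 2, rank r = 1
W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight (identity for clarity)
A = [[1.0, 1.0]]              # r x d_in adapter
B = [[0.5], [0.5]]            # d_out x r adapter
x = [[2.0], [4.0]]            # input column vector
print(lora_forward(W, A, B, x, alpha=1.0, r=1))  # → [[5.0], [7.0]]
```

Only A and B are trained, which is why the method needs far fewer trainable parameters than full fine-tuning of W.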
Safety Evaluation |
Methodologies: | comprehensive evaluation on the IMHI benchmark |
Findings: | Approaches state-of-the-art discriminative methods in correctness; generates high-quality explanations |
Risk Categories: | potential bias, gender gaps, incorrect predictions, inappropriate explanations |
Ethical Considerations: | Results should be used for non-clinical research only; professional assistance is recommended in clinical contexts. Challenges remain in real-world applications. |
Responsible AI Considerations |
Fairness: | A potential gender-gap bias has been identified and addressed. |
Input Output |
Input Format: | Natural language instructions |
Accepted Modalities: | |
Output Format: | Mental health analyses and explanations |
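Since the model takes natural-language instructions as input and returns analyses with explanations, a caller typically assembles a post and a question into a single prompt string. The helper below is a hypothetical sketch of such assembly; the template wording is an assumption, and the exact instruction format used for IMHI tuning should be taken from the MentaLLaMA repository.

```python
def build_mental_health_prompt(post: str, question: str) -> str:
    """Assemble a natural-language instruction for the model.

    The template is illustrative only; consult the MentaLLaMA
    repository for the exact IMHI instruction format.
    """
    return (
        f'Consider this post: "{post}"\n'
        f"Question: {question}\n"
        "Answer with a prediction followed by a short explanation."
    )

prompt = build_mental_health_prompt(
    "I haven't slept properly in weeks and nothing feels worth doing.",
    "Does the poster show signs of depression?",
)
print(prompt)
```

The resulting string would then be passed to the model's text-generation interface; per the Limitations above, outputs are for non-clinical research only.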