Medical question-answering, medical dialogue tasks
Limitations:
May not perform effectively outside the medical domain.
Training data targets the knowledge level of medical students, which may limit its usefulness for board-certified physicians.
Not tested in real-world applications.
Must not be used as a substitute for a doctor's opinion; it should be treated as a research tool only.
Training Details
Data Sources:
Anki flashcards, Wikidoc, StackExchange, ChatDoctor
Model Architecture:
Based on LLaMA (Large Language Model Meta AI) and fine-tuned for medical-domain tasks; this release is a GPTQ-quantized build of MedAlpaca 13B.
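As a reference, here is a minimal sketch of loading this quantized checkpoint for medical question answering via the Hugging Face transformers GPTQ integration (requires `transformers`, `optimum`, and `auto-gptq`). The prompt template and generation settings are illustrative assumptions, not the official MedAlpaca format:

```python
# Minimal sketch: load the GPTQ-quantized MedAlpaca 13B and run a medical QA prompt.
# Assumes `transformers`, `optimum`, and `auto-gptq` are installed and a GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/medalpaca-13B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "What are the common symptoms of iron-deficiency anemia?"
prompt = f"Question: {question}\nAnswer:"  # assumed template, not the official one

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Per the limitations above, any output from such a script should be treated as research material, not medical advice.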