Medicine LLM by AdaptLLM


Tags: Arxiv:2309.09530, Arxiv:2406.14491, Arxiv:2411.19930, Autotrain compatible, Biology, Dataset:eleutherai/pile, Dataset:gair/lima, Dataset:open-orca/openorca, Dataset:wizardlm/wizardlm evol..., En, Endpoints compatible, Instruct, Llama, Medical, Pytorch, Region:us, Safetensors, Sharded, Tensorflow
Model Card on HF 🤗: https://huggingface.co/AdaptLLM/medicine-LLM


Medicine LLM Parameters and Internals

Model Type 
text generation
Use Cases 
Areas:
biomedicine, finance, law
Applications:
question answering, reading comprehension, domain-specific applications
Primary Use Cases:
Domain-specific task handling in biomedicine, finance, and law
Limitations:
Reduced prompting ability if not using the reading comprehension approach
Considerations:
Pre-filled instructions are tailored for non-aligned models; different formats are required for chat capabilities
Additional Notes 
A 13B version built on LLaMA-1-13B shows that the approach scales to larger models with positive results
Training Details 
Data Sources:
Open-Orca/OpenOrca, GAIR/lima, WizardLM/WizardLM_evol_instruct_V2_196k, EleutherAI/pile
Methodology:
Continual pre-training on domain-specific corpora, using reading comprehension texts to improve prompting performance (a schematic example follows this block)
Context Length:
2048
Model Architecture:
Based on LLaMA-1-7B, adapted for domain-specific tasks
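
The reading-comprehension methodology (arXiv:2309.09530) converts raw domain text into training texts with comprehension tasks appended. The snippet below is only a schematic illustration of that format; the passage, questions, and template wording are hypothetical, and the paper's actual task-mining rules are more involved.

```python
# Schematic illustration only (not AdaptLLM's exact templates): a raw
# domain passage becomes a "reading comprehension" training text by
# appending task-style questions and answers derived from the passage.

raw_passage = (
    "Metformin is a first-line agent for type 2 diabetes. It lowers "
    "hepatic glucose production and improves insulin sensitivity."
)

# Hypothetical comprehension tasks; the paper mines such tasks
# automatically from the passage itself.
tasks = [
    ("What condition is metformin a first-line agent for?",
     "Type 2 diabetes."),
    ("How does metformin lower blood glucose?",
     "It lowers hepatic glucose production and improves insulin sensitivity."),
]

def to_reading_comprehension(passage, qa_pairs):
    """Concatenate a passage with Q&A tasks into one training text."""
    parts = [passage, ""]
    for question, answer in qa_pairs:
        parts.append(f"Question: {question}\nAnswer: {answer}")
    return "\n".join(parts)

print(to_reading_comprehension(raw_passage, tasks))
```
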
Input Output 
Input Format:
text input in question form (see the inference sketch after this block)
Accepted Modalities:
text
Output Format:
text answer or completion
Performance Tips:
Use the reading comprehension transformation to boost prompting performance in domain-specific tasks
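
Because the pre-filled instructions target the non-aligned base model, a plain question-form completion prompt is the natural interface. Below is a minimal inference sketch using the standard transformers API; the prompt wording is illustrative, not the card's exact template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AdaptLLM/medicine-LLM"
# LLaMA-1-era checkpoints are commonly loaded with the slow tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Question-form completion prompt; this is a base (non-aligned) model,
# so no chat template is applied. Keep inputs within the 2048-token context.
prompt = "Question: What drug class does metformin belong to?\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
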
Release Notes 
Version:
2nd version
Date:
2024/6/21
Notes:
Released alongside Instruction-Pretrain; the approach is effective for both pre-training from scratch and continual pre-training
LLM Name: Medicine LLM
Repository 🤗: https://huggingface.co/AdaptLLM/medicine-LLM
Model Size: 6.7b
Required VRAM: 8.8 GB
Updated: 2025-06-09
Maintainer: AdaptLLM
Model Type: llama
Instruction-Based: Yes
Model Files: 33 sharded weight files (0.8 GB each for shards 1-of-33 through 32-of-33; 0.5 GB for shard 33-of-33); the sketch after this block shows how to list them
Supported Languages: en
Model Architecture: LLaMAForCausalLM
Transformers Version: 4.27.0.dev0
Vocabulary Size: 32001
Torch Data Type: float16
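
To confirm the shard layout before downloading the weights, the repository contents can be listed with huggingface_hub. A small sketch; the shard filter below assumes the standard sharded-checkpoint naming.

```python
from huggingface_hub import list_repo_files

# List every file in the repo (including the 33 weight shards)
# without downloading the weights themselves.
files = list_repo_files("AdaptLLM/medicine-LLM")
shards = sorted(f for f in files if "-of-" in f)
print(f"{len(shards)} weight shards")
for name in shards:
    print(name)
```
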

Quantized Models of the Medicine LLM

Model               Likes   Downloads   VRAM
Medicine LLM GGUF   22      714         2 GB
Medicine LLM GPTQ   1       231         3 GB
Medicine LLM AWQ    3       27          3 GB
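
The GGUF files run on llama.cpp-compatible runtimes. A minimal sketch with llama-cpp-python follows; the local filename is an assumption based on common quantization naming, so check the actual quantized repo for the exact file.

```python
from llama_cpp import Llama

# Hypothetical local filename for a downloaded GGUF quantization;
# the real name depends on the quant level published in the GGUF repo.
llm = Llama(model_path="medicine-llm.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Question: What drug class does metformin belong to?\nAnswer:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```
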

Best Alternatives to Medicine LLM

Best Alternatives   Context / RAM   Downloads   Likes
Finance LLM         0K / 8.8 GB     474         135
Law LLM             0K / 8.8 GB     106         79



Original data from HuggingFace, OpenCompass and various public git repos.