Medalpaca 13B by medalpaca


  Arxiv:2303.14070   Autotrain compatible   En   Endpoints compatible   Llama   Medical   Pytorch   Region:us   Sharded
Model Card on HF 🤗: https://huggingface.co/medalpaca/medalpaca-13b

Medalpaca 13B Benchmarks

Scores (nn.n%) show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Medalpaca 13B (medalpaca/medalpaca-13b)

Medalpaca 13B Parameters and Internals

Model Type: text-generation
Use Cases:
Primary Use Cases: question-answering, medical dialogues
Limitations: may not perform effectively outside the medical domain; efficacy and accuracy are uncertain; should not substitute for a doctor's opinion
Considerations: targets the knowledge level of medical students
Supported Languages: en (English)
Training Details:
Data Sources: Anki flashcards, Wikidoc, StackExchange, ChatDoctor
Methodology: fine-tuned for medical-domain tasks
Model Architecture: based on LLaMA
Input/Output:
Performance Tips: use within the medical domain
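
Since the card names question-answering over medical dialogues as the primary use case, here is a minimal prompt-building sketch. The instruction/context/question/answer template below is an illustrative assumption — this page does not state the exact template MedAlpaca was trained with.

```python
def build_prompt(question: str, context: str = "") -> str:
    """Assemble a simple instruction-style prompt for a medical QA model.

    Note: this template is a hypothetical example, not the verified
    MedAlpaca training format.
    """
    parts = []
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Question: {question}")
    parts.append("Answer:")  # the model continues from here
    return "\n".join(parts)

prompt = build_prompt("What are common symptoms of iron-deficiency anemia?")
print(prompt)
```

The resulting string would be passed to the tokenizer and model; keep prompts well under the model's 512-token maximum length listed below.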
LLM Name: Medalpaca 13B
Repository 🤗: https://huggingface.co/medalpaca/medalpaca-13b
Model Size: 13b
Required VRAM: 52.1 GB
Updated: 2025-09-08
Maintainer: medalpaca
Model Type: llama
Model Files: 13.0 GB; 10.0 GB (1-of-6); 9.9 GB (2-of-6); 9.9 GB (3-of-6); 9.9 GB (4-of-6); 9.9 GB (5-of-6); 2.5 GB (6-of-6); 0.0 GB; 0.0 GB
Supported Languages: en
Model Architecture: LlamaForCausalLM
License: cc
Model Max Length: 512
Transformers Version: 4.28.0.dev0
Tokenizer Class: LlamaTokenizer
Padding Token: [PAD]
Vocabulary Size: 32001
Torch Data Type: float32
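
The 52.1 GB VRAM figure follows from storing a 13B-parameter model's weights in float32 (4 bytes per parameter). A back-of-envelope check, assuming exactly 13 billion parameters (the true LLaMA-13B count is slightly higher, which accounts for the small gap):

```python
def weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """Estimate memory needed just to hold the weights.

    Excludes activations, KV cache, and framework overhead.
    """
    return n_params * bytes_per_param / 1e9

n = 13e9  # assumed parameter count
print(weight_memory_gb(n, 4))  # float32, as shipped: 52.0 GB
print(weight_memory_gb(n, 2))  # float16 loading halves the footprint: 26.0 GB
```

Loading with `torch_dtype=torch.float16` (or bfloat16) is the usual way to fit a 13B model on a single large GPU.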

Quantized Models of the Medalpaca 13B

Model | Likes | Downloads | VRAM
Medalpaca 13B GGUF | 62 | 50 | 5 GB
Medalpaca 13B GPTQ | 29 | 15 | 7 GB
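
The 5-7 GB VRAM figures for the quantized variants are consistent with roughly 3-4 bits per weight. A rough estimate, again assuming 13 billion parameters and ignoring the small overhead of quantization scales and embeddings:

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of a weight-only quantized model."""
    return n_params * bits_per_weight / 8 / 1e9

n = 13e9  # assumed parameter count
print(quantized_size_gb(n, 4.0))  # ~4-bit (e.g. GPTQ): 6.5 GB
print(quantized_size_gb(n, 3.0))  # ~3-bit GGUF variant: ~4.9 GB
```

The listed 7 GB for the GPTQ build matches the 4-bit estimate plus overhead, while the 5 GB GGUF figure corresponds to a lower-bit quantization level.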

Best Alternatives to Medalpaca 13B

Best Alternatives | Context / RAM | Downloads | Likes
Luminaura RP 13B | 128K / 26 GB | 6 | 0
Yarn Llama 2 13B 128K | 128K / 26 GB | 218 | 112
Agent Llama2 13B 80K | 80K / 26.4 GB | 8 | 0
Chat Llama2 13B 80K | 80K / 52.8 GB | 8 | 0
LongAlign 13B 64K | 64K / 26 GB | 106 | 13
LongAlign 13B 64K Base | 64K / 26 GB | 99 | 3
LongAlign 13B 64K | 64K / 26 GB | 11 | 13
LongAlign 13B 64K Base | 64K / 26 GB | 6 | 3
Openbuddy Llama2 13B V15p1 64K | 64K / 26.1 GB | 4 | 4
Openbuddy Llama2 13b64k V15 | 64K / 26.1 GB | 8 | 2
Note: a green score (e.g. "73.2") means that the model is better than medalpaca/medalpaca-13b.



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124