Medalpaca 13B GPTQ by TheBloke


Tags: arxiv:2303.14070 · 4-bit · autotrain-compatible · base_model:medalpaca/medalpaca-13b · base_model:quantized:medalpaca/medalpaca-13b · en · gptq · llama · medical · pytorch · quantized · region:us · safetensors

Medalpaca 13B GPTQ Benchmarks

Scores show how the model compares, as a percentage, to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Medalpaca 13B GPTQ (TheBloke/medalpaca-13B-GPTQ)

Medalpaca 13B GPTQ Parameters and Internals

Model Type: llama

Use Cases
Primary use cases:
- Medical question answering
- Medical dialogue tasks
Limitations:
- May not perform effectively outside the medical domain.
- The training data targets the knowledge level of medical students, which may limit its ability to address the needs of board-certified physicians.
- Not tested in real-world applications.
- Must not be used as a substitute for a doctor's opinion; treat it as a research tool.

Training Details
Data sources: Anki flashcards, Wikidoc, StackExchange, ChatDoctor
Model architecture: Based on LLaMA (Large Language Model Meta AI), fine-tuned specifically for medical-domain tasks.

Input Output
Input format: instruction-style prompt template (a hedged example is sketched below)
Accepted modalities: text
Output format: text
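
The exact prompt template ships with the model repository; the Alpaca-style layout below is only an assumption based on Medalpaca's Alpaca lineage, not the confirmed format, and the build_prompt helper is hypothetical:

```python
# Hypothetical prompt builder. The authoritative template lives in the model
# repo README; this Alpaca-style layout is an assumption, not the confirmed format.
def build_prompt(question: str, context: str = "") -> str:
    """Assemble an instruction-style prompt for a medical QA request."""
    if context:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{question}\n\n"
            f"### Input:\n{context}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{question}\n\n"
        "### Response:\n"
    )

print(build_prompt("What are the first-line treatments for type 2 diabetes?"))
```
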
LLM Name: Medalpaca 13B GPTQ
Repository: https://huggingface.co/TheBloke/medalpaca-13B-GPTQ
Model Name: Medalpaca 13B
Model Creator: medalpaca
Base Model(s): Medalpaca 13B (medalpaca/medalpaca-13b)
Model Size: 13b
Required VRAM: 7.3 GB
Updated: 2025-08-17
Maintainer: TheBloke
Model Type: llama
Model Files: 7.3 GB
Supported Languages: en
GPTQ Quantization: Yes
Quantization Type: gptq
Model Architecture: LlamaForCausalLM
License: other
Model Max Length: 512
Transformers Version: 4.28.0.dev0
Tokenizer Class: LlamaTokenizer
Vocabulary Size: 32001
Torch Data Type: float32
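
Given the metadata above (4-bit GPTQ weights in safetensors, LlamaTokenizer, 512-token max length), a checkpoint like this can typically be loaded through the transformers GPTQ integration. A minimal sketch, assuming transformers >= 4.32 with the optimum and auto-gptq packages installed; the sample question is illustrative:

```python
# Minimal loading sketch. Assumes transformers >= 4.32 plus the optimum and
# auto-gptq packages, which transformers uses to load GPTQ checkpoints.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheBloke/medalpaca-13B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(repo_id)  # LlamaTokenizer under the hood
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",          # place the ~7.3 GB of 4-bit weights on the GPU
    torch_dtype=torch.float16,  # activations in fp16; weights stay 4-bit GPTQ
)

prompt = "What are common symptoms of iron-deficiency anemia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Model max length is 512 tokens, so keep prompt + generation inside that budget.
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

At roughly 7.3 GB of quantized weights, this should fit on a single 10–12 GB consumer GPU, which is the usual motivation for choosing the GPTQ build over the full-precision base model.
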

Best Alternatives to Medalpaca 13B GPTQ

Best Alternatives | Context / RAM | Downloads | Likes
Yarn Llama 2 13B 128K GPTQ | 128K / 7.3 GB | 12 | 16
LongAlign 13B 64K GPTQ | 64K / 7.3 GB | 4 | 1
...boros L2 13B 2 1 YaRN 64K GPTQ | 64K / 7.3 GB | 9 | 3
Yarn Llama 2 13B 64K GPTQ | 64K / 7.3 GB | 12 | 1
OrcaMaid V3 13B 32K GPTQ | 32K / 7.3 GB | 10 | 3
OrcaMaid V2 FIX 13B 32K GPTQ | 32K / 7.3 GB | 7 | 4
EverythingLM 13B 16K GPTQ | 16K / 7.3 GB | 16 | 13
Tinybra 13B GPTQ 32g 4BIT | 16K / 8 GB | 10 | 1
Tinybra 13B GPTQ 4BIT | 16K / 7 GB | 6 | 0
LlongOrca 13B 16K GPTQ | 16K / 7.3 GB | 6 | 0

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124