DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged by MilyaShams


Tags: autotrain-compatible · conversational · en · endpoints-compatible · qwen2 · region:us · safetensors · trl · unsloth


DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged Parameters and Internals

LLM Name: DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged
Repository 🤗: https://huggingface.co/MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-medical-continual-pretrain-merged
Base Model(s): unsloth/deepseek-r1-distill-qwen-1.5b
Model Size: 1.5B
Required VRAM: 3.5 GB
Updated: 2025-09-23
Maintainer: MilyaShams
Model Type: qwen2
Model Files: 3.5 GB
Supported Languages: en
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 131072
Model Max Length: 131072
Transformers Version: 4.51.0
Tokenizer Class: LlamaTokenizerFast
Padding Token: <|vision_pad|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
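
These internals map directly onto standard Transformers usage. The following is a minimal loading sketch, not something published on this card: the repo ID and bfloat16 dtype come from the table above, the medical prompt is invented, and the 3.5 GB VRAM figure is consistent with roughly 1.8B parameters at 2 bytes each in bfloat16.

```python
# Minimal loading sketch (an assumption, not from the original card).
# Repo ID, dtype, and tokenizer class come from the table above; the
# prompt is hypothetical. device_map="auto" requires `accelerate`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-medical-continual-pretrain-merged"

tokenizer = AutoTokenizer.from_pretrained(repo)  # LlamaTokenizerFast, vocab size 151936
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's Torch Data Type
    device_map="auto",           # ~3.5 GB of VRAM for the weights
)

prompt = "A 54-year-old patient presents with acute chest pain. List likely differential diagnoses."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```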

Best Alternatives to DeepSeek R1 Distill Qwen 1.5B Medical Continual Pretrain Merged

Best Alternatives                   | Context / RAM  | Downloads | Likes
ReaderLM V2                         | 500K / 3.1 GB  | 5347      | 752
Reader Lm 1.5B                      | 250K / 3.1 GB  | 1411      | 607
DeepSeek R1 Distill Qwen 1.5B       | 128K / 3.5 GB  | 1214523   | 1427
Qwen2.5 1.5B                        | 128K / 3.1 GB  | 888741    | 150
AceInstruct 1.5B                    | 128K / 3.5 GB  | 76880     | 20
DeepScaleR 1.5B Preview             | 128K / 7.1 GB  | 11774     | 573
...n Research Reasoning Qwen 1.5B   | 128K / 7.1 GB  | 5716      | 221
OpenReasoning Nemotron 1.5B         | 128K / 3.1 GB  | 5188      | 47
Qwen2 1.5B                          | 128K / 3.1 GB  | 83734     | 97
Stella En 1.5B V5                   | 128K / 6.2 GB  | 581890    | 211
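
Download and like counts such as those above drift daily. For current figures, the `huggingface_hub` client exposes them via `HfApi.model_info`. A minimal sketch follows; the repo IDs other than the first are assumed canonical upstream locations for the alternatives and do not appear on this page.

```python
# Fetch live download/like counts from the Hugging Face Hub
# (pip install huggingface_hub). Repo IDs after the first are assumed
# canonical locations for models in the alternatives table above.
from huggingface_hub import HfApi

api = HfApi()
repo_ids = [
    "MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-medical-continual-pretrain-merged",
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    "Qwen/Qwen2.5-1.5B",
]
for repo_id in repo_ids:
    info = api.model_info(repo_id)
    print(f"{repo_id}: {info.downloads:,} downloads, {info.likes:,} likes")
```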


Original data from Hugging Face, OpenCompass, and various public Git repositories. Data release v20241124.