Qwen2.5 Coder 0.5B by Qwen


Tags: Arxiv:2407.10671, Arxiv:2409.12186, Autotrain compatible, Base model:finetune:qwen/qwen2..., Base model:qwen/qwen2.5-0.5b, Code, Codegen, Codeqwen, Conversational, En, Endpoints compatible, Qwen, Qwen-coder, Qwen2, Region:us, Safetensors
Model Card on HF 🤗: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B

Qwen2.5 Coder 0.5B Benchmarks

Scores are reported as percentages (nn.n%) showing how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").
Qwen2.5 Coder 0.5B (Qwen/Qwen2.5-Coder-0.5B)

Qwen2.5 Coder 0.5B Parameters and Internals

Model Type: code generation
Additional Notes: We do not recommend using base language models for conversations. Instead, apply post-training (e.g., SFT, RLHF, continued pretraining) before using the model in chat settings; see the completion-style usage sketch after the training details below.
Training Details:
Data Sources: source code, text-code grounding data, synthetic data
Data Volume: 5.5 trillion tokens
Methodology: pretraining
Context Length: 32,768 tokens
Model Architecture: transformer with RoPE, SwiGLU, RMSNorm, attention QKV bias, and tied word embeddings
LLM Name: Qwen2.5 Coder 0.5B
Repository 🤗: https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B
Base Model(s): Qwen/Qwen2.5-0.5B
Model Size: 0.5b
Required VRAM: 1 GB
Updated: 2025-09-07
Maintainer: Qwen
Model Type: qwen2
Model Files: 1.0 GB
Supported Languages: en
Generates Code: Yes
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.43.1
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
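The entries above (architecture, maximum length, vocabulary size, tokenizer class, padding token, dtype) can be checked directly against the published config and tokenizer. The sketch below shows one way to do this with transformers, assuming a network connection; the values printed come from the repository itself, not from this page.

```python
# Sketch: inspect the published config and tokenizer to confirm the table above.
# Assumes transformers >= 4.43; expected values are taken from the card entries.
from transformers import AutoConfig, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-0.5B"
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.architectures)             # expected: ['Qwen2ForCausalLM']
print(config.max_position_embeddings)   # expected: 32768
print(config.vocab_size)                # expected: 151936
print(config.torch_dtype)               # expected: bfloat16 (per the card)
print(type(tokenizer).__name__)         # fast variant of Qwen2Tokenizer
print(tokenizer.pad_token)              # expected: '<|endoftext|>'
```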

Best Alternatives to Qwen2.5 Coder 0.5B

Best Alternatives | Context / RAM | Downloads / Likes
Qwen2.5 Coder 0.5B Instruct | 32K / 1 GB | 3449550
Qw2 Ft Lc | 32K / 2 GB | 220
...en2.5 Coder 0.5B FIM Spec Zeta | 32K / 1.3 GB | 80
Zeta Sft | 32K / 1 GB | 230
Qwen2.5 Coder 0.5B Instruct | 32K / 1 GB | 20664
Qwen CoMa 0.5B | 32K / 1.3 GB | 71
Qwen2.5 Coder 0.5B Instruct | 32K / 1 GB | 70
Qwen2.5 Coder 0.5B | 32K / 1 GB | 11661
Qwen2.5 Coder 0.5B Instruct | 32K / 1 GB | 60
...nstruct MIFT En Manywords 8000 | 32K / 2 GB | 70


Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124