Qwen2.5 0.5B Instruct Full Pretrain Mix Mid Tweet 1M En GPT by AmberYifan


Qwen2.5 0.5B Instruct Full Pretrain Mix Mid Tweet 1M En GPT is an open-source language model by AmberYifan. Features: 0.5B parameters, VRAM: 1 GB, context: 32K, license: apache-2.0, instruction-based. LLM Explorer Score: 0.22.

Tags: Base model:finetune:qwen/qwen2..., Base model:qwen/qwen2.5-0.5b-i..., Conversational, Endpoints compatible, Full, Generated from trainer, Instruct, Llama-factory, Qwen2, Region:us, Safetensors

Qwen2.5 0.5B Instruct Full Pretrain Mix Mid Tweet 1M En GPT Benchmarks

nn.n% — Benchmark scores show how the model compares to the reference models: Anthropic Claude 3.5 Sonnet ("so35"), GPT-4o ("gpt4o"), or GPT-4 ("gpt4").

Qwen2.5 0.5B Instruct Full Pretrain Mix Mid Tweet 1M En GPT Parameters and Internals

LLM Name: Qwen2.5 0.5B Instruct Full Pretrain Mix Mid Tweet 1M En GPT
Repository 🤗: https://huggingface.co/AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-mid-tweet-1m-en-gpt
Base Model(s): Qwen/Qwen2.5-0.5B-Instruct
Model Size: 0.5b
Required VRAM: 1 GB
Updated: 2025-12-28
Maintainer: AmberYifan
Model Type: qwen2
Instruction-Based: Yes
Model Files: 1.0 GB, 0.0 GB
Model Architecture: Qwen2ForCausalLM
License: apache-2.0
Context Length: 32768
Model Max Length: 32768
Transformers Version: 4.52.4
Tokenizer Class: Qwen2Tokenizer
Padding Token: <|endoftext|>
Vocabulary Size: 151936
Torch Data Type: bfloat16
Errors: replace
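A quick back-of-the-envelope check (a sketch, not from the source; the helper name is illustrative) shows why a 0.5B-parameter model stored in bfloat16 needs roughly the 1 GB of VRAM listed above for its weights alone:

```python
# Estimate the VRAM needed just to hold the weights, using the table above:
# 0.5B parameters x 2 bytes each (bfloat16). Activations, the KV cache,
# and framework overhead are NOT included, so real usage will be higher.

def weight_vram_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate GiB required to store the raw weights."""
    return n_params * bytes_per_param / (1024 ** 3)

estimate = weight_vram_gib(0.5e9)  # bfloat16 -> 2 bytes per parameter
print(f"~{estimate:.2f} GiB")      # close to the 1 GB figure in the table
```

The same arithmetic explains the 1.0 GB safetensors file size: the on-disk weights are stored in the same 16-bit format.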

Best Alternatives to Qwen2.5 0.5B Instruct Full Pretrain Mix Mid Tweet 1M En GPT

Best Alternatives                    Context / RAM    Downloads   Likes
Qwen2 0.5B Abyme Merge3              128K / 1.3 GB    5           1
Qwen2 0.5B Abyme Merge2              128K / 0.3 GB    29          0
Qwen2.5 0.5B Instruct                64K / 1 GB       7           0
Qwen2.5 0.5B Instruct                32K / 1 GB       5717441     494
Qwen2.5 0.5B Instruct                32K / 1 GB       3570029     22
Qwen Math SFT                        32K / 2 GB       19          0
Qwen2 0.5B Instruct                  32K / 1 GB       548631      200
Qwen2.5 Coder 0.5B Instruct          32K / 1 GB       27326       52
Tanzania 0.5B                        32K / 2 GB       24          0
...etrain Mix Low Tweet 1M En GPT    32K / 1 GB       21          0

Note: a green score (e.g. "73.2") means that the model is better than AmberYifan/qwen2.5-0.5b-instruct-full-pretrain-mix-mid-tweet-1m-en-gpt.


Original data from HuggingFace, OpenCompass and various public git repos.