Tinyllama Zh by whynlp


Tinyllama Zh is an open-source language model by whynlp. Features: 90b LLM, VRAM: 4.9 GB, context: 2K, license: MIT, LLM Explorer score: 0.12.

Tags: Autotrain compatible · Conversational · Dataset: p208p2002/wudao · Endpoints compatible · Llama · Region: us · Safetensors · Zh
Model Card on HF 🤗: https://huggingface.co/whynlp/tinyllama-zh
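As a usage sketch (not from the card itself), the repository can presumably be loaded with Hugging Face transformers. The card lists ChatGLMTokenizer, a custom tokenizer class, so `trust_remote_code=True` is likely required; imports are deferred so the snippet stays importable without transformers installed.

```python
# Hypothetical loading sketch for whynlp/tinyllama-zh, assuming the
# standard transformers Auto* API. Requires `pip install transformers torch`.

MODEL_ID = "whynlp/tinyllama-zh"


def load_model():
    # Deferred imports so this file can be inspected without the deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # The card lists ChatGLMTokenizer, a custom class, so the tokenizer
    # code has to be trusted to resolve from the repo.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    # float32 checkpoint, ~4.9 GB on disk per the card.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    return tokenizer, model


def generate(prompt: str, max_new_tokens: int = 32) -> str:
    tokenizer, model = load_model()
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


# Example (downloads ~4.9 GB of weights on first run):
# print(generate("你好，"))
```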


Tinyllama Zh Parameters and Internals

Model Type
Llama, causal language model
Use Cases
Areas: demo project, pretraining
Primary Use Case: demonstrating how to pretrain TinyLlama on a large corpus
Limitations: does not perform very well
Considerations: for better performance, use a better corpus such as WanJuan
Additional Notes
This project demonstrates how to pretrain TinyLlama on a large corpus with minimal modification to the transformers code.
Supported Languages
Chinese (zh)
Training Details
Data Source: WuDaoCorpora Text
Data Volume: 45B tokens
Training Time: 6 days
Hardware Used: 8× A100 GPUs
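The training details above imply a rough throughput figure. A back-of-envelope calculation, using only the card's numbers (45B tokens, 6 days, 8 A100s) and ignoring warmup, restarts, or evaluation pauses:

```python
# Implied average training throughput from the card's figures.
TOKENS = 45e9                    # data volume: 45B tokens
SECONDS = 6 * 24 * 3600          # training time: 6 days
GPUS = 8                         # hardware: 8x A100

overall = TOKENS / SECONDS       # tokens/s across the whole cluster
per_gpu = overall / GPUS         # tokens/s per A100

print(f"{overall:,.0f} tokens/s overall, {per_gpu:,.0f} tokens/s per GPU")
```

That works out to roughly 87K tokens/s overall, about 11K tokens/s per GPU, a plausible rate for a ~1B-parameter Llama in float32.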
LLM Name: Tinyllama Zh
Repository 🤗: https://huggingface.co/whynlp/tinyllama-zh
Model Size: 90b
Required VRAM: 4.9 GB
Updated: 2025-09-23
Maintainer: whynlp
Model Type: llama
Model Files: 4.9 GB
Supported Languages: zh
Model Architecture: LlamaForCausalLM
License: mit
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.35.2
Tokenizer Class: ChatGLMTokenizer
Padding Token: <unk>
Vocabulary Size: 65024
Torch Data Type: float32
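The spec list allows a quick sanity check on the parameter count: float32 weights take 4 bytes each, so a 4.9 GB checkpoint implies roughly 1.2B parameters, consistent with TinyLlama's ~1.1B scale and suggesting the "90b" size field is likely a listing artifact.

```python
# Parameter count implied by the card's checkpoint size and dtype.
BYTES_PER_PARAM = 4              # float32, per the Torch Data Type field
FILE_SIZE_BYTES = 4.9e9          # model files: 4.9 GB, per the card

params_billion = FILE_SIZE_BYTES / BYTES_PER_PARAM / 1e9
print(f"~{params_billion:.2f}B parameters implied by the checkpoint size")
```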

Best Alternatives to Tinyllama Zh

Best Alternatives      Context / RAM    Downloads  Likes
Chytrej 90M Base       8K / 0.2 GB      244        2
BigWeave V6 90B        4K / 175.7 GB    44         0
BigWeave V12 90B       4K / 175.7 GB    29         2
BigWeave V9 90B        4K / 175.7 GB    1          1



Original data from HuggingFace, OpenCompass and various public git repos.