Polyglot Ko 12.8B by EleutherAI


Tags: arXiv:2104.09864 · arXiv:2204.04541 · arXiv:2306.02254 · Autotrain compatible · Endpoints compatible · GPT-NeoX · ko · PyTorch · region:us · Safetensors · Sharded · TensorFlow

Polyglot Ko 12.8B Benchmarks

Polyglot Ko 12.8B (EleutherAI/polyglot-ko-12.8b)

Polyglot Ko 12.8B Parameters and Internals

Model Type 
causal-lm
Use Cases 
Areas:
research, commercial applications
Limitations:
Polyglot-Ko may not always return the most factual or accurate response, and it may produce socially unacceptable or offensive content.
Considerations:
Human curation or a filtering mechanism is recommended to censor sensitive content.
Supported Languages 
Korean (high)
Training Details 
Data Sources:
Korean blog posts, Korean news dataset, Modu corpus, Korean patent dataset, Korean Q&A dataset, KcBERT dataset, Korean fiction dataset, Korean online comments, Korean Wikipedia, ClovaCall, Naver Sentiment Movie Corpus, Korean hate speech dataset, OpenSubtitles, AI Hub various-task datasets, Standard Korean Language Dictionary
Data Volume:
863 GB (1.2 TB before processing)
Methodology:
Trained with a cross-entropy loss to maximize the likelihood of predicting the next token, using the EleutherAI GPT-NeoX framework.
Context Length:
2048
Training Steps:
301,000
Hardware Used:
256 A100 GPUs
Model Architecture:
40 transformer layers, model dimension 5120, feedforward dimension 20480, 40 attention heads with head dimension 128, and RoPE applied to 64 of the 128 dimensions in each head; tokenizer vocabulary of 30,003 (see the configuration sketch below).
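
These figures map directly onto the Hugging Face GPTNeoX configuration. Below is a minimal, unofficial sketch of the config implied by the numbers above; the rotary_pct value is an assumption inferred from RoPE covering 64 of the 128 dimensions per head, and vocab_size uses the padded checkpoint value (30080) rather than the raw 30,003-token tokenizer vocabulary.

# Unofficial sketch: the architecture above expressed as a transformers
# GPTNeoXConfig. Values come from this page; rotary_pct is inferred (64/128 = 0.5).
from transformers import GPTNeoXConfig

config = GPTNeoXConfig(
    vocab_size=30080,              # checkpoint vocab, padded from 30,003 tokenizer entries
    hidden_size=5120,              # model dimension
    num_hidden_layers=40,          # transformer layers
    num_attention_heads=40,        # 40 heads x head dimension 128 = 5120
    intermediate_size=20480,       # feedforward dimension
    rotary_pct=0.5,                # assumption: RoPE on 64 of the 128 dims per head
    max_position_embeddings=2048,  # context length
)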
LLM Name: Polyglot Ko 12.8B
Repository: https://huggingface.co/EleutherAI/polyglot-ko-12.8b
Model Size: 12.8b
Required VRAM: 25.8 GB
Updated: 2025-09-23
Maintainer: EleutherAI
Model Type: gpt_neox
Model Files: 28 sharded weight files (1-of-28 through 28-of-28), 0.5–1.0 GB each; 26.0 GB total
Supported Languages: ko
Model Architecture: GPTNeoXForCausalLM
License: apache-2.0
Context Length: 2048
Model Max Length: 2048
Transformers Version: 4.29.2
Tokenizer Class: PreTrainedTokenizerFast
Padding Token: <|endoftext|>
Vocabulary Size: 30080
Torch Data Type: float16
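
The 25.8 GB VRAM figure is consistent with the parameter count: 12.8B parameters × 2 bytes each in float16 ≈ 25.6 GB for the weights alone, before activations and KV cache. As a minimal usage sketch, assuming the transformers and accelerate packages and sufficient GPU memory (the Korean prompt is purely illustrative):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-12.8b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-12.8b",
    torch_dtype=torch.float16,  # matches the checkpoint's Torch data type
    device_map="auto",          # requires accelerate; places layers on available GPUs
)

# Illustrative Korean prompt: "A Korean language model is ..."
inputs = tokenizer("한국어 언어 모델이란", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))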

Best Alternatives to Polyglot Ko 12.8B

Best Alternatives                     Context / RAM    Downloads   Likes
...therAI Polyglot Ko 12.8B 4bits     2K / 7.7 GB      8           1
...pen Platypus Polyglot Ko 12.8B     2K / 51.4 GB     5           0
Polyglot Ko 12.8B Instruct            2K / 25.9 GB     3187        3
KoRnDAlpaca RAG Polyglot 12.8B        2K / 51.4 GB     8           0
KoRnDAlpaca RAG Polyglot 12.8B        2K / 51.4 GB     5           0
Kullm Polyglot 12.8B V3               2K / 25.9 GB     7           5
Polyglot Ko 12.8B Inst All            2K / 51.4 GB     7           1
Koquality Polyglot 12.8B              2K / 51.4 GB     5           0
Kyujin Poly Platypus Ko 12.8B         2K / 25.9 GB     562         2
Gollm 12.8B Instruct V2.3             2K / 25.9 GB     5           0



Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124