multilingual dialogue, long-text reasoning (supporting up to 128K context), web browsing, code execution, and custom tool calling (see the tool-calling sketch below)
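The custom tool-calling capability is exposed through the chat template. Below is a minimal sketch, assuming the bundled template accepts an OpenAI-style `tools` argument as recent `transformers` releases support; the `get_weather` tool is a hypothetical example:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/glm-4-9b-chat-1m", trust_remote_code=True
)

# Hypothetical tool definition in the OpenAI-style function schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Beijing?"}]

# Render the conversation plus tool definitions into model-ready token IDs;
# assumes the checkpoint's chat template supports the `tools` argument.
prompt_ids = tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_tensors="pt",
)
```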
Supported Languages
zh (Chinese), en (English), ja (Japanese), ko (Korean), de (German), others (26 languages in total)
Training Details
Methodology:
Pre-trained on data spanning various domains, including semantics, mathematics, reasoning, code, and knowledge, then enhanced with human-preference alignment techniques.
Context Length:
131,072 tokens
Input/Output
Input Format:
Chat messages with roles and content, rendered through the model's bundled chat template (see the sketch below)
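A minimal sketch of preparing input in this format with `transformers`; the message contents are illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/glm-4-9b-chat-1m", trust_remote_code=True
)

# Each message is a dict with a role ("system", "user", "assistant") and content.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."},
]

# The chat template turns the message list into model-ready token IDs.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn header
    return_tensors="pt",
)
```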
Release Notes
Version:
2024/08/12
Notes:
Model code updated for compatibility with `transformers>=4.44.0` (a loading sketch follows below).
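A minimal loading-and-generation sketch pinned to that requirement; the dtype and device settings are assumptions about available hardware, not requirements from the release notes:

```python
# Requires: pip install "transformers>=4.44.0"
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/glm-4-9b-chat-1m"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16-capable GPU
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=True))
```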
Version:
2024/07/24
Notes:
Published the latest technical insights on long-context scaling on [Medium](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85).