
Zhipu AI

Portfolio Stats
Total Models: 2
Multimodal: 0
Benchmarks Run: 28
Avg Performance: 62.4%
Latest Release
GLM-4.5
Released: Jul 28, 2025
Release Timeline
Recent model releases by year
2025
2 models
Performance Overview
Top models and benchmark performance

Top Performing Models (by avg score): 64.0%

Benchmark Categories

math: 2 benchmarks, avg 98.2%
general: 10 benchmarks, avg 77.6%
agents: 6 benchmarks, avg 54.4%
code: 6 benchmarks, avg 48.4%
reasoning: 4 benchmarks, avg 39.4%

Model Statistics

Multimodal Ratio: 0%
Models with Providers: 1

All Models

Complete portfolio of 2 models with advanced filtering

GLM-4.5
GLM-4.5 is an Agentic, Reasoning, and Coding (ARC) foundation model designed for intelligent agents, featuring 355 billion total parameters with 32 billion active parameters in a Mixture-of-Experts (MoE) architecture. Trained on 23T tokens through multi-stage training, it is a hybrid reasoning model offering two modes: a thinking mode for complex reasoning and tool use, and a non-thinking mode for immediate responses. The model unifies agentic, reasoning, and coding capabilities with 128K context-length support. It achieves a score of 63.2 across 12 industry-standard benchmarks, placing 3rd among all proprietary and open-source models. It is released under the MIT open-source license, which allows commercial use and secondary development.
Released: Jul 28, 2025
License: MIT
Benchmark scores: 64.2%, -, -, 72.9%, -
GLM-4.5-Air
GLM-4.5-Air is a more compact variant of GLM-4.5 designed for efficient Agentic, Reasoning, and Coding (ARC) applications. It features 106 billion total parameters with 12 billion active parameters in a Mixture-of-Experts (MoE) architecture. Like GLM-4.5, it is a hybrid reasoning model with a thinking mode for complex reasoning and tool use, and a non-thinking mode for immediate responses. Despite its compact design, GLM-4.5-Air delivers competitive performance with a score of 59.8 across 12 industry-standard benchmarks, ranking 6th overall while maintaining superior efficiency. It supports a 128K context length and is released under the MIT open-source license, allowing commercial use.
Released: Jul 28, 2025
License: MIT
Benchmark scores: 57.6%, -, -, 70.7%, -
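As a quick sanity check on the MoE figures quoted in the two model descriptions above, the fraction of parameters activated per token can be computed directly from the stated totals. This is illustrative arithmetic only; `active_fraction` is a hypothetical helper, not part of any Zhipu API.

```python
def active_fraction(active_b: float, total_b: float) -> float:
    """Fraction of total parameters activated per token in an MoE model."""
    return active_b / total_b

# GLM-4.5: 32B active out of 355B total parameters
glm_45 = active_fraction(32, 355)      # roughly 9% of weights active per token

# GLM-4.5-Air: 12B active out of 106B total parameters
glm_45_air = active_fraction(12, 106)  # roughly 11% of weights active per token

print(f"GLM-4.5:     {glm_45:.1%} active")
print(f"GLM-4.5-Air: {glm_45_air:.1%} active")
```

Both models thus activate only about a tenth of their weights on each forward pass, which is the source of the efficiency claims relative to dense models of similar total size.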