DeepSeek

DeepSeek-R1-0528

#1 MMLU-Redux · #1 CNMO 2024 · #1 MMLU-Pro · +3 more

by DeepSeek

About

DeepSeek-R1-0528 is a language model developed by DeepSeek. It averages 68.1% across 16 tracked benchmarks, with its strongest results on MMLU-Redux (93.4%), AIME 2024 (91.4%), and AIME 2025 (87.5%). It supports a 262K-token context window for handling large documents and is available through 3 API providers. Released in May 2025, it is an updated checkpoint of DeepSeek-R1, built on the DeepSeek-V3 base model.
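For reference, the model can be queried through OpenAI-compatible endpoints. The sketch below assumes DeepSeek's first-party API and its `deepseek-reasoner` model identifier; the base URL and model ID differ across the 3 providers, so check your provider's documentation.

```python
# Minimal sketch: querying DeepSeek-R1-0528 via an OpenAI-compatible API.
# The base URL and model identifier below are assumptions for DeepSeek's
# first-party endpoint; other providers use different values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key="YOUR_API_KEY",  # placeholder
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # assumed ID for R1-0528 on this endpoint
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)
print(response.choices[0].message.content)
```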

Pricing Range
Input (per 1M): $0.50 – $0.70
Output (per 1M): $2.15 – $2.50
Providers: 3
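Since prices are quoted per 1M tokens, the cost of a request is simple arithmetic: token count divided by one million, times the listed rate. A minimal sketch with illustrative token counts, using the cheapest and priciest of the listed provider rates:

```python
# Cost of a single request at the listed price range (per 1M tokens).
# Token counts here are illustrative, not from the page.
input_tokens, output_tokens = 50_000, 4_000

for label, in_rate, out_rate in [("cheapest", 0.50, 2.15),
                                 ("priciest", 0.70, 2.50)]:
    cost = (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
    print(f"{label} provider: ${cost:.4f}")
# cheapest provider: $0.0336
# priciest provider: $0.0450
```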
Timeline
Announced: May 28, 2025
Released: May 28, 2025
Specifications
Training Tokens: 14.8T
License & Family
License: MIT
Base Model: DeepSeek-V3
Benchmark Performance Overview
Performance metrics and category breakdown

Overall Performance

16 benchmarks
Average Score: 68.1%
Best Score: 93.4%
High Performers (80%+): 7
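These overview numbers are plain aggregates of the per-benchmark scores. A sketch reproducing the best-score and high-performer figures from the 10 scores listed further down this page; note the 68.1% average covers all 16 benchmarks, six of which are not shown here, so the average below differs:

```python
# Reproducing the overview stats from the 10 scores listed on this page.
scores = [93.4, 91.4, 87.5, 86.9, 85.0, 83.0, 81.0, 79.4, 73.3, 71.6]

print(f"Best score: {max(scores):.1f}%")                          # 93.4%
print(f"High performers (80%+): {sum(s >= 80 for s in scores)}")  # 7
print(f"Average of the 10 shown: {sum(scores) / len(scores):.1f}%")  # ~83.2%
```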

Performance Metrics

Max Context Window: 262.1K tokens
Avg Throughput: 30.7 tok/s
Avg Latency: 1 ms
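Together, these two averages give a rough wall-clock estimate for a response: latency plus output tokens divided by throughput. A back-of-the-envelope sketch, assuming the listed 1 ms figure is time-to-first-token:

```python
# Rough response-time estimate from the listed provider averages:
#   total ≈ latency + output_tokens / throughput
latency_s = 0.001      # 1 ms, as listed (assumed time-to-first-token)
throughput = 30.7      # average tokens per second across providers
output_tokens = 1_000  # illustrative response length

total_s = latency_s + output_tokens / throughput
print(f"~{total_s:.1f} s to generate {output_tokens} tokens")  # ~32.6 s
```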

Top Categories

code: 73.3%
general: 69.2%
agents: 58.7%
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark

MMLU-Redux

Rank #1 of 13
#1 DeepSeek-R1-0528: 93.4%
#2 Qwen3-235B-A22B-Instruct-2507: 93.1%
#3 DeepSeek-R1: 92.9%
#4 Kimi K2 Instruct: 92.7%

AIME 2024

Rank #6 of 41
#3 Grok-3: 93.3%
#4 Gemini 2.5 Pro: 92.0%
#5 o3: 91.6%
#6 DeepSeek-R1-0528: 91.4%
#7 Gemini 2.5 Flash: 88.0%
#8 o3-mini: 87.3%
#9 DeepSeek R1 Distill Llama 70B: 86.7%

AIME 2025

Rank #9 of 36
#6 GPT-5 mini: 91.1%
#7 Grok-3 Mini: 90.8%
#8 Gemini 2.5 Pro Preview 06-05: 88.0%
#9 DeepSeek-R1-0528: 87.5%
#10 o3: 86.4%
#11 GPT-5 nano: 85.2%
#12 Gemini 2.5 Pro: 83.0%

CNMO 2024

Rank #1 of 4
#1 DeepSeek-R1-0528: 86.9%
#2 DeepSeek-R1: 78.8%
#3 Kimi K2 Instruct: 74.3%
#4 DeepSeek-V3: 43.2%

MMLU-Pro

Rank #1 of 60
#1 DeepSeek-R1-0528: 85.0%
#2 DeepSeek-R1: 84.0%
#3 Qwen3-235B-A22B-Instruct-2507: 83.0%
#4 DeepSeek-V3 0324: 81.2%
All Benchmark Results for DeepSeek-R1-0528
Complete list of benchmark scores with detailed information
Benchmark       Category  Modality  Raw   Normalized  Source
MMLU-Redux      general   text      0.93  93.4%       Self-reported
AIME 2024       general   text      0.91  91.4%       Self-reported
AIME 2025       general   text      0.88  87.5%       Self-reported
CNMO 2024       general   text      0.87  86.9%       Self-reported
MMLU-Pro        general   text      0.85  85.0%       Self-reported
FRAMES          general   text      0.83  83.0%       Self-reported
GPQA            general   text      0.81  81.0%       Self-reported
HMMT 2025       general   text      0.79  79.4%       Self-reported
LiveCodeBench   code      text      0.73  73.3%       Self-reported
Aider-Polyglot  general   text      0.72  71.6%       Self-reported
Showing the first 10 of 16 benchmarks.