Google

Gemini 2.5 Pro

Multimodal
Zero-eval
#1 MRCR
#1 Video-MME
#1 MRCR 1M (pointwise)
+4 more

by Google

About

Gemini 2.5 Pro is a multimodal language model developed by Google. It achieves strong performance, with an average score of 67.1% across 16 benchmarks, and excels in MRCR (93.0%), AIME 2024 (92.0%), and Global-MMLU-Lite (88.6%). It is especially strong on vision tasks, averaging 82.2%. With a 1.1M-token context window, it can handle extensive documents and complex multi-turn conversations. The model is available through 1 API provider. As a multimodal model, it can process and understand text, images, and other input formats seamlessly. Released in 2025, it represents Google's latest advancement in AI technology.

Pricing
Input (per 1M tokens): $1.25
Output (per 1M tokens): $10.00
Providers: 1
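The per-token rates above imply a simple cost model. As a minimal sketch (the token counts in the example are hypothetical), the cost of a single request at list prices can be estimated as:

```python
# Gemini 2.5 Pro list pricing (USD per 1M tokens), from the table above.
INPUT_PER_M = 1.25
OUTPUT_PER_M = 10.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at list prices."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_M

# Example: a 100k-token prompt with a 10k-token response.
print(round(request_cost(100_000, 10_000), 3))  # 0.225
```

Note the 8x output/input price ratio: for long generations, output tokens dominate the bill even when the prompt fills most of the context window.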
Timeline
Announced: May 20, 2025
Released: May 20, 2025
Knowledge Cutoff: Jan 31, 2025
Specifications
Capabilities
Multimodal
License & Family
License
Proprietary
Benchmark Performance Overview
Performance metrics and category breakdown

Overall Performance

16 benchmarks
Average Score
67.1%
Best Score
93.0%
High Performers (80%+)
7

Performance Metrics

Max Context Window
1.1M
Avg Throughput
85.0 tok/s
Avg Latency
1ms

Top Categories

vision
82.2%
code
70.6%
general
69.4%
reasoning
4.9%
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark

MRCR

Rank #1 of 6
#1 Gemini 2.5 Pro: 93.0%
#2 Gemini 1.5 Pro: 82.6%
#3 Gemini 1.5 Flash: 71.9%
#4 Gemini 2.0 Flash: 69.2%

AIME 2024

Rank #4 of 41
#1 Grok-3 Mini: 95.8%
#2 o4-mini: 93.4%
#3 Grok-3: 93.3%
#4 Gemini 2.5 Pro: 92.0%
#5 o3: 91.6%
#6 DeepSeek-R1-0528: 91.4%
#7 Gemini 2.5 Flash: 88.0%

Global-MMLU-Lite

Rank #2 of 14
#1 Gemini 2.5 Pro Preview 06-05: 89.2%
#2 Gemini 2.5 Pro: 88.6%
#3 Gemini 2.5 Flash: 88.4%
#4 Gemini 2.5 Flash-Lite: 81.1%
#5 Gemini 2.0 Flash-Lite: 78.2%

Video-MME

Rank #1 of 5
#1 Gemini 2.5 Pro: 84.8%
#2 Gemini 1.5 Pro: 78.6%
#3 Gemini 1.5 Flash: 76.1%
#4 Gemini 1.5 Flash 8B: 66.2%

AIME 2025

Rank #12 of 36
#9 DeepSeek-R1-0528: 87.5%
#10 o3: 86.4%
#11 GPT-5 nano: 85.2%
#12 Gemini 2.5 Pro: 83.0%
#13 Qwen3 235B A22B: 81.5%
#14 Claude Opus 4.1: 80.2%
#15 Phi 4 Reasoning Plus: 78.0%
All Benchmark Results for Gemini 2.5 Pro
Complete list of benchmark scores with detailed information
Benchmark | Category | Modality | Raw Score | Normalized | Source
MRCR | general | text | 0.93 | 93.0% | Self-reported
AIME 2024 | general | text | 0.92 | 92.0% | Self-reported
Global-MMLU-Lite | general | text | 0.89 | 88.6% | Self-reported
Video-MME | vision | video | 0.85 | 84.8% | Self-reported
AIME 2025 | general | text | 0.83 | 83.0% | Self-reported
GPQA | general | text | 0.83 | 83.0% | Self-reported
MRCR 1M (pointwise) | general | text | 0.83 | 82.9% | Self-reported
MMMU | vision | multimodal | 0.80 | 79.6% | Self-reported
Aider-Polyglot | general | text | 0.77 | 76.5% | Self-reported
LiveCodeBench v5 | code | text | 0.76 | 75.6% | Self-reported
Showing 1 to 10 of 16 benchmarks
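As a quick sanity check on the normalized values, the mean of the ten scores shown on this page can be computed directly. Since these are the top 10 of 16 benchmarks, the result sits well above the 67.1% overall average reported in the About section:

```python
# Normalized scores (%) for the ten benchmarks listed above.
scores = {
    "MRCR": 93.0,
    "AIME 2024": 92.0,
    "Global-MMLU-Lite": 88.6,
    "Video-MME": 84.8,
    "AIME 2025": 83.0,
    "GPQA": 83.0,
    "MRCR 1M (pointwise)": 82.9,
    "MMMU": 79.6,
    "Aider-Polyglot": 76.5,
    "LiveCodeBench v5": 75.6,
}

mean = sum(scores.values()) / len(scores)
print(round(mean, 1))  # 83.9
```

The gap between this 83.9% top-10 mean and the 67.1% 16-benchmark average implies the six unlisted benchmarks score substantially lower.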