Gemini 2.5 Pro Preview 06-05
by Google

Multimodal · Zero-eval
#1 Global-MMLU-Lite · #1 FACTS Grounding · #1 Vibe-Eval (+6 more)
About

Gemini 2.5 Pro Preview 06-05 is a multimodal language model developed by Google. It achieves strong performance, with an average score of 68.8% across 13 benchmarks, and excels particularly in Global-MMLU-Lite (89.2%), AIME 2025 (88.0%), and FACTS Grounding (87.8%). With a 1.1M-token context window, it can handle extensive documents and complex multi-turn conversations. The model is available through a single API provider. As a multimodal model, it can process and understand text, images, and other input formats seamlessly. Released in 2025, it represents Google's latest advancement in AI technology.

Pricing Range
Input (per 1M tokens): $1.25
Output (per 1M tokens): $10.00
Providers: 1
Timeline
Announced: Jun 5, 2025
Released: Jun 5, 2025
Knowledge Cutoff: Jan 31, 2025
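At the listed rates ($1.25 per million input tokens, $10.00 per million output tokens), the cost of a request can be estimated with a small helper. This is a sketch; the function name and the example token counts are illustrative, not part of any official SDK:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 1.25, output_rate: float = 10.00) -> float:
    """Estimate request cost in USD, given per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: a 100k-token prompt with a 10k-token response
cost = estimate_cost(100_000, 10_000)  # 0.125 + 0.10 = $0.225
```

Note that output tokens dominate the bill at an 8:1 price ratio, so long generations cost far more than long prompts.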
Specifications
Capabilities
Multimodal
License & Family
License
Proprietary
Benchmark Performance Overview
Performance metrics and category breakdown

Overall Performance

13 benchmarks
Average Score
68.8%
Best Score
89.2%
High Performers (80%+)
7
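The summary figures above follow directly from the per-benchmark scores: "Best Score" is the maximum normalized score, and "High Performers (80%+)" counts benchmarks at or above the 80% bar. A sketch using the ten scores listed further down this page (three of the 13 scores are not shown here, but all seven 80%+ scores are):

```python
# Normalized scores (%) for the ten benchmarks listed on this page
scores = {
    "Global-MMLU-Lite": 89.2, "AIME 2025": 88.0, "FACTS Grounding": 87.8,
    "GPQA": 86.4, "VideoMMMU": 83.6, "Aider-Polyglot": 82.2, "MMMU": 82.0,
    "LiveCodeBench": 69.0, "SWE-Bench Verified": 67.2, "Vibe-Eval": 67.2,
}

best = max(scores.values())                              # 89.2 -> "Best Score"
high_performers = sum(s >= 80 for s in scores.values())  # 7 -> "High Performers (80%+)"
```

The overall 68.8% average cannot be reproduced from these ten entries alone, since the three unlisted benchmarks (including the 16.4% long_context category) pull it down.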

Performance Metrics

Max Context Window
1.1M
Avg Throughput
85.0 tok/s
Avg Latency
1ms

Top Categories

factuality
87.8%
vision
82.8%
general
69.8%
code
68.1%
long_context
16.4%
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark

Global-MMLU-Lite
Rank #1 of 14
#1 Gemini 2.5 Pro Preview 06-05: 89.2%
#2 Gemini 2.5 Pro: 88.6%
#3 Gemini 2.5 Flash: 88.4%
#4 Gemini 2.5 Flash-Lite: 81.1%

AIME 2025
Rank #8 of 36
#5 Grok-4: 91.7%
#6 GPT-5 mini: 91.1%
#7 Grok-3 Mini: 90.8%
#8 Gemini 2.5 Pro Preview 06-05: 88.0%
#9 DeepSeek-R1-0528: 87.5%
#10 o3: 86.4%
#11 GPT-5 nano: 85.2%

FACTS Grounding
Rank #1 of 9
#1 Gemini 2.5 Pro Preview 06-05: 87.8%
#2 Gemini 2.5 Flash: 85.3%
#3 Gemini 2.5 Flash-Lite: 84.1%
#4 Gemini 2.0 Flash: 83.6%

GPQA
Rank #3 of 115
#1 Grok-4 Heavy: 88.4%
#2 Grok-4: 87.5%
#3 Gemini 2.5 Pro Preview 06-05: 86.4%
#4 GPT-5: 85.7%
#5 Claude 3.7 Sonnet: 84.8%
#6 Grok-3: 84.6%

VideoMMMU
Rank #2 of 3
#1 GPT-5: 84.6%
#2 Gemini 2.5 Pro Preview 06-05: 83.6%
#3 o3: 83.3%
All Benchmark Results for Gemini 2.5 Pro Preview 06-05
Complete list of benchmark scores with detailed information
Global-MMLU-Lite (general, text): 89.2%, self-reported
AIME 2025 (general, text): 88.0%, self-reported
FACTS Grounding (factuality, text): 87.8%, self-reported
GPQA (general, text): 86.4%, self-reported
VideoMMMU (vision, multimodal): 83.6%, self-reported
Aider-Polyglot (general, text): 82.2%, self-reported
MMMU (vision, multimodal): 82.0%, self-reported
LiveCodeBench (code, text): 69.0%, self-reported
SWE-Bench Verified (general, text): 67.2%, self-reported
Vibe-Eval (code, text): 67.2%, self-reported
Showing 1 to 10 of 13 benchmarks