Gemini 2.5 Flash-Lite
Multimodal
Zero-eval
#1 MRCR v2
#1 Arc
#3 FACTS Grounding
by Google
About
Gemini 2.5 Flash-Lite is a multimodal language model developed by Google. The model posts competitive results across 13 benchmarks, with its strongest scores on FACTS Grounding (84.1%), Global-MMLU-Lite (81.1%), and MMMU (72.9%). With a 1.1M-token context window, it can handle extensive documents and long multi-turn conversations. The model is available through one API provider. As a multimodal model, it accepts text, images, and other input formats. It was announced and released on June 17, 2025.
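To make the API access and multimodal input concrete, here is a minimal sketch using the google-genai Python SDK. It assumes the public model ID gemini-2.5-flash-lite, an API key in the GEMINI_API_KEY environment variable, and a hypothetical local image file; treat the details as illustrative rather than authoritative.

```python
# Minimal sketch: one multimodal request to Gemini 2.5 Flash-Lite via the
# google-genai SDK (pip install google-genai). Assumes GEMINI_API_KEY is set
# and that "gemini-2.5-flash-lite" is the served model ID.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

# Hypothetical image file to illustrate mixed image + text input.
with open("report_page.png", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash-lite",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
        "Summarize the chart on this page in two sentences.",
    ],
)
print(response.text)
```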
Pricing Range
Input (per 1M tokens): $0.10
Output (per 1M tokens): $0.40
Providers: 1
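Because input and output are billed at flat per-token rates, the cost of a request is simple arithmetic. A small sketch using the listed prices; the token counts in the example are hypothetical.

```python
# Estimate the cost of a single request at the listed rates:
# $0.10 per 1M input tokens, $0.40 per 1M output tokens.
INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.40  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 200k-token document summarized into 2k output tokens.
print(f"${request_cost(200_000, 2_000):.4f}")  # -> $0.0208
```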
Timeline
Announced: Jun 17, 2025
Released: Jun 17, 2025
Knowledge Cutoff: Jan 1, 2025
Specifications
Capabilities: Multimodal
License & Family
License: Creative Commons Attribution 4.0
Benchmark Performance Overview
Performance metrics and category breakdown
Overall Performance
13 benchmarks
Average Score: 40.8%
Best Score: 84.1%
High Performers (80%+): 2
Performance Metrics
Max Context Window: 1.1M tokens
Avg Throughput: 5.7 tok/s
Avg Latency: 0 ms
Top Categories
factuality: 84.1%
vision: 72.9%
code: 42.5%
general: 35.8%
reasoning: 2.5%
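The throughput and latency figures above lend themselves to a back-of-the-envelope time estimate. A minimal sketch, assuming the listed averages apply to output-token generation and taking the 0 ms latency at face value; the request sizes are hypothetical.

```python
MAX_CONTEXT_TOKENS = 1_100_000  # listed max context window (1.1M)
AVG_THROUGHPUT_TPS = 5.7        # listed average throughput (tok/s)
AVG_LATENCY_S = 0.0             # listed average latency (0 ms)

def fits_in_context(input_tokens: int, output_tokens: int) -> bool:
    """Check whether a request stays inside the advertised context window."""
    return input_tokens + output_tokens <= MAX_CONTEXT_TOKENS

def estimated_seconds(output_tokens: int) -> float:
    """Rough end-to-end time: fixed latency plus generation at avg throughput."""
    return AVG_LATENCY_S + output_tokens / AVG_THROUGHPUT_TPS

print(fits_in_context(1_000_000, 50_000))   # True
print(f"{estimated_seconds(1_000):.0f} s")  # ~175 s at 5.7 tok/s
```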
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark
FACTS Grounding
Rank #3 of 9
#1 Gemini 2.5 Flash: 85.3%
#2 Gemini 2.5 Pro Preview 06-05: 87.8%
#3 Gemini 2.5 Flash-Lite: 84.1%
#4 Gemini 2.0 Flash: 83.6%
#5 Gemini 2.0 Flash-Lite: 83.6%
#6 Gemma 3 12B: 75.8%
Global-MMLU-Lite
Rank #4 of 14
#1 Gemini 2.5 Flash: 88.4%
#2 Gemini 2.5 Pro: 88.6%
#3 Gemini 2.5 Pro Preview 06-05: 89.2%
#4 Gemini 2.5 Flash-Lite: 81.1%
#5 Gemini 2.0 Flash-Lite: 78.2%
#6 Gemma 3 27B: 75.1%
#7 Gemma 3 12B: 69.5%
MMMU
Rank #15 of 52
#12 Llama 4 Maverick: 73.4%
#13 Claude Sonnet 4: 74.4%
#14 GPT-4.1: 74.8%
#15 Gemini 2.5 Flash-Lite: 72.9%
#16 GPT-4.1 mini: 72.7%
#17 GPT-4o: 72.2%
#18 Gemini 2.0 Flash: 70.7%
GPQA
Rank #44 of 115
#41 GPT-4.1 mini: 65.0%
#42 QwQ-32B: 65.2%
#43 DeepSeek R1 Distill Llama 70B: 65.2%
#44 Gemini 2.5 Flash-Lite: 64.6%
#45 DeepSeek R1 Distill Qwen 32B: 62.1%
#46 Gemini 2.0 Flash: 62.1%
#47 o1-mini: 60.0%
Vibe-Eval
Rank #6 of 8
#3 Gemini 1.5 Pro: 53.9%
#4 Gemini 2.0 Flash: 56.3%
#5 Gemini 2.5 Flash: 65.4%
#6 Gemini 2.5 Flash-Lite: 51.3%
#7 Gemini 1.5 Flash: 48.9%
#8 Gemini 1.5 Flash 8B: 40.9%
All Benchmark Results for Gemini 2.5 Flash-Lite
Complete list of benchmark scores with detailed information
Benchmark | Category | Modality | Score | Normalized | Source
FACTS Grounding | factuality | text | 0.84 | 84.1% | Self-reported
Global-MMLU-Lite | general | text | 0.81 | 81.1% | Self-reported
MMMU | vision | multimodal | 0.73 | 72.9% | Self-reported
GPQA | general | text | 0.65 | 64.6% | Self-reported
Vibe-Eval | code | text | 0.51 | 51.3% | Self-reported
AIME 2025 | general | text | 0.50 | 49.8% | Self-reported
LiveCodeBench | code | text | 0.34 | 33.7% | Self-reported
SWE-Bench Verified | general | text | 0.32 | 31.6% | Self-reported
Aider-Polyglot | general | text | 0.27 | 26.7% | Self-reported
MRCR v2 | general | text | 0.17 | 16.6% | Self-reported
Showing 1 to 10 of 13 benchmarks
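For reference, the per-category figures in the overview are plain means of the normalized scores within each category. The sketch below recomputes them from the ten rows listed above; because three benchmarks are not shown, the general mean differs from the page's 35.8% and the reasoning category is absent, while factuality, vision, and code reproduce the overview values.

```python
from collections import defaultdict

# (benchmark, category, normalized score %) as listed in the table above.
ROWS = [
    ("FACTS Grounding", "factuality", 84.1),
    ("Global-MMLU-Lite", "general", 81.1),
    ("MMMU", "vision", 72.9),
    ("GPQA", "general", 64.6),
    ("Vibe-Eval", "code", 51.3),
    ("AIME 2025", "general", 49.8),
    ("LiveCodeBench", "code", 33.7),
    ("SWE-Bench Verified", "general", 31.6),
    ("Aider-Polyglot", "general", 26.7),
    ("MRCR v2", "general", 16.6),
]

# Group normalized scores by category and average them.
by_category = defaultdict(list)
for _, category, score in ROWS:
    by_category[category].append(score)

for category, scores in sorted(by_category.items()):
    print(f"{category}: {sum(scores) / len(scores):.1f}%")
# code -> 42.5%, factuality -> 84.1%, vision -> 72.9% (match the overview);
# general differs because three benchmarks are not listed here.
```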