Gemini 2.0 Flash-Lite
Multimodal · Zero-eval
#1 MRCR 1M · #1 Bird-SQL (dev) · #2 CoVoST2 · +2 more
by Google
About
Gemini 2.0 Flash-Lite is a multimodal language model developed by Google. The model shows competitive results across 13 benchmarks, scoring highest on MATH (86.8%), FACTS Grounding (83.6%), and Global-MMLU-Lite (78.2%). With a 1.1M-token context window, it can handle extensive documents and complex multi-turn conversations. The model is available through one API provider. As a multimodal model, it can process and understand text, images, and other input formats. Released in 2025, it is part of Google's latest generation of Gemini models.
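The page lists a single API provider. As a hedged illustration only, the sketch below shows how a model like this could be called through Google's google-genai Python SDK; the model ID "gemini-2.0-flash-lite" and the environment-variable setup are assumptions based on this page's title and common SDK conventions, not details confirmed here.

```python
# Minimal sketch, not an official example. Assumes the `google-genai` package
# is installed and an API key is available in the environment (e.g. GEMINI_API_KEY).
from google import genai

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-2.0-flash-lite",  # model ID assumed from this page's title
    contents="Summarize this 500-page contract in five bullet points.",
)
print(response.text)
```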
Pricing Range
Input (per 1M tokens): $0.07
Output (per 1M tokens): $0.30
Providers: 1
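Given the listed rates ($0.07 per 1M input tokens, $0.30 per 1M output tokens), a rough per-request cost estimate is straightforward; the token counts below are hypothetical examples, not measurements.

```python
# Back-of-the-envelope cost estimate using the rates listed above.
INPUT_USD_PER_MTOK = 0.07   # $ per 1M input tokens
OUTPUT_USD_PER_MTOK = 0.30  # $ per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated request cost in USD."""
    return (input_tokens * INPUT_USD_PER_MTOK + output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

# Hypothetical example: a 200k-token document summarized into a 1k-token answer.
print(f"${estimate_cost(200_000, 1_000):.4f}")  # about $0.0143
```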
Timeline
Announced: Feb 5, 2025
Released: Feb 5, 2025
Knowledge Cutoff: Jun 1, 2024
Specifications
Capabilities: Multimodal
License & Family
License: Proprietary
Benchmark Performance Overview
Performance metrics and category breakdown
Overall Performance (13 benchmarks)
Average Score: 59.0%
Best Score: 86.8%
High Performers (80%+): 2
Performance Metrics
Max Context Window: 1.1M tokens
Avg Throughput: 85.0 tok/s
Avg Latency: 1 ms
Top Categories
factuality: 83.6%
math: 71.0%
vision: 68.0%
general: 55.5%
code: 28.9%
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark
MATH
Rank #5 of 63
#2 Gemma 3 27B: 89.0%
#3 Gemini 2.0 Flash: 89.7%
#4 o1: 96.4%
#5 Gemini 2.0 Flash-Lite: 86.8%
#6 Gemini 1.5 Pro: 86.5%
#7 o1-preview: 85.5%
#8 GPT-5: 84.7%
FACTS Grounding
Rank #5 of 9
#2 Gemini 2.0 Flash: 83.6%
#3 Gemini 2.5 Flash-Lite: 84.1%
#4 Gemini 2.5 Flash: 85.3%
#5 Gemini 2.0 Flash-Lite: 83.6%
#6 Gemma 3 12B: 75.8%
#7 Gemma 3 27B: 74.9%
#8 Gemma 3 4B: 70.1%
Global-MMLU-Lite
Rank #5 of 14
#2 Gemini 2.5 Flash-Lite: 81.1%
#3 Gemini 2.5 Flash: 88.4%
#4 Gemini 2.5 Pro: 88.6%
#5 Gemini 2.0 Flash-Lite: 78.2%
#6 Gemma 3 27B: 75.1%
#7 Gemma 3 12B: 69.5%
#8 Gemini Diffusion: 69.1%
MMLU-Pro
Rank #20 of 60
#17 Grok-2 mini: 72.0%
#18 GPT-4o: 72.6%
#19 Llama 3.1 405B Instruct: 73.3%
#20 Gemini 2.0 Flash-Lite: 71.6%
#21 Qwen2.5 72B Instruct: 71.1%
#22 Phi 4: 70.4%
#23 Kimi K2 Base: 69.2%
MMMU
Rank #25 of 52
#22 Claude 3.5 Sonnet: 68.3%
#23 Llama 4 Scout: 69.4%
#24 Qwen2.5 VL 32B Instruct: 70.0%
#25 Gemini 2.0 Flash-Lite: 68.0%
#26 Grok-2: 66.1%
#27 Gemini 1.5 Pro: 65.9%
#28 Pixtral Large: 64.0%
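The rank positions above come from the site's full leaderboard for each benchmark. As a generic illustration only (not necessarily the site's actual method), a position like "Rank #5 of 63" can be derived by sorting all models' scores in descending order; the score table below is a hypothetical subset.

```python
# Illustrative only: derive a 1-based rank by sorting scores in descending order.
# These five entries are a subset; the site ranks against its full model list.
math_scores = {
    "o1": 96.4,
    "Gemini 2.0 Flash": 89.7,
    "Gemma 3 27B": 89.0,
    "Gemini 2.0 Flash-Lite": 86.8,
    "Gemini 1.5 Pro": 86.5,
}

def rank_of(model: str, scores: dict[str, float]) -> int:
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(model) + 1  # 1-based rank

print(rank_of("Gemini 2.0 Flash-Lite", math_scores))  # 4 within this subset
```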
All Benchmark Results for Gemini 2.0 Flash-Lite
Complete list of benchmark scores with detailed information
Benchmark | Category | Modality | Score | Normalized | Source
MATH | math | text | 0.87 | 86.8% | Self-reported
FACTS Grounding | factuality | text | 0.84 | 83.6% | Self-reported
Global-MMLU-Lite | general | text | 0.78 | 78.2% | Self-reported
MMLU-Pro | general | text | 0.72 | 71.6% | Self-reported
MMMU | vision | multimodal | 0.68 | 68.0% | Self-reported
EgoSchema | general | text | 0.67 | 67.2% | Self-reported
MRCR 1M | general | text | 0.58 | 58.0% | Self-reported
Bird-SQL (dev) | general | text | 0.57 | 57.4% | Self-reported
HiddenMath | math | text | 0.55 | 55.3% | Self-reported
GPQA | general | text | 0.52 | 51.5% | Self-reported
Showing 10 of 13 benchmarks
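For reference, the per-category and overall averages reported in the performance overview can be recomputed from rows like these. The sketch below uses only the 10 benchmarks shown on this page (3 are omitted), so its output will not exactly match the reported 59.0% overall or the category figures.

```python
# Recompute category and overall averages from the rows shown above.
# Only 10 of the 13 benchmarks are listed here, so the numbers will differ
# from the site's full aggregates.
from collections import defaultdict

rows = [  # (benchmark, category, normalized score in %)
    ("MATH", "math", 86.8),
    ("FACTS Grounding", "factuality", 83.6),
    ("Global-MMLU-Lite", "general", 78.2),
    ("MMLU-Pro", "general", 71.6),
    ("MMMU", "vision", 68.0),
    ("EgoSchema", "general", 67.2),
    ("MRCR 1M", "general", 58.0),
    ("Bird-SQL (dev)", "general", 57.4),
    ("HiddenMath", "math", 55.3),
    ("GPQA", "general", 51.5),
]

by_category = defaultdict(list)
for _, category, score in rows:
    by_category[category].append(score)

for category, scores in sorted(by_category.items()):
    print(f"{category}: {sum(scores) / len(scores):.1f}%")

overall = sum(score for _, _, score in rows) / len(rows)
print(f"overall (10 of 13 benchmarks): {overall:.1f}%")
```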
Resources