Gemma 3n E2B Instructed

Multimodal

by Google

About

Gemma 3n E2B Instructed is a multimodal language model developed by Google. It shows competitive results across 18 benchmarks, with notable strengths in HumanEval (66.5%), MMLU (60.1%), and Global-MMLU-Lite (59.0%). As a multimodal model, it can process and understand text, images, and other input formats. Released in 2025, it represents Google's latest advancement in AI technology.

Timeline
Announced: Jun 26, 2025
Released: Jun 26, 2025
Knowledge Cutoff: Jun 1, 2024
Specifications
Training Tokens: 11.0T
Capabilities
Multimodal
License & Family
License: Proprietary
Benchmark Performance Overview
Performance metrics and category breakdown

Overall Performance

18 benchmarks
Average Score: 33.7%
Best Score: 66.5%
High Performers (80%+): 0
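The overview figures above can be reproduced mechanically. A minimal Python sketch, using only the ten normalized scores listed further down this page (the page's 33.7% average covers all 18 benchmarks, so this subset average comes out higher):

```python
# Normalized scores (0-100%) for the ten benchmarks listed on this page;
# the remaining 8 of the 18 are not shown, so the average differs from 33.7%.
scores = [66.5, 60.1, 59.0, 56.6, 55.1, 53.1, 42.7, 40.5, 38.6, 27.7]

average = sum(scores) / len(scores)                     # subset average only
best = max(scores)                                      # 66.5 (HumanEval)
high_performers = sum(1 for s in scores if s >= 80.0)   # 0, as shown above

print(f"average={average:.1f}%, best={best}%, high performers={high_performers}")
```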

Top Categories

math: 40.4%
code: 33.2%
general: 32.8%
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark

HumanEval

Rank #57 of 62
#54 Gemma 3n E2B Instructed LiteRT (Preview): 66.5%
#55 GPT-4: 67.0%
#56 GPT-3.5 Turbo: 68.0%
#57 Gemma 3n E2B Instructed: 66.5%
#58 Phi-3.5-mini-instruct: 62.8%
#59 Gemma 2 27B: 51.8%
#60 Gemma 3 1B: 41.5%

MMLU

Rank #77 of 78
#74 Gemma 3n E2B Instructed LiteRT (Preview): 60.1%
#75 IBM Granite 4.0 Tiny Preview: 60.4%
#76 Llama 3.2 3B Instruct: 63.4%
#77 Gemma 3n E2B Instructed: 60.1%
#78 GPT OSS 120B: 54.8%

Global-MMLU-Lite

Rank #12 of 14
#9 Gemma 3n E2B Instructed LiteRT (Preview): 59.0%
#10 Gemma 3n E4B Instructed LiteRT Preview: 64.5%
#11 Gemma 3n E4B Instructed: 64.5%
#12 Gemma 3n E2B Instructed: 59.0%
#13 Gemma 3 4B: 54.5%
#14 Gemma 3 1B: 34.2%

MBPP

Rank #29 of 31
#26 Gemma 3n E2B Instructed LiteRT (Preview): 56.6%
#27 Gemma 2 27B: 62.6%
#28 Gemma 3 4B: 63.2%
#29 Gemma 3n E2B Instructed: 56.6%
#30 Gemma 2 9B: 52.4%
#31 Gemma 3 1B: 35.2%

Global-MMLU

Rank #4 of 4
#1 Gemma 3n E2B Instructed LiteRT (Preview): 55.1%
#2 Gemma 3n E4B Instructed: 60.3%
#3 Gemma 3n E4B Instructed LiteRT Preview: 60.3%
#4 Gemma 3n E2B Instructed: 55.1%
All Benchmark Results for Gemma 3n E2B Instructed
Complete list of benchmark scores with detailed information
Benchmark | Category | Input | Raw Score | Normalized | Source
HumanEval | code | text | 0.67 | 66.5% | Self-reported
MMLU | general | text | 0.60 | 60.1% | Self-reported
Global-MMLU-Lite | general | text | 0.59 | 59.0% | Self-reported
MBPP | code | text | 0.57 | 56.6% | Self-reported
Global-MMLU | general | text | 0.55 | 55.1% | Self-reported
MGSM | math | text | 0.53 | 53.1% | Self-reported
WMT24++ | general | text | 0.43 | 42.7% | Self-reported
MMLU-Pro | general | text | 0.41 | 40.5% | Self-reported
Include | general | text | 0.39 | 38.6% | Self-reported
HiddenMath | math | text | 0.28 | 27.7% | Self-reported
Showing 1 to 10 of 18 benchmarks
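The results above pair each raw score with its normalized 0-100% value, and the Top Categories card averages those normalized scores per category. A minimal Python sketch of that pipeline, using only the ten benchmarks listed on this page (only the math average can be expected to match the card; the code and general categories also include benchmarks not shown here):

```python
def normalize(raw: float) -> float:
    """Map a raw score onto a 0-100 scale: values <= 1.0 are treated as
    fractions (0.665 -> 66.5), larger values as percentages already on
    the 0-100 scale."""
    return raw * 100 if raw <= 1.0 else raw

# (name, category, raw score) for the ten rows listed on this page.
ROWS = [
    ("HumanEval",        "code",    0.665),
    ("MMLU",             "general", 0.601),
    ("Global-MMLU-Lite", "general", 0.590),
    ("MBPP",             "code",    0.566),
    ("Global-MMLU",      "general", 0.551),
    ("MGSM",             "math",    0.531),
    ("WMT24++",          "general", 0.427),
    ("MMLU-Pro",         "general", 0.405),
    ("Include",          "general", 0.386),
    ("HiddenMath",       "math",    0.277),
]

def category_averages(rows):
    """Average the normalized scores within each category."""
    by_cat: dict[str, list[float]] = {}
    for _name, cat, raw in rows:
        by_cat.setdefault(cat, []).append(normalize(raw))
    return {cat: round(sum(v) / len(v), 1) for cat, v in by_cat.items()}

averages = category_averages(ROWS)
print(averages)  # math comes out to 40.4, matching the Top Categories card
```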