
Gemma 3n E4B Instructed
Multimodal · Zero-eval
#1 OpenAI MMLU · #1 ECLeKTic · #2 Global-MMLU · +2 more
by Google
About
Gemma 3n E4B Instructed is a multimodal language model developed by Google. Across the 18 benchmarks tracked here, its strongest results are HumanEval (75.0%), MGSM (67.0%), and MMLU (64.9%). The model is available through one API provider and, as a multimodal model, can process text, images, and other input formats. It was announced and released on June 26, 2025.
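For quick orientation, here is a minimal sketch of calling the model through Google's google-genai Python SDK. The model ID gemma-3n-e4b-it and the GEMINI_API_KEY variable name are assumptions, since this page does not name the provider's endpoint:

```python
# Minimal text-only call via the google-genai SDK (pip install google-genai).
# Assumptions: the model is served under the ID "gemma-3n-e4b-it" and an API
# key is available in the GEMINI_API_KEY environment variable.
import os

from google import genai

client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

response = client.models.generate_content(
    model="gemma-3n-e4b-it",  # assumed model ID
    contents="Summarize the Gemma 3n model family in two sentences.",
)
print(response.text)
```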
Pricing Range
Input (per 1M tokens): $20.00
Output (per 1M tokens): $40.00
Providers: 1
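As a worked example of the arithmetic these per-token rates imply (the token counts below are illustrative, not from this page):

```python
# Cost estimate at the listed rates: $20.00 per 1M input tokens,
# $40.00 per 1M output tokens. Token counts here are illustrative.
INPUT_PER_M = 20.00
OUTPUT_PER_M = 40.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-1M-token rates."""
    return (
        input_tokens / 1_000_000 * INPUT_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PER_M
    )

# e.g. a 3,000-token prompt with a 500-token completion:
print(f"${request_cost(3_000, 500):.4f}")  # -> $0.0800
```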
Timeline
Announced: Jun 26, 2025
Released: Jun 26, 2025
Knowledge Cutoff: Jun 1, 2024
Specifications
Training Tokens: 11.0T
Capabilities
Multimodal
License & Family
License: Proprietary
Benchmark Performance Overview
Performance metrics and category breakdown
Overall Performance
Benchmarks: 18
Average Score: 42.0%
Best Score: 75.0%
High Performers (80%+): 0
Performance Metrics
Max Context Window: 64.0K
Avg Throughput: 42.1 tok/s
Avg Latency: 0ms
Top Categories
math: 52.4%
general: 41.6%
code: 38.9%
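A minimal sketch of how aggregates like the figures above can be recomputed from normalized scores. The dictionary holds the ten rows shown in the results table further down, so its average differs from the page's 42.0%, which is taken over all 18 benchmarks:

```python
# Recompute overview-style aggregates from normalized scores (0-100%).
# These are the 10 scores listed in the results table below; the page's
# 42.0% average covers all 18 benchmarks, so this subset will differ.
scores = {
    "HumanEval": 75.0, "MGSM": 67.0, "MMLU": 64.9, "Global-MMLU-Lite": 64.5,
    "MBPP": 63.6, "Global-MMLU": 60.3, "Include": 57.2, "MMLU-Pro": 50.6,
    "WMT24++": 50.1, "HiddenMath": 37.7,
}

average = sum(scores.values()) / len(scores)
best = max(scores.values())
high_performers = sum(1 for s in scores.values() if s >= 80.0)

print(f"Average: {average:.1f}%  Best: {best:.1f}%  >=80%: {high_performers}")
```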
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark
HumanEval
Rank #45 of 62
#42 Claude 3 Haiku: 75.9%
#43 Qwen2.5-Omni-7B: 78.7%
#44 Qwen2 7B Instruct: 79.9%
#45 Gemma 3n E4B Instructed: 75.0%
#46 Gemma 3n E4B Instructed LiteRT Preview: 75.0%
#47 Gemini 1.5 Flash: 74.3%
#48 Grok-1.5: 74.1%
MGSM
Rank #23 of 31
#20 Llama 3.2 11B Instruct: 68.9%
#21 GPT-4: 74.5%
#22 Claude 3 Haiku: 75.1%
#23 Gemma 3n E4B Instructed: 67.0%
#24 Phi 4 Mini: 63.9%
#25 Gemma 3n E4B Instructed LiteRT Preview: 60.7%
#26 Phi-3.5-MoE-instruct: 58.7%
MMLU
Rank #72 of 78
#69 Gemma 3n E4B Instructed LiteRT Preview: 64.9%
#70 Ministral 8B Instruct: 65.0%
#71 Granite 3.3 8B Instruct: 65.5%
#72 Gemma 3n E4B Instructed: 64.9%
#73 Granite 3.3 8B Base: 63.9%
#74 Llama 3.2 3B Instruct: 63.4%
#75 IBM Granite 4.0 Tiny Preview: 60.4%
Global-MMLU-Lite
Rank #9 of 14
#6 Gemini Diffusion: 69.1%
#7 Gemma 3 12B: 69.5%
#8 Gemma 3 27B: 75.1%
#9 Gemma 3n E4B Instructed: 64.5%
#10 Gemma 3n E4B Instructed LiteRT Preview: 64.5%
#11 Gemma 3n E2B Instructed LiteRT (Preview): 59.0%
#12 Gemma 3n E2B Instructed: 59.0%
MBPP
Rank #24 of 31
#21 Qwen2 7B Instruct: 67.2%
#22 Llama 4 Scout: 67.8%
#23 Phi-3.5-mini-instruct: 69.6%
#24 Gemma 3n E4B Instructed: 63.6%
#25 Gemma 3n E4B Instructed LiteRT Preview: 63.6%
#26 Gemma 3 4B: 63.2%
#27 Gemma 2 27B: 62.6%
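A minimal sketch of how positions like those above could be derived from a score table by sorting. Note the page's own lists are not strictly score-ordered, so descending-score ranking is an assumption about the method, not a reproduction of it:

```python
# Rank models on one benchmark by sorting scores in descending order.
# Scores are the HumanEval neighbors listed above; ties (75.0 twice)
# take adjacent positions in insertion order, since sorted() is stable.
humaneval = {
    "Qwen2 7B Instruct": 79.9,
    "Qwen2.5-Omni-7B": 78.7,
    "Claude 3 Haiku": 75.9,
    "Gemma 3n E4B Instructed": 75.0,
    "Gemma 3n E4B Instructed LiteRT Preview": 75.0,
    "Gemini 1.5 Flash": 74.3,
    "Grok-1.5": 74.1,
}

ranked = sorted(humaneval.items(), key=lambda kv: kv[1], reverse=True)
for position, (model, score) in enumerate(ranked, start=1):
    print(f"#{position} {model}: {score:.1f}%")
```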
All Benchmark Results for Gemma 3n E4B Instructed
Complete list of benchmark scores with detailed information
| Benchmark | Category | Modality | Raw Score | Normalized | Source |
| HumanEval | code | text | 0.75 | 75.0% | Self-reported |
| MGSM | math | text | 0.67 | 67.0% | Self-reported |
| MMLU | general | text | 0.65 | 64.9% | Self-reported |
| Global-MMLU-Lite | general | text | 0.65 | 64.5% | Self-reported |
| MBPP | code | text | 0.64 | 63.6% | Self-reported |
| Global-MMLU | general | text | 0.60 | 60.3% | Self-reported |
| Include | general | text | 0.57 | 57.2% | Self-reported |
| MMLU-Pro | general | text | 0.51 | 50.6% | Self-reported |
| WMT24++ | general | text | 0.50 | 50.1% | Self-reported |
| HiddenMath | math | text | 0.38 | 37.7% | Self-reported |
Showing 10 of 18 benchmarks.
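For completeness, a small sketch that parses rows in the pipe-delimited layout above into records; the field names are assumed labels for the six columns:

```python
# Parse pipe-delimited benchmark rows into dicts. Field names are assumed
# labels for the six columns: benchmark, category, modality, raw score,
# normalized percentage, and reporting source.
FIELDS = ["benchmark", "category", "modality", "raw", "normalized", "source"]

def parse_row(line: str) -> dict:
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    row = dict(zip(FIELDS, cells))
    row["raw"] = float(row["raw"])
    row["normalized"] = float(row["normalized"].rstrip("%"))
    return row

row = parse_row("| HumanEval | code | text | 0.75 | 75.0% | Self-reported |")
print(row["benchmark"], row["normalized"])  # HumanEval 75.0
```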