Model Comparison

Comprehensive side-by-side analysis of model capabilities and performance

Google

Gemini 1.5 Pro

Gemini 1.5 Pro is a multimodal language model developed by Google. It achieves strong performance, with an average score of 72.6% across 23 benchmarks, and does particularly well on XSTest (98.8%), HellaSwag (93.3%), and GSM8k (90.8%). With a 2.1M token context window, it can handle extensive documents and complex multi-turn conversations. The model is available through 1 API provider. As a multimodal model, it can process and understand text, images, and other input formats. It was released in 2024.

OpenAI

GPT-4.1 mini

GPT-4.1 mini is a multimodal language model developed by OpenAI. It shows competitive results across 29 benchmarks and does particularly well on CharXiv-D (88.4%), MMLU (87.5%), and IFEval (84.1%). With a 1.1M token context window, it can handle extensive documents and complex multi-turn conversations. The model is available through 2 API providers. As a multimodal model, it can process and understand text, images, and other input formats. It was released in 2025.

Release Dates

Google Gemini 1.5 Pro: 2024-05-01
OpenAI GPT-4.1 mini: 2025-04-14 (11 months newer)

Pricing Comparison

Cost per million tokens (USD)

Google Gemini 1.5 Pro
Input: $2.50
Output: $10.00

OpenAI GPT-4.1 mini ($10.50 cheaper per million tokens, input and output rates combined)
Input: $0.40
Output: $1.60
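
To make the rate difference concrete, here is a minimal Python sketch that estimates per-request cost from the per-million-token rates quoted above. The dictionary keys and the 10k-input / 2k-output example workload are illustrative and not tied to any provider SDK.

```python
# Per-million-token rates (USD) quoted in the pricing comparison above.
PRICES = {
    "Gemini 1.5 Pro": {"input": 2.50, "output": 10.00},
    "GPT-4.1 mini":   {"input": 0.40, "output": 1.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request from per-million-token rates."""
    p = PRICES[model]
    return input_tokens / 1e6 * p["input"] + output_tokens / 1e6 * p["output"]

# Example: a 10,000-token prompt with a 2,000-token completion.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")

# The "$10.50 cheaper" figure is the combined (input + output) rate difference:
combined = {m: p["input"] + p["output"] for m, p in PRICES.items()}
print(f"${combined['Gemini 1.5 Pro'] - combined['GPT-4.1 mini']:.2f} per million tokens")
```

At these rates the example request costs roughly $0.045 on Gemini 1.5 Pro versus roughly $0.007 on GPT-4.1 mini.
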

Performance Metrics

Context window and performance specifications

Google Gemini 1.5 Pro
Max Context: 2.1M tokens (larger context)

OpenAI GPT-4.1 mini
Max Context: 1.1M tokens
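
As a rough illustration of what those limits mean in practice, the sketch below checks whether a document would fit in each context window. The ~4 characters-per-token ratio is a common rule of thumb for English text and an assumption here; actual token counts depend on each model's tokenizer.

```python
# Context limits (tokens) from the specifications above.
CONTEXT_LIMITS = {"Gemini 1.5 Pro": 2_100_000, "GPT-4.1 mini": 1_100_000}

def fits(char_count: int, model: str, chars_per_token: float = 4.0) -> bool:
    """Rough fit check using an assumed characters-per-token ratio."""
    estimated_tokens = char_count / chars_per_token
    return estimated_tokens <= CONTEXT_LIMITS[model]

# Example: a 6-million-character corpus (~1.5M tokens at 4 chars/token).
for model in CONTEXT_LIMITS:
    print(model, "fits" if fits(6_000_000, model) else "does not fit")
```
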

Average performance across 4 common benchmarks

Google Gemini 1.5 Pro
Average Score: 69.8%

OpenAI GPT-4.1 mini
Average Score: 74.6% (+4.8%)

Performance comparison across key benchmark categories

Google Gemini 1.5 Pro
Code: 74.5%
Math: 74.9% (+1.8%)
Vision: 72.3%
General: 68.9% (+23.0%)

OpenAI GPT-4.1 mini
Code: 84.1% (+9.6%)
Math: 73.1%
Vision: 72.7% (+0.5%)
General: 45.9%
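
The quoted deltas are percentage-point gaps. As a quick sanity check, this sketch recomputes them from the rounded scores listed above; recomputed gaps can differ from the quoted ones by about 0.1 point (e.g. vision), since the page's deltas appear to be derived from unrounded values.

```python
# Per-category scores (percent) from the comparison above.
SCORES = {
    "Gemini 1.5 Pro": {"code": 74.5, "math": 74.9, "vision": 72.3, "general": 68.9},
    "GPT-4.1 mini":   {"code": 84.1, "math": 73.1, "vision": 72.7, "general": 45.9},
}

# Report which model leads each category and by how many percentage points.
for category in SCORES["Gemini 1.5 Pro"]:
    gemini = SCORES["Gemini 1.5 Pro"][category]
    gpt_mini = SCORES["GPT-4.1 mini"][category]
    leader = "GPT-4.1 mini" if gpt_mini > gemini else "Gemini 1.5 Pro"
    print(f"{category}: {leader} leads by {abs(gpt_mini - gemini):.1f} points")
```
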
Benchmark Scores - Detailed View

Side-by-side comparison of all benchmark scores
Knowledge Cutoff

Training data recency comparison

Google Gemini 1.5 Pro: 2023-11-01
OpenAI GPT-4.1 mini: 2024-05-31

A more recent knowledge cutoff means awareness of newer technologies and frameworks.

Provider Availability & Performance

Available providers and their performance metrics

Google Gemini 1.5 Pro (1 provider)

Google: throughput 85 tok/s, latency 0.7 ms

OpenAI GPT-4.1 mini (2 providers)

ZeroEval: throughput 150 tok/s, latency 5 ms
OpenAI: throughput 150 tok/s, latency 5 ms
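
For a rough sense of what those figures imply, here is a minimal sketch that estimates end-to-end generation time from throughput and latency. It assumes the listed latency is time-to-first-token (in milliseconds, as reported) and that throughput holds steady for the whole completion; both are simplifying assumptions, and real-world numbers vary with load and network conditions.

```python
# Provider figures from the table above (latency converted from ms to seconds).
PROVIDERS = {
    "Gemini 1.5 Pro via Google": {"throughput_tok_s": 85,  "latency_s": 0.0007},
    "GPT-4.1 mini via OpenAI":   {"throughput_tok_s": 150, "latency_s": 0.005},
    "GPT-4.1 mini via ZeroEval": {"throughput_tok_s": 150, "latency_s": 0.005},
}

def generation_time(provider: str, output_tokens: int) -> float:
    """Estimated seconds to produce `output_tokens`, ignoring network and queuing."""
    p = PROVIDERS[provider]
    return p["latency_s"] + output_tokens / p["throughput_tok_s"]

for name in PROVIDERS:
    print(f"{name}: ~{generation_time(name, 1_000):.1f} s for 1,000 output tokens")
```

At these rates the per-token throughput dominates: roughly 11.8 s for 1,000 output tokens on the Google provider versus about 6.7 s on either GPT-4.1 mini provider.
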
Summary

Google Gemini 1.5 Pro
Avg Score: 69.8%
Providers: 1

OpenAI GPT-4.1 mini
Avg Score: 74.6% (+4.8%)
Providers: 2