Model Comparison

Comprehensive side-by-side analysis of model capabilities and performance

Gemini 1.5 Pro (Google)

Gemini 1.5 Pro is a multimodal language model developed by Google. It achieves strong performance with an average score of 72.6% across 23 benchmarks, and it excels particularly in XSTest (98.8%), HellaSwag (93.3%), and GSM8k (90.8%). With a 2.1M-token context window, it can handle extensive documents and complex multi-turn conversations. The model is available through one API provider. As a multimodal model, it can process and understand text, images, and other input formats. Released in 2024, it represents Google's latest advancement in AI technology.

GPT-4 (OpenAI)

GPT-4 is a multimodal language model developed by OpenAI. It achieves strong performance with an average score of 77.7% across 12 benchmarks, and it excels particularly in the AI2 Reasoning Challenge (ARC) (96.3%), HellaSwag (95.3%), and the Uniform Bar Exam (90.0%). The model shows particular specialization in reasoning tasks, with an average performance of 93.0%. It is available through two API providers. As a multimodal model, it can process and understand text, images, and other input formats.

Release Dates

GPT-4 (OpenAI): 2023-06-13
Gemini 1.5 Pro (Google): 2024-05-01

Gemini 1.5 Pro is roughly 10 months newer.

Pricing Comparison

Cost per million tokens (USD)

Gemini 1.5 Pro (Google): Input $2.50, Output $10.00
GPT-4 (OpenAI): Input $30.00, Output $60.00

Gemini 1.5 Pro is $77.50 cheaper per million tokens (input and output prices combined).
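To see how the per-million-token prices translate into request cost, here is a minimal sketch in Python. The request size (10,000 input and 2,000 output tokens) is a hypothetical example; only the prices come from the table above.

# Cost sketch using the per-million-token prices listed above.
# The 10,000-input / 2,000-output request size is a hypothetical example.
PRICES_PER_MILLION = {  # USD per 1M tokens: (input, output)
    "Gemini 1.5 Pro": (2.50, 10.00),
    "GPT-4": (30.00, 60.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request under simple per-million-token pricing."""
    in_price, out_price = PRICES_PER_MILLION[model]
    return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

for model in PRICES_PER_MILLION:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.3f}")
# Gemini 1.5 Pro: $0.045   (0.01 * 2.50 + 0.002 * 10.00)
# GPT-4:          $0.420   (0.01 * 30.00 + 0.002 * 60.00)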

Performance Metrics

Context window and performance specifications

Gemini 1.5 Pro (Google): Max Context 2.1M tokens (larger context)
GPT-4 (OpenAI): Max Context 65.5K tokens
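To put these context sizes in perspective, the sketch below estimates whether a document fits in each model's window using a rough rule of thumb of about 4 characters per token; the ratio and the 1 MB example document are assumptions, while the context sizes come from the figures above.

# Rough context-fit check using the context sizes listed above.
# The ~4 characters-per-token ratio is a rule of thumb, not an exact tokenizer.
CONTEXT_TOKENS = {
    "Gemini 1.5 Pro": 2_100_000,  # "2.1M"
    "GPT-4": 65_500,              # "65.5K"
}
CHARS_PER_TOKEN = 4  # assumption

def fits_in_context(model: str, document_chars: int) -> bool:
    estimated_tokens = document_chars / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_TOKENS[model]

# Example: a ~1 MB plain-text file, i.e. roughly 1,000,000 characters (~250k tokens)
for model in CONTEXT_TOKENS:
    print(f"{model}: fits={fits_in_context(model, 1_000_000)}")
# Gemini 1.5 Pro: fits=True   GPT-4: fits=False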

Average performance across 7 common benchmarks

Gemini 1.5 Pro (Google): Average Score 81.6% (+12.8%)
GPT-4 (OpenAI): Average Score 68.8%
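The averages above are taken only over benchmarks both models report; a minimal sketch of that calculation follows. The per-benchmark numbers are placeholders chosen so the averages match the reported 81.6% and 68.8%, not the actual underlying scores.

# Sketch of how a common-benchmark average and the +12.8% delta are derived.
# The per-benchmark scores below are placeholders, not real leaderboard values.
scores = {
    "Gemini 1.5 Pro": {"benchmark_a": 90.0, "benchmark_b": 80.0, "benchmark_c": 74.8},
    "GPT-4":          {"benchmark_a": 75.0, "benchmark_b": 70.0, "benchmark_c": 61.4},
}

def common_average(model_a: str, model_b: str) -> tuple[float, float]:
    common = scores[model_a].keys() & scores[model_b].keys()  # shared benchmarks only
    avg = lambda m: sum(scores[m][k] for k in common) / len(common)
    return avg(model_a), avg(model_b)

gemini_avg, gpt4_avg = common_average("Gemini 1.5 Pro", "GPT-4")
print(f"{gemini_avg:.1f}% vs {gpt4_avg:.1f}% (delta {gemini_avg - gpt4_avg:+.1f} points)")
# -> 81.6% vs 68.8% (delta +12.8 points)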

Performance comparison across key benchmark categories

Gemini 1.5 Pro (Google):
reasoning: 93.3% (+0.3%)
general: 68.9%
math: 74.9% (+6.4%)
code: 74.5% (+7.5%)

GPT-4 (OpenAI):
reasoning: 93.0%
general: 76.2% (+7.3%)
math: 68.5%
code: 67.0%
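Category figures like these are typically the mean of the individual benchmark scores assigned to each category. The sketch below shows that grouping step; the benchmark names, category assignments, and scores are hypothetical examples, not the data behind the numbers above.

# Sketch: per-category averages as the mean of the benchmarks in each category.
# Benchmark names, category assignments, and scores are hypothetical examples.
from collections import defaultdict
from statistics import mean

benchmark_results = [
    # (category, benchmark, score)
    ("reasoning", "benchmark_r1", 96.0),
    ("reasoning", "benchmark_r2", 90.6),
    ("math",      "benchmark_m1", 74.9),
    ("code",      "benchmark_c1", 74.5),
]

by_category = defaultdict(list)
for category, _name, score in benchmark_results:
    by_category[category].append(score)

for category, values in by_category.items():
    print(f"{category}: {mean(values):.1f}%")
# reasoning: 93.3%   math: 74.9%   code: 74.5%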
Benchmark Scores - Detailed View

Side-by-side comparison of all benchmark scores

Knowledge Cutoff

Training data recency comparison

GPT-4 (OpenAI): 2022-12-31
Gemini 1.5 Pro (Google): 2023-11-01

A more recent knowledge cutoff means awareness of newer technologies and frameworks.

Provider Availability & Performance

Available providers and their performance metrics

Gemini 1.5 Pro (Google), available from 1 provider:
Google: Throughput 85 tok/s, Latency 0.7 ms

GPT-4 (OpenAI), available from 2 providers:
Azure: Throughput 104 tok/s, Latency 0.3 ms
OpenAI: Throughput 100 tok/s, Latency 0.5 ms
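As a rough way to compare these provider figures, the sketch below estimates end-to-end response time as the reported latency plus generation time at the listed throughput. The 500-token output length is a hypothetical example, and the latency values are taken as reported above (converted from milliseconds to seconds).

# Rough end-to-end estimate: reported latency + output_tokens / throughput.
# The 500-token output length is a hypothetical example; throughput and latency
# values come from the provider table above (latency converted from ms to s).
providers = {
    # provider: (throughput in tokens/s, latency in seconds)
    "Google (Gemini 1.5 Pro)": (85.0, 0.0007),
    "Azure (GPT-4)": (104.0, 0.0003),
    "OpenAI (GPT-4)": (100.0, 0.0005),
}

def estimated_response_seconds(provider: str, output_tokens: int) -> float:
    throughput, latency = providers[provider]
    return latency + output_tokens / throughput

for name in providers:
    print(f"{name}: ~{estimated_response_seconds(name, 500):.2f} s for 500 output tokens")
# Google: ~5.88 s   Azure: ~4.81 s   OpenAI: ~5.00 s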
Summary

Gemini 1.5 Pro (Google): Avg Score 81.6% (+12.8%), Providers: 1
GPT-4 (OpenAI): Avg Score 68.8%, Providers: 2