Model Comparison

Comprehensive side-by-side analysis of model capabilities and performance

Claude 3.5 Haiku (Anthropic)

Claude 3.5 Haiku is a language model developed by Anthropic. It achieves strong results, averaging 60.8% across 9 benchmarks, with particular strength in HumanEval (88.1%), MGSM (85.6%), and DROP (83.1%). It supports a 400K-token context window for handling large documents and is available through 3 API providers. It was released in 2024.
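For reference, a minimal sketch of calling the model through Anthropic's Messages API with the official Python SDK. The model ID, prompt, and token limit below are illustrative assumptions, and an API key must be available in the environment.

```python
# Minimal sketch: one request to Claude 3.5 Haiku via Anthropic's Messages API.
# Assumes the `anthropic` SDK is installed and ANTHROPIC_API_KEY is set;
# the model ID matches the 2024-10-22 release but should be verified against provider docs.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-haiku-20241022",  # assumed model ID
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize the key risks in this contract: ..."}],
)
print(response.content[0].text)
```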

GPT-4o (OpenAI)

GPT-4o is a multimodal language model developed by OpenAI. It shows competitive results across 32 benchmarks, with particular strength in AI2D (94.2%), DocVQA (92.8%), and ChartQA (85.7%), and it is strongest on vision tasks, where it averages 82.5%. It supports a 144K-token context window for handling large documents and is available through 3 API providers. As a multimodal model, it can process text, images, and other input formats. It was released in 2024.
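Because GPT-4o accepts mixed text-and-image input, a single request can bundle both content types. A minimal sketch using OpenAI's Python SDK follows; the prompt and image URL are placeholders, not values from this comparison.

```python
# Minimal sketch: a text + image request to GPT-4o via OpenAI's Chat Completions API.
# Assumes the `openai` SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What trend does this chart show?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},  # placeholder image
        ],
    }],
)
print(response.choices[0].message.content)
```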

Release Dates

GPT-4o (OpenAI): 2024-08-06
Claude 3.5 Haiku (Anthropic): 2024-10-22 (about 2.5 months newer)

Pricing Comparison

Cost per million tokens (USD)

Claude 3.5 Haiku (Anthropic): input $0.80, output $4.00 ($7.70 cheaper than GPT-4o on combined input + output)
GPT-4o (OpenAI): input $2.50, output $10.00
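Per-request costs follow directly from these per-million-token prices. The sketch below recomputes the gap for an assumed request of 10K input tokens and 1K output tokens; the workload size is illustrative, not a figure from this comparison.

```python
# Back-of-the-envelope cost comparison using the per-million-token prices listed above.
PRICES = {  # USD per million tokens: (input, output)
    "claude-3.5-haiku": (0.80, 4.00),
    "gpt-4o": (2.50, 10.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request."""
    in_price, out_price = PRICES[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f} per request")
# claude-3.5-haiku: $0.0120 per request
# gpt-4o: $0.0350 per request
```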

Performance Metrics

Context window and performance specifications

Claude 3.5 Haiku (Anthropic): max context 400.0K tokens (larger context)
GPT-4o (OpenAI): max context 144.4K tokens
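A quick sketch of checking whether a prompt fits within these limits. It uses a crude characters-to-tokens heuristic rather than either provider's real tokenizer, so treat the result as a rough guide only.

```python
# Rough context-window check using the limits quoted above.
CONTEXT_LIMITS = {  # tokens
    "claude-3.5-haiku": 400_000,
    "gpt-4o": 144_400,
}

def fits_in_context(model: str, text: str, reserved_output: int = 4_000) -> bool:
    """Approximate check: assumes ~4 characters per token, not a real tokenizer."""
    approx_tokens = len(text) // 4
    return approx_tokens + reserved_output <= CONTEXT_LIMITS[model]

document = "lorem ipsum " * 60_000  # ~720K characters, roughly 180K tokens
print(fits_in_context("gpt-4o", document))            # False
print(fits_in_context("claude-3.5-haiku", document))  # True
```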

Average performance across 5 common benchmarks

Claude 3.5 Haiku (Anthropic): average score 44.2%
GPT-4o (OpenAI): average score 56.2% (+12.0 percentage points)

Performance comparison across key benchmark categories

Category   Claude 3.5 Haiku   GPT-4o   Advantage
Code       88.1%              81.0%    Claude 3.5 Haiku by 7.1 pts
Math       77.5%              61.4%    Claude 3.5 Haiku by 16.1 pts
General    57.6%              52.1%    Claude 3.5 Haiku by 5.5 pts
Agents     36.9%              51.5%    GPT-4o by 14.6 pts
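The per-category gaps above are simple differences of the raw scores; the short sketch below recomputes them ("pts" meaning percentage points).

```python
# Recompute the category gaps shown in the table above from the raw scores.
scores = {  # raw category scores from this comparison
    "code":    {"claude-3.5-haiku": 88.1, "gpt-4o": 81.0},
    "math":    {"claude-3.5-haiku": 77.5, "gpt-4o": 61.4},
    "general": {"claude-3.5-haiku": 57.6, "gpt-4o": 52.1},
    "agents":  {"claude-3.5-haiku": 36.9, "gpt-4o": 51.5},
}

for category, s in scores.items():
    gap = s["claude-3.5-haiku"] - s["gpt-4o"]
    leader = "claude-3.5-haiku" if gap > 0 else "gpt-4o"
    print(f"{category:8s} leader: {leader:16s} by {abs(gap):.1f} pts")
```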

Provider Availability & Performance

Available providers and their performance metrics

Claude 3.5 Haiku (Anthropic): 3 providers
  Google:    42 tok/s throughput, 0.4 s latency
  Bedrock:   104 tok/s throughput, 0.5 s latency
  Anthropic: 100 tok/s throughput, 0.3 s latency

GPT-4o (OpenAI): 3 providers
  Azure:    99 tok/s throughput, 0.53 s latency
  ZeroEval: 99 tok/s throughput, 0.53 s latency
  OpenAI:   132 tok/s throughput, 0.5 s latency
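Throughput and latency together give a rough end-to-end estimate for a response of a given length. The sketch below assumes latency is time to first token and throughput is steady-state decode speed; both are our assumptions, and the providers' measurement methodology may differ.

```python
# Rough end-to-end time estimate from the first-party provider metrics above.
providers = {
    ("claude-3.5-haiku", "Anthropic"): {"throughput_tok_s": 100, "latency_s": 0.3},
    ("gpt-4o", "OpenAI"):              {"throughput_tok_s": 132, "latency_s": 0.5},
}

def estimated_seconds(metrics: dict, output_tokens: int) -> float:
    """Time to first token plus decode time for the requested output length."""
    return metrics["latency_s"] + output_tokens / metrics["throughput_tok_s"]

for (model, provider), m in providers.items():
    print(f"{model} via {provider}: ~{estimated_seconds(m, 500):.1f} s for 500 output tokens")
# claude-3.5-haiku via Anthropic: ~5.3 s for 500 output tokens
# gpt-4o via OpenAI: ~4.3 s for 500 output tokens
```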
Summary

Claude 3.5 Haiku (Anthropic): average score 44.2%, 3 providers
GPT-4o (OpenAI): average score 56.2% (+12.0 percentage points), 3 providers