Model Comparison

Comprehensive side-by-side analysis of model capabilities and performance

Claude 3.5 Haiku (Anthropic)

Claude 3.5 Haiku is a language model developed by Anthropic. It achieves strong performance, with an average score of 60.8% across 9 benchmarks, and does particularly well on HumanEval (88.1%), MGSM (85.6%), and DROP (83.1%). It supports a 400K-token context window for handling large documents and is available through 3 API providers. Released in 2024, it represents Anthropic's latest advancement in AI technology.

GPT-4.1 nano (OpenAI)

GPT-4.1 nano is a multimodal language model developed by OpenAI. It shows competitive results across 25 benchmarks and does particularly well on MMLU (80.1%), IFEval (74.5%), and CharXiv-D (73.9%). With a 1.1M-token context window, it can handle extensive documents and complex multi-turn conversations. The model is available through 1 API provider. As a multimodal model, it can process and understand text, images, and other input formats. Released in 2025, it represents OpenAI's latest advancement in AI technology.
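Both models are reachable through their vendors' standard Python SDKs. Below is a minimal sketch of calling each one; the model identifier strings and the prompt are assumptions, so check each provider's current model listing for the exact IDs available to your account.

```python
from anthropic import Anthropic
from openai import OpenAI

# Assumed model identifiers; verify against each provider's current model list.
CLAUDE_MODEL = "claude-3-5-haiku-20241022"
GPT_MODEL = "gpt-4.1-nano"

prompt = "In one sentence, what is a context window?"

# Claude 3.5 Haiku via the Anthropic SDK (reads ANTHROPIC_API_KEY from the environment).
claude = Anthropic()
claude_reply = claude.messages.create(
    model=CLAUDE_MODEL,
    max_tokens=128,
    messages=[{"role": "user", "content": prompt}],
)
print(claude_reply.content[0].text)

# GPT-4.1 nano via the OpenAI SDK (reads OPENAI_API_KEY from the environment).
openai_client = OpenAI()
gpt_reply = openai_client.chat.completions.create(
    model=GPT_MODEL,
    messages=[{"role": "user", "content": prompt}],
)
print(gpt_reply.choices[0].message.content)
```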

Release Dates

Claude 3.5 Haiku (Anthropic): 2024-10-22
GPT-4.1 nano (OpenAI): 2025-04-14 (nearly 6 months newer)

Pricing Comparison

Cost per million tokens (USD)

Claude 3.5 Haiku (Anthropic)
Input: $0.80
Output: $4.00

GPT-4.1 nano (OpenAI)
Input: $0.10
Output: $0.40

GPT-4.1 nano is $4.30 cheaper per million tokens (input and output combined: $0.70 less on input plus $3.60 less on output).
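As a rough illustration of how these per-token prices translate into request cost, the following sketch uses the prices from the table above; the token counts in the example request are hypothetical.

```python
# Prices in USD per million tokens, taken from the comparison above.
PRICES = {
    "claude-3.5-haiku": {"input": 0.80, "output": 4.00},
    "gpt-4.1-nano":     {"input": 0.10, "output": 0.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request from per-million-token prices."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Hypothetical request: 20,000 input tokens, 1,000 output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 20_000, 1_000):.4f}")

# The "$4.30 cheaper" figure is simply the sum of the per-million price gaps.
gap = (0.80 - 0.10) + (4.00 - 0.40)
print(f"Combined price gap per million tokens: ${gap:.2f}")
```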

Performance Metrics

Context window and performance specifications

Claude 3.5 Haiku (Anthropic)
Max Context: 400.0K tokens

GPT-4.1 nano (OpenAI)
Max Context: 1.1M tokens (larger context window)
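A minimal sketch of how these context limits might be used in practice, assuming the common rough heuristic of about four characters per token; the exact ratio depends on the tokenizer, so treat the estimate as indicative only.

```python
# Context limits in tokens, from the table above.
CONTEXT_LIMITS = {"claude-3.5-haiku": 400_000, "gpt-4.1-nano": 1_100_000}

def rough_token_count(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (heuristic, not exact)."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, model: str, reserved_for_output: int = 4_000) -> bool:
    """Check whether a prompt plausibly fits, leaving room for the response."""
    return rough_token_count(text) + reserved_for_output <= CONTEXT_LIMITS[model]

document = "..." * 100_000  # placeholder for a large document
for model in CONTEXT_LIMITS:
    print(model, fits_in_context(document, model))
```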

Average performance across 3 common benchmarks

Claude 3.5 Haiku (Anthropic)
Average Score: 38.5% (+9.5%)

GPT-4.1 nano (OpenAI)
Average Score: 29.0%
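This average is computed only over benchmarks both models report, which is why Claude 3.5 Haiku's 38.5% here differs from its 60.8% average across all 9 of its benchmarks. A small sketch of that shared-benchmark averaging step, using placeholder names and scores rather than the real underlying data:

```python
# Placeholder benchmark tables; the real comparison uses the models' reported scores.
scores_a = {"bench_1": 83.1, "bench_2": 40.2, "bench_3": 55.0, "bench_4": 12.0}
scores_b = {"bench_1": 70.0, "bench_2": 35.5, "bench_3": 48.0, "bench_5": 90.0}

shared = sorted(scores_a.keys() & scores_b.keys())  # benchmarks both models report
avg_a = sum(scores_a[b] for b in shared) / len(shared)
avg_b = sum(scores_b[b] for b in shared) / len(shared)
print(f"Common benchmarks: {shared}")
print(f"Model A average: {avg_a:.1f}%, Model B average: {avg_b:.1f}%")
```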

Performance comparison across key benchmark categories

Claude 3.5 Haiku (Anthropic)
code: 88.1% (+13.6%)
math: 77.5% (+21.3%)
general: 57.6% (+25.2%)
agents: 36.9% (+18.6%)

GPT-4.1 nano (OpenAI)
code: 74.5%
math: 56.2%
general: 32.4%
agents: 18.3%
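The per-category deltas above are simple differences between the two models' category scores. A short sketch of that computation using the figures from this comparison:

```python
# Category scores (%) taken from the comparison above.
claude_35_haiku = {"code": 88.1, "math": 77.5, "general": 57.6, "agents": 36.9}
gpt_41_nano     = {"code": 74.5, "math": 56.2, "general": 32.4, "agents": 18.3}

for category in claude_35_haiku:
    delta = claude_35_haiku[category] - gpt_41_nano[category]
    leader = "Claude 3.5 Haiku" if delta > 0 else "GPT-4.1 nano"
    print(f"{category:>8}: {delta:+.1f} points ({leader} leads)")
```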
Benchmark Scores - Detailed View

Side-by-side comparison of all benchmark scores

Knowledge Cutoff

Training data recency comparison

GPT-4.1 nano (OpenAI): 2024-05-31

A more recent knowledge cutoff means awareness of newer technologies and frameworks.

Provider Availability & Performance

Available providers and their performance metrics

Claude 3.5 Haiku (Anthropic): 3 providers

Google
Throughput: 42 tok/s
Latency: 0.4 s

Bedrock
Throughput: 104 tok/s
Latency: 0.5 s

Anthropic
Throughput: 100 tok/s
Latency: 0.3 s

GPT-4.1 nano (OpenAI): 1 provider

OpenAI
Throughput: 200 tok/s
Latency: 2 s
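Throughput and latency combine into a rough end-to-end estimate: time to first token plus generated tokens divided by tokens per second. The sketch below uses the provider figures above and treats latency as seconds to first token, which is an assumption about how the metric is defined.

```python
# Provider metrics from the table above: (throughput in tok/s, latency in s).
PROVIDERS = {
    "Claude 3.5 Haiku @ Google":    (42, 0.4),
    "Claude 3.5 Haiku @ Bedrock":   (104, 0.5),
    "Claude 3.5 Haiku @ Anthropic": (100, 0.3),
    "GPT-4.1 nano @ OpenAI":        (200, 2.0),
}

def estimated_response_time(throughput_tok_s: float, latency_s: float, output_tokens: int) -> float:
    """Rough end-to-end time: time to first token + generation time."""
    return latency_s + output_tokens / throughput_tok_s

for name, (throughput, latency) in PROVIDERS.items():
    print(f"{name}: ~{estimated_response_time(throughput, latency, 500):.1f} s for 500 output tokens")
```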
Summary

Claude 3.5 Haiku (Anthropic)
Avg Score: 38.5% (+9.5%)
Providers: 3

GPT-4.1 nano (OpenAI)
Avg Score: 29.0%
Providers: 1