Model Comparison

Comprehensive side-by-side analysis of model capabilities and performance

Claude 3.5 Haiku (Anthropic)

Claude 3.5 Haiku is a language model developed by Anthropic. It achieves strong performance, with an average score of 60.8% across 9 benchmarks, and excels particularly in HumanEval (88.1%), MGSM (85.6%), and DROP (83.1%). It supports a 400K-token context window for handling large documents and is available through 3 API providers. Released in 2024, it represents Anthropic's latest advancement in AI technology.
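One of the 3 providers listed below is Anthropic's own API. A minimal sketch of calling the model with the official `anthropic` Python SDK follows; the dated model identifier is an assumption based on the 2024-10-22 release date, and an `ANTHROPIC_API_KEY` environment variable is assumed to be set.

```python
# Minimal sketch: calling Claude 3.5 Haiku via the Anthropic Python SDK.
# The model identifier below is an assumption based on the 2024-10-22 release date.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-haiku-20241022",  # assumed model id; check the provider's model list
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize the trade-offs between a small, fast model and a larger one."}
    ],
)

print(response.content[0].text)
```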

Llama 4 Maverick (Meta)

Llama 4 Maverick is a multimodal language model developed by Meta. It achieves strong performance, with an average score of 71.8% across 13 benchmarks, and excels particularly in DocVQA (94.4%), MGSM (92.3%), and ChartQA (90.0%). It is especially strong on vision tasks, with an average performance of 75.8%. With a 2.0M-token context window, it can handle extensive documents and complex multi-turn conversations, and it is available through 7 API providers. As a multimodal model, it can process text, images, and other input formats. Released in 2025, it represents Meta's latest advancement in AI technology.
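Because Llama 4 Maverick is served by third-party providers (listed below), a common access pattern is an OpenAI-compatible chat endpoint. The sketch below assumes a Together-style base URL, a `TOGETHER_API_KEY` environment variable, and a plausible model identifier; all three are assumptions, so check your provider's catalog. It sends text plus an image to exercise the multimodal input path.

```python
# Sketch: multimodal request to Llama 4 Maverick through an OpenAI-compatible provider.
# The base URL, API key variable, and model name are assumptions; substitute your provider's values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",   # assumed provider endpoint
    api_key=os.environ["TOGETHER_API_KEY"],   # assumed environment variable
)

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",  # assumed model id
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this chart show?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```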

Release Dates

Claude 3.5 Haiku (Anthropic): 2024-10-22
Llama 4 Maverick (Meta): 2025-04-05 (5 months newer)

Pricing Comparison

Cost per million tokens (USD)

Claude 3.5 Haiku (Anthropic)
  Input: $0.80
  Output: $4.00

Llama 4 Maverick (Meta)
  Input: $0.17
  Output: $0.60
  ($4.03 cheaper on combined input + output price per 1M tokens)
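The per-token prices above make request-cost estimates straightforward, and the "$4.03 cheaper" figure is simply the difference in combined input + output price per million tokens. A small sketch of that arithmetic; the token counts in the example workload are illustrative only.

```python
# Estimate request cost from the per-million-token prices listed above (USD).
PRICES = {
    "claude-3.5-haiku": {"input": 0.80, "output": 4.00},
    "llama-4-maverick": {"input": 0.17, "output": 0.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return input_tokens / 1e6 * p["input"] + output_tokens / 1e6 * p["output"]

# Illustrative workload: 20K input tokens, 2K output tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 20_000, 2_000):.4f}")

# Combined price gap per 1M tokens: (0.80 + 4.00) - (0.17 + 0.60) = 4.03
diff = sum(PRICES["claude-3.5-haiku"].values()) - sum(PRICES["llama-4-maverick"].values())
print(f"Combined price difference: ${diff:.2f} per 1M tokens")
```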

Performance Metrics

Context window and performance specifications

Claude 3.5 Haiku (Anthropic)
  Max Context: 400.0K tokens

Llama 4 Maverick (Meta)
  Max Context: 2.0M tokens (larger context)
  Parameters: 400.0B
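A quick way to check whether a document fits in either context window is a rough token estimate. The sketch below uses the common ~4 characters-per-token heuristic for English text, which is an approximation; a real tokenizer (e.g. `tiktoken` or the provider's own) gives exact counts.

```python
# Rough check: does a document fit in a given context window?
# The 4-chars-per-token ratio is a heuristic for English text, not an exact count.
CONTEXT_WINDOWS = {
    "claude-3.5-haiku": 400_000,
    "llama-4-maverick": 2_000_000,
}

def estimated_tokens(text: str) -> int:
    return len(text) // 4  # heuristic: ~4 characters per token

def fits(text: str, model: str, reserve_for_output: int = 4_096) -> bool:
    return estimated_tokens(text) + reserve_for_output <= CONTEXT_WINDOWS[model]

doc = "example text " * 100_000  # placeholder document
for model, window in CONTEXT_WINDOWS.items():
    print(f"{model}: ~{estimated_tokens(doc):,} tokens, fits={fits(doc, model)} (window {window:,})")
```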

Average performance across 4 common benchmarks

Claude 3.5 Haiku (Anthropic)
  Average Score: 65.4%

Llama 4 Maverick (Meta)
  Average Score: 76.0% (+10.6%)

Performance comparison across key benchmark categories

Claude 3.5 Haiku (Anthropic)
  code: 88.1% (+27.6%)
  math: 77.5% (+1.8%)
  general: 57.6%

Llama 4 Maverick (Meta)
  code: 60.5%
  math: 75.7%
  general: 71.5% (+13.9%)
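The "+X%" figures above are percentage-point differences between the two models' category scores, not relative improvements. A short sketch reproducing them from the listed numbers:

```python
# Reproduce the per-category deltas as percentage-point differences.
scores = {
    "claude-3.5-haiku": {"code": 88.1, "math": 77.5, "general": 57.6},
    "llama-4-maverick": {"code": 60.5, "math": 75.7, "general": 71.5},
}

for category in scores["claude-3.5-haiku"]:
    delta = scores["claude-3.5-haiku"][category] - scores["llama-4-maverick"][category]
    leader = "claude-3.5-haiku" if delta > 0 else "llama-4-maverick"
    print(f"{category}: {leader} leads by {abs(delta):.1f} points")
# code: Claude leads by 27.6; math: Claude leads by 1.8; general: Llama leads by 13.9
```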
Benchmark Scores - Detailed View

Side-by-side comparison of all benchmark scores

Provider Availability & Performance

Available providers and their performance metrics

Claude 3.5 Haiku (Anthropic) - 3 providers

  Google - Throughput: 42 tok/s, Latency: 0.4ms
  Bedrock - Throughput: 104 tok/s, Latency: 0.5ms
  Anthropic - Throughput: 100 tok/s, Latency: 0.3ms

Llama 4 Maverick (Meta) - 7 providers

  Sambanova - Throughput: 638.7 tok/s, Latency: 2.04ms
  Together - Throughput: 97.93 tok/s, Latency: 0.2ms
  DeepInfra - Throughput: 83.59 tok/s, Latency: 0.38ms
  Fireworks - Throughput: 63.03 tok/s, Latency: 0.62ms
  Groq - Throughput: 307.3 tok/s, Latency: 0.27ms
  Novita - Throughput: 69.42 tok/s, Latency: 0.62ms
  Lambda - Throughput: 93.69 tok/s, Latency: 0.65ms
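Throughput and latency trade off differently with response length: latency dominates short replies, throughput dominates long ones. A rough sketch estimating end-to-end generation time from a few of the provider figures above; it treats the listed latency as time to first token in the unit the page reports (ms), which is an assumption about how the source measures it.

```python
# Rough end-to-end estimate per provider: latency + output_tokens / throughput.
# Figures are taken from the provider table above; latency is used in the reported unit (ms).
providers = {
    # (throughput tok/s, latency ms) for Llama 4 Maverick
    "Sambanova": (638.7, 2.04),
    "Groq":      (307.3, 0.27),
    "Together":  (97.93, 0.2),
}

def estimated_seconds(throughput_tok_s: float, latency_ms: float, output_tokens: int) -> float:
    return latency_ms / 1000 + output_tokens / throughput_tok_s

for name, (tps, lat) in providers.items():
    print(f"{name}: ~{estimated_seconds(tps, lat, 1_000):.2f}s for 1,000 output tokens")
```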
Summary

Claude 3.5 Haiku (Anthropic)
  Avg Score: 65.4%
  Providers: 3

Llama 4 Maverick (Meta)
  Avg Score: 76.0% (+10.6%)
  Providers: 7