Claude 3.5 Haiku

by Anthropic

About

Claude 3.5 Haiku is a language model developed by Anthropic, announced and released on October 22, 2024. It achieves an average score of 60.8% across 9 benchmarks and performs best on HumanEval (88.1%), MGSM (85.6%), and DROP (83.1%). It supports a 200K-token context window for handling large documents and is available through 3 API providers.
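
For reference, here is a minimal sketch of querying the model through Anthropic's Messages API with the official Python SDK. It assumes ANTHROPIC_API_KEY is set in the environment and uses claude-3-5-haiku-20241022, the dated model id for this release; the prompt is illustrative only.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-5-haiku-20241022",
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize this report in one sentence."}],
)
print(message.content[0].text)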

Pricing Range
Input (per 1M tokens): $0.80 - $1.00
Output (per 1M tokens): $4.00 - $5.00
Providers: 3
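
Given the per-1M-token prices above, per-request cost is simple arithmetic. A minimal sketch, using the low end of the listed provider range (the constants and example token counts are illustrative):

INPUT_PER_MTOK = 0.80   # USD per 1M input tokens (low end of the range above)
OUTPUT_PER_MTOK = 4.00  # USD per 1M output tokens (low end of the range above)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    # Estimated USD cost of one request at these rates.
    return (input_tokens * INPUT_PER_MTOK + output_tokens * OUTPUT_PER_MTOK) / 1_000_000

print(f"${request_cost(10_000, 1_000):.4f}")  # 10K-token prompt, 1K-token reply: $0.0120
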
Timeline
Announced: Oct 22, 2024
Released: Oct 22, 2024
Specifications
License & Family
License: Proprietary
Benchmark Performance Overview
Performance metrics and category breakdown

Overall Performance (9 benchmarks)

Average Score: 60.8%
Best Score: 88.1%
High Performers (80%+): 3

Performance Metrics

Max Context Window: 200K tokens
Avg Throughput: 82.0 tok/s
Avg Latency: not reported
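
Since average latency is not reported here, the 82.0 tok/s average throughput is the more useful figure for estimating wall-clock time. A rough sketch (time to first token is ignored because it is not reported):

AVG_THROUGHPUT_TOK_S = 82.0  # cross-provider average from this page

def generation_seconds(output_tokens: int) -> float:
    # Approximate seconds to stream a completion, excluding time to first token.
    return output_tokens / AVG_THROUGHPUT_TOK_S

print(f"{generation_seconds(1_000):.1f}s")  # a 1K-token reply takes roughly 12.2s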

Top Categories

code: 88.1%
math: 77.5%
general: 57.6%
agents: 36.9%
Benchmark Performance (chart omitted): top benchmark scores with normalized values (0-100%); see the full results table below.
Ranking Across Benchmarks
Position relative to other models on each benchmark

HumanEval

Rank #22 of 62
#19 o1: 88.1%
#20 Qwen2.5 32B Instruct: 88.4%
#21 Qwen2.5-Coder 7B Instruct: 88.4%
#22 Claude 3.5 Haiku: 88.1%
#23 GPT-4.5: 88.0%
#24 Gemma 3 27B: 87.8%
#25 GPT-4o mini: 87.2%

MGSM

Rank #15 of 31
#12 Llama 3.2 90B Instruct: 86.9%
#13 GPT-4o mini: 87.0%
#14 Gemini 1.5 Pro: 87.5%
#15 Claude 3.5 Haiku: 85.6%
#16 Qwen3 235B A22B: 83.5%
#17 Claude 3 Sonnet: 83.5%
#18 Gemini 1.5 Flash: 82.6%

DROP

Rank #9 of 28
#6 GPT-4o: 83.4%
#7 Llama 3.1 405B Instruct: 84.8%
#8 Nova Pro: 85.4%
#9 Claude 3.5 Haiku: 83.1%
#10 Claude 3 Opus: 83.1%
#11 GPT-4: 80.9%
#12 Nova Lite: 80.2%

MATH

Rank #35 of 63
#32 Mistral Small 3.2 24B Instruct: 69.4%
#33 Kimi K2 Base: 70.2%
#34 GPT-4o mini: 70.2%
#35 Claude 3.5 Haiku: 69.4%
#36 Nova Micro: 69.3%
#37 Mistral Small 3.1 24B Instruct: 69.3%
#38 Llama 3.2 90B Instruct: 68.0%

MMLU-Pro

Rank #35 of 60
#32 Mistral Small 3 24B Instruct: 66.3%
#33 Llama 3.1 70B Instruct: 66.4%
#34 Mistral Small 3.1 24B Instruct: 66.8%
#35 Claude 3.5 Haiku: 65.0%
#36 Qwen2 72B Instruct: 64.4%
#37 Qwen2.5 14B Instruct: 63.7%
#38 Gemma 3 12B: 60.6%
All Benchmark Results for Claude 3.5 Haiku
Complete list of benchmark scores with detailed information
Benchmark           Category  Modality  Raw    Normalized  Source
HumanEval           code      text      0.88   88.1%       Self-reported
MGSM                math      text      0.86   85.6%       Self-reported
DROP                general   text      0.83   83.1%       Self-reported
MATH                math      text      0.69   69.4%       Self-reported
MMLU-Pro            general   text      0.65   65.0%       Self-reported
TAU-bench Retail    agents    text      0.51   51.0%       Self-reported
GPQA                general   text      0.42   41.6%       Self-reported
SWE-Bench Verified  general   text      0.41   40.6%       Self-reported
TAU-bench Airline   agents    text      0.23   22.8%       Self-reported
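
As a sanity check, the headline average (60.8%) and the Top Categories breakdown can be reproduced directly from this table. A minimal sketch; the scores are copied from the table above and everything else is illustrative:

from collections import defaultdict

scores = {  # benchmark: (category, normalized score in %)
    "HumanEval": ("code", 88.1),
    "MGSM": ("math", 85.6),
    "DROP": ("general", 83.1),
    "MATH": ("math", 69.4),
    "MMLU-Pro": ("general", 65.0),
    "TAU-bench Retail": ("agents", 51.0),
    "GPQA": ("general", 41.6),
    "SWE-Bench Verified": ("general", 40.6),
    "TAU-bench Airline": ("agents", 22.8),
}

overall = sum(s for _, s in scores.values()) / len(scores)
print(f"average: {overall:.1f}%")  # 60.8%

by_category = defaultdict(list)
for category, score in scores.values():
    by_category[category].append(score)
for category, vals in sorted(by_category.items()):
    print(f"{category}: {sum(vals) / len(vals):.1f}%")
# agents: 36.9%, code: 88.1%, general: 57.6%, math: 77.5%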