GPT-5 nano

Multimodal

by OpenAI

About

GPT-5 nano is a multimodal language model developed by OpenAI. The model posts competitive results across 5 benchmarks, with its strongest scores on AIME 2025 (85.2%), HMMT 2025 (75.6%), and GPQA (71.2%). It supports a 528K-token context window for handling large documents and is available through 2 API providers. As a multimodal model, it can process and understand text, images, and other input formats. Released in August 2025, it is part of OpenAI's latest generation of models.
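
Since the model is exposed through standard chat-completion APIs, a minimal sketch of a request via the OpenAI Python SDK follows; the model identifier "gpt-5-nano" is an assumption based on OpenAI's usual naming and should be checked against your provider's model list.

# Minimal sketch: one text request to GPT-5 nano via the OpenAI Python SDK.
# The model identifier "gpt-5-nano" is assumed, not confirmed by this page.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5-nano",  # assumed identifier
    messages=[
        {"role": "user", "content": "Summarize this document in two sentences."},
    ],
)
print(response.choices[0].message.content)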

Pricing Range
Input (per 1M tokens): $0.05
Output (per 1M tokens): $0.40
Providers: 2
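
At these rates, per-request cost is a simple linear function of token counts; the sketch below is a rough estimator, and the example token counts are made up for illustration.

# Rough cost estimate at GPT-5 nano's listed rates ($0.05 / $0.40 per 1M tokens)
INPUT_PRICE_PER_1M = 0.05   # USD per 1M input tokens
OUTPUT_PRICE_PER_1M = 0.40  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_1M \
        + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_1M

# Example: a 10,000-token prompt with a 1,000-token completion
print(f"${request_cost(10_000, 1_000):.4f}")  # $0.0009
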
Timeline
Announced: Aug 7, 2025
Released: Aug 7, 2025
Knowledge Cutoff: May 30, 2024
Specifications
Capabilities: Multimodal
License & Family
License: Proprietary
Benchmark Performance Overview
Performance metrics and category breakdown

Overall Performance (5 benchmarks)
Average Score: 50.1%
Best Score: 85.2%
High Performers (80%+): 1

Performance Metrics

Max Context Window: 528.0K tokens
Avg Throughput: 500.0 tok/s
Avg Latency: 0 ms
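
The throughput figure supports a back-of-the-envelope estimate of generation time for a response of a given length; the sketch below ignores network overhead and takes the listed 0 ms average latency at face value.

# Back-of-the-envelope generation-time estimate from the listed throughput
THROUGHPUT_TOK_PER_S = 500.0  # average throughput reported above

def generation_time_seconds(output_tokens: int) -> float:
    """Approximate time to stream a completion of the given length."""
    return output_tokens / THROUGHPUT_TOK_PER_S

print(generation_time_seconds(1_000))  # 2.0 (seconds for a 1,000-token response)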

Top Categories

general: 60.2%
math: 9.6%
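
These category figures appear to be unweighted averages of the benchmark scores listed at the bottom of this page (four "general" benchmarks and one "math" benchmark); the sketch below reproduces the arithmetic under that assumption.

# Reproducing the overall and per-category averages from the listed scores
# (assumed to be unweighted means of the benchmark results)
scores = {
    "AIME 2025": (85.2, "general"),
    "HMMT 2025": (75.6, "general"),
    "GPQA": (71.2, "general"),
    "FrontierMath": (9.6, "math"),
    "Humanity's Last Exam": (8.7, "general"),
}

overall = sum(s for s, _ in scores.values()) / len(scores)
general = [s for s, c in scores.values() if c == "general"]
math_only = [s for s, c in scores.values() if c == "math"]

print(round(overall, 1))                          # 50.1 (matches "Average Score")
print(round(sum(general) / len(general), 1))      # 60.2 (matches "general")
print(round(sum(math_only) / len(math_only), 1))  # 9.6 (matches "math")
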
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark

AIME 2025

Rank #11 of 36
#8 o3: 86.4%
#9 DeepSeek-R1-0528: 87.5%
#10 Gemini 2.5 Pro Preview 06-05: 88.0%
#11 GPT-5 nano: 85.2%
#12 Gemini 2.5 Pro: 83.0%
#13 Qwen3 235B A22B: 81.5%
#14 Claude Opus 4.1: 80.2%

HMMT 2025

Rank #4 of 7
#1 DeepSeek-R1-0528: 79.4%
#2 GPT-5 mini: 87.8%
#3 GPT-5: 93.3%
#4 GPT-5 nano: 75.6%
#5 Kimi K2 Instruct: 38.8%
#6 GPT-4.1 mini: 35.0%
#7 GPT-4.1: 28.9%

GPQA

Rank #27 of 115
#24 DeepSeek-R1: 71.5%
#25 GPT OSS 120B: 71.5%
#26 o1-preview: 73.3%
#27 GPT-5 nano: 71.2%
#28 Magistral Medium: 70.8%
#29 GPT-4o: 70.1%
#30 Llama 4 Maverick: 69.8%

FrontierMath

Rank #4 of 6
#1 o3: 15.8%
#2 GPT-5 mini: 22.1%
#3 GPT-5: 26.3%
#4 GPT-5 nano: 9.6%
#5 o3-mini: 9.2%
#6 o1: 5.5%

Humanity's Last Exam

Rank #12 of 16
#9 Magistral Medium: 9.0%
#10 Gemini 2.5 Flash: 11.0%
#11 o4-mini: 14.7%
#12 GPT-5 nano: 8.7%
#13 GPT-4.1: 5.4%
#14 Gemini 2.5 Flash-Lite: 5.1%
#15 Kimi K2 Instruct: 4.7%
All Benchmark Results for GPT-5 nano
Complete list of benchmark scores with detailed information
Benchmark               Category  Modality  Normalized  Score   Source
AIME 2025               general   text      0.85        85.2%   Self-reported
HMMT 2025               general   text      0.76        75.6%   Self-reported
GPQA                    general   text      0.71        71.2%   Self-reported
FrontierMath            math      text      0.10        9.6%    Self-reported
Humanity's Last Exam    general   text      0.09        8.7%    Self-reported