
Llama 3.1 Nemotron Nano 8B V1
by NVIDIA
About
Llama 3.1 Nemotron Nano 8B V1 is a language model developed by NVIDIA. It achieves an average score of 72.2% across 7 benchmarks and performs particularly well on MATH-500 (95.4%), MBPP (84.6%), and MT-Bench (81.0%). It was released on March 18, 2025 and is distributed under the Llama 3.1 Community License.
Timeline
Announced: Mar 18, 2025
Released: Mar 18, 2025
Knowledge Cutoff: Dec 31, 2023
Specifications
License & Family
License: Llama 3.1 Community License
Benchmark Performance Overview
Performance metrics and category breakdown
Overall Performance (7 benchmarks)
Average Score: 72.2%
Best Score: 95.4%
High Performers (80%+): 3

Top Categories
math: 95.4%
code: 82.0%
roleplay: 81.0%
general: 54.9%
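These category figures appear to be unweighted averages of the individual benchmark scores listed in the results table at the bottom of this page (for example, "code" combines MBPP and IFEval, and "general" combines BFCL v2, GPQA, and AIME 2025). A minimal sketch of that calculation, under the assumption of a simple mean per category:

```python
# Minimal sketch: reproducing the category averages from the per-benchmark
# scores in the results table at the bottom of this page.
from statistics import mean

# Normalized scores and category assignments as reported on this page.
scores = {
    "MATH-500": (95.4, "math"),
    "MBPP": (84.6, "code"),
    "MT-Bench": (81.0, "roleplay"),
    "IFEval": (79.3, "code"),
    "BFCL v2": (63.6, "general"),
    "GPQA": (54.1, "general"),
    "AIME 2025": (47.1, "general"),
}

by_category: dict[str, list[float]] = {}
for score, category in scores.values():
    by_category.setdefault(category, []).append(score)

for category, values in sorted(by_category.items()):
    print(f"{category}: {mean(values):.2f}%")
# code: 81.95% (shown above as 82.0%), general: 54.93%, math: 95.40%, roleplay: 81.00%
```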
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark
MATH-500
Rank #8 of 22
#5 DeepSeek R1 Zero: 95.9%
#6 Kimi-k1.5: 96.2%
#7 Claude 3.7 Sonnet: 96.2%
#8 Llama 3.1 Nemotron Nano 8B V1: 95.4%
#9 Phi 4 Mini Reasoning: 94.6%
#10 DeepSeek R1 Distill Llama 70B: 94.5%
#11 DeepSeek R1 Distill Qwen 32B: 94.3%
MBPP
Rank #4 of 31
#1 Qwen2.5 72B Instruct: 88.2%
#2 Qwen2.5-Coder 32B Instruct: 90.2%
#3 Llama-3.3 Nemotron Super 49B v1: 91.3%
#4 Llama 3.1 Nemotron Nano 8B V1: 84.6%
#5 Qwen2.5 32B Instruct: 84.0%
#6 Qwen2.5 VL 32B Instruct: 84.0%
#7 Qwen2.5-Coder 7B Instruct: 83.5%
MT-Bench
Rank #9 of 11
#6 Ministral 8B Instruct: 83.0%
#7 Mistral Small 3 24B Instruct: 83.5%
#8 Qwen2 7B Instruct: 84.1%
#9 Llama 3.1 Nemotron Nano 8B V1: 81.0%
#10 Pixtral-12B: 76.8%
#11 Llama 3.1 Nemotron 70B Instruct: 9.0%
IFEval
Rank #29 of 37
#26 Gemma 3 1B: 80.2%
#27 Llama 3.1 8B Instruct: 80.4%
#28 GPT-4o: 81.0%
#29 Llama 3.1 Nemotron Nano 8B V1: 79.3%
#30 Llama 3.2 3B Instruct: 77.4%
#31 Granite 3.3 8B Instruct: 74.8%
#32 Granite 3.3 8B Base: 74.8%
BFCL v2
Rank #5 of 5
#2 Llama 3.2 3B Instruct: 67.0%
#3 Llama-3.3 Nemotron Super 49B v1: 73.7%
#4 Llama 3.1 Nemotron Ultra 253B v1: 74.1%
#5 Llama 3.1 Nemotron Nano 8B V1: 63.6%
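The rank figures above are positions on each benchmark's full leaderboard ("Rank #8 of 22" on MATH-500 means 8th of 22 evaluated models). As a generic illustration only, the sketch below derives such a position by sorting models on a benchmark by score, highest first; the scores in it are hypothetical placeholders, not the leaderboard values listed above.

```python
# Minimal sketch: deriving a "Rank #N of M" position for one model on one
# benchmark by sorting all evaluated models by score in descending order.
def rank_of(model: str, scores: dict[str, float]) -> tuple[int, int]:
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(model) + 1, len(ordered)

# Hypothetical placeholder scores, for illustration only.
example_scores = {
    "Model A": 91.2,
    "Llama 3.1 Nemotron Nano 8B V1": 84.6,
    "Model B": 79.0,
}
rank, total = rank_of("Llama 3.1 Nemotron Nano 8B V1", example_scores)
print(f"Rank #{rank} of {total}")  # -> Rank #2 of 3
```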
All Benchmark Results for Llama 3.1 Nemotron Nano 8B V1
Complete list of benchmark scores with detailed information
Benchmark | Category | Modality | Raw Score | Normalized Score | Source
MATH-500 | math | text | 0.95 | 95.4% | Self-reported
MBPP | code | text | 84.60 | 84.6% | Self-reported
MT-Bench | roleplay | text | 81.00 | 81.0% | Self-reported
IFEval | code | text | 0.79 | 79.3% | Self-reported
BFCL v2 | general | text | 0.64 | 63.6% | Self-reported
GPQA | general | text | 0.54 | 54.1% | Self-reported
AIME 2025 | general | text | 0.47 | 47.1% | Self-reported
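Raw scores in this table are reported on two different scales: fractions of 1 for most benchmarks and 0-100 values for MBPP and MT-Bench, while the normalized column puts everything on 0-100%. The sketch below shows that normalization convention as it appears to work here, and recomputes the 72.2% overall average quoted in the overview, assuming the average is the unweighted mean of the seven normalized scores.

```python
# Minimal sketch: putting the mixed-scale raw scores on a common 0-100% scale
# and recomputing the overall average from the normalized column above.
raw = {  # raw scores exactly as listed in the table (already rounded there)
    "MATH-500": 0.95, "MBPP": 84.60, "MT-Bench": 81.00, "IFEval": 0.79,
    "BFCL v2": 0.64, "GPQA": 0.54, "AIME 2025": 0.47,
}

def normalize(value: float) -> float:
    # Assumption: scores reported as fractions of 1 are scaled by 100,
    # while scores already on a 0-100 scale pass through unchanged.
    return value * 100 if value <= 1.0 else value

approx_normalized = {name: normalize(v) for name, v in raw.items()}
# Because the fractional raw values are rounded to two decimals in the table,
# this only approximates the normalized column (e.g. 0.95 -> 95.0 vs 95.4%).

reported = [95.4, 84.6, 81.0, 79.3, 63.6, 54.1, 47.1]  # normalized column
print(f"Average score: {sum(reported) / len(reported):.1f}%")  # -> 72.2%
```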