
Mistral Small 3.1 24B Base
Multimodal
Zero-eval
#3 TriviaQA
by Mistral AI
About
Mistral Small 3.1 24B Base is a multimodal language model developed by Mistral AI. It achieves strong performance, with an average score of 62.9% across 5 benchmarks, and does particularly well on MMLU (81.0%), TriviaQA (80.5%), and MMMU (59.3%). It supports a 256K token context window for handling large documents and is available through 1 API provider. As a multimodal model, it can process both text and image inputs. It is licensed under Apache 2.0, permitting commercial use, which makes it suitable for enterprise applications. Released in 2025, it is one of Mistral AI's most recent base models.
Pricing Range
Input (per 1M): $0.10 - $0.10
Output (per 1M): $0.30 - $0.30
Providers: 1
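Given the per-million-token rates above, the cost of a single request can be estimated by scaling token counts against the input and output prices. A minimal sketch using the listed rates; the token counts in the example are hypothetical:

```python
# Estimate request cost from the per-1M-token rates listed above.
INPUT_PER_M = 0.10   # USD per 1M input tokens (from the pricing range)
OUTPUT_PER_M = 0.30  # USD per 1M output tokens (from the pricing range)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 4,000-token prompt with a 1,000-token completion (hypothetical sizes)
cost = estimate_cost(4_000, 1_000)
print(f"${cost:.6f}")  # $0.000700
```

At these rates, even long prompts cost fractions of a cent, which is why output tokens (priced 3x higher here) tend to dominate the bill for generation-heavy workloads.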
Timeline
Announced: Mar 17, 2025
Released: Mar 17, 2025
Specifications
Capabilities
Multimodal
License & Family
License: Apache 2.0
Benchmark Performance Overview
Performance metrics and category breakdown
Overall Performance
5 benchmarks
Average Score: 62.9%
Best Score: 81.0%
High Performers (80%+): 2
Performance Metrics
Max Context Window: 256.0K
Avg Throughput: 137.1 tok/s
Avg Latency: 0 ms
Top Categories
general: 63.8%
vision: 59.3%
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark
MMLU
Rank #38 of 78
#35 Jamba 1.5 Large: 81.2%
#36 Grok-1.5: 81.3%
#37 GPT-4o mini: 82.0%
#38 Mistral Small 3.1 24B Base: 81.0%
#39 Mistral Small 3 24B Base: 80.7%
#40 Mistral Small 3.1 24B Instruct: 80.6%
#41 Mistral Small 3.2 24B Instruct: 80.5%
TriviaQA
Rank #3 of 13
#1 Gemma 2 27B: 83.7%
#2 Kimi K2 Base: 85.1%
#3 Mistral Small 3.1 24B Base: 80.5%
#4 Mistral Small 3.1 24B Instruct: 80.5%
#5 Mistral Small 3 24B Base: 80.3%
#6 Granite 3.3 8B Base: 78.2%
MMMU
Rank #35 of 52
#32 GPT-4o mini: 59.4%
#33 Llama 3.2 90B Instruct: 60.3%
#34 Nova Pro: 61.7%
#35 Mistral Small 3.1 24B Base: 59.3%
#36 Mistral Small 3.1 24B Instruct: 59.3%
#37 Qwen2.5-Omni-7B: 59.2%
#38 Qwen2.5 VL 7B Instruct: 58.6%
MMLU-Pro
Rank #42 of 60
#39 Qwen2.5 7B Instruct: 56.3%
#40 Claude 3 Sonnet: 56.8%
#41 Gemini 1.5 Flash 8B: 58.7%
#42 Mistral Small 3.1 24B Base: 56.0%
#43 Mistral Small 3 24B Base: 54.4%
#44 Jamba 1.5 Large: 53.5%
#45 Phi 4 Mini: 52.8%
GPQA
Rank #90 of 115
#87 Gemini 1.5 Flash 8B: 38.4%
#88 Nova Micro: 40.0%
#89 GPT-4o mini: 40.2%
#90 Mistral Small 3.1 24B Base: 37.5%
#91 Jamba 1.5 Large: 36.9%
#92 Phi-3.5-MoE-instruct: 36.8%
#93 Qwen2.5 7B Instruct: 36.4%
All Benchmark Results for Mistral Small 3.1 24B Base
Complete list of benchmark scores with detailed information
| Benchmark | Category | Modality | Raw Score | Normalized | Source |
|-----------|----------|----------|-----------|------------|--------|
| MMLU | general | text | 0.81 | 81.0% | Self-reported |
| TriviaQA | general | text | 0.81 | 80.5% | Self-reported |
| MMMU | vision | multimodal | 0.59 | 59.3% | Self-reported |
| MMLU-Pro | general | text | 0.56 | 56.0% | Self-reported |
| GPQA | general | text | 0.38 | 37.5% | Self-reported |
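The headline 62.9% average can be reproduced directly from the five self-reported normalized scores above; a quick sanity check:

```python
# Reproduce the page's average score from the five benchmark results.
scores = {
    "MMLU": 81.0,
    "TriviaQA": 80.5,
    "MMMU": 59.3,
    "MMLU-Pro": 56.0,
    "GPQA": 37.5,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.1f}%")  # 314.3 / 5 = 62.9%, matching the Overall Performance figure
```

Note that this is an unweighted mean: the single vision benchmark (MMMU) counts the same as each of the four text benchmarks.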
Resources