Model Comparison

Comprehensive side-by-side analysis of model capabilities and performance

Command R+ (Cohere)

Command R+ is a language model developed by Cohere. It achieves strong performance, with an average score of 74.6% across 6 benchmarks, and excels particularly in HellaSwag (88.6%), Winogrande (85.4%), and MMLU (75.7%). The model shows particular strength in reasoning tasks, with an average performance of 81.7%. It supports a 256K-token context window for handling large documents and is available through 2 API providers. Released in 2024, it represents Cohere's latest advancement in AI technology.

GPT-4.1 mini (OpenAI)

GPT-4.1 mini is a multimodal language model developed by OpenAI. The model shows competitive results across 29 benchmarks, excelling particularly in CharXiv-D (88.4%), MMLU (87.5%), and IFEval (84.1%). With a 1.1M-token context window, it can handle extensive documents and complex multi-turn conversations. As a multimodal model, it can process text, images, and other input formats. The model is available through 2 API providers. Released in 2025, it represents OpenAI's latest advancement in AI technology.
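
As a concrete illustration of the multimodal input path, here is a minimal sketch using the OpenAI Python SDK (v1.x) Chat Completions API; the prompt text and image URL are placeholder assumptions, and error handling is omitted.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Send one text part and one image part in a single user turn.
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize this chart in two sentences."},
                # Placeholder URL; any publicly reachable image works here.
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```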

Release Dates

Command R+ (Cohere): 2024-08-30
GPT-4.1 mini (OpenAI): 2025-04-14 (7 months newer)

Pricing Comparison

Cost per million tokens (USD)

Command R+ (Cohere): Input $0.25, Output $1.00 ($0.75 cheaper, combined input + output)
GPT-4.1 mini (OpenAI): Input $0.40, Output $1.60
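
To make the rates concrete, here is a small sketch that computes the cost of a single request from the per-million-token prices above (the model keys are illustrative, not official API names). The $0.75 gap is the combined input-plus-output difference: $0.15 + $0.60.

```python
# Per-million-token prices (USD) from the comparison above.
PRICES = {
    "command-r-plus": {"input": 0.25, "output": 1.00},
    "gpt-4.1-mini": {"input": 0.40, "output": 1.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")
# command-r-plus: $0.0045
# gpt-4.1-mini: $0.0072
```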

Performance Metrics

Context window and performance specifications

Command R+ (Cohere): Max Context 256.0K, Parameters 104.0B
GPT-4.1 mini (OpenAI): Max Context 1.1M (larger context)
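
One practical use of these limits is a pre-flight check before dispatching a prompt. A minimal sketch, assuming the page's rounded context figures and a caller-supplied token count (exact provider limits may differ slightly):

```python
# Context limits in tokens, using the page's rounded figures.
CONTEXT_LIMITS = {
    "command-r-plus": 256_000,
    "gpt-4.1-mini": 1_100_000,
}

def fits_context(model: str, prompt_tokens: int, output_budget: int = 4_096) -> bool:
    """True if the prompt plus a reserved output budget fits the window."""
    return prompt_tokens + output_budget <= CONTEXT_LIMITS[model]

print(fits_context("command-r-plus", 300_000))  # False
print(fits_context("gpt-4.1-mini", 300_000))    # True
```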

Average performance across the 1 common benchmark (MMLU)

Command R+ (Cohere): Average Score 75.7%
GPT-4.1 mini (OpenAI): Average Score 87.5% (+11.8 points)

Performance comparison across key benchmark categories

general: Command R+ 75.7% (+29.8 points), GPT-4.1 mini 45.9%
math: Command R+ 70.7%, GPT-4.1 mini 73.1% (+2.4 points)
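
The per-category margins follow directly from the scores above; a short sketch that recomputes each category's leader and its lead in percentage points (model keys are illustrative):

```python
# Category averages (%) from the table above.
SCORES = {
    "general": {"command-r-plus": 75.7, "gpt-4.1-mini": 45.9},
    "math": {"command-r-plus": 70.7, "gpt-4.1-mini": 73.1},
}

for category, by_model in SCORES.items():
    leader = max(by_model, key=by_model.get)
    runner_up = max(v for m, v in by_model.items() if m != leader)
    print(f"{category}: {leader} leads by {by_model[leader] - runner_up:.1f} points")
# general: command-r-plus leads by 29.8 points
# math: gpt-4.1-mini leads by 2.4 points
```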

Knowledge Cutoff

Training data recency comparison

GPT-4.1 mini (OpenAI): 2024-05-31

A more recent knowledge cutoff means awareness of newer technologies and frameworks.

Provider Availability & Performance

Available providers and their performance metrics

Command R+ (Cohere), 2 providers:
Cohere: Throughput 59 tok/s, Latency 0.65 ms
Bedrock: Throughput 100 tok/s, Latency 0.5 ms

GPT-4.1 mini (OpenAI), 2 providers:
ZeroEval: Throughput 150 tok/s, Latency 5 ms
OpenAI: Throughput 150 tok/s, Latency 5 ms
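
When routing across providers, the table above is enough to pick an endpoint. A minimal sketch that prefers throughput and breaks ties on lower latency, with the metrics copied as reported:

```python
# Provider metrics from the table above, copied as listed.
PROVIDERS = {
    "command-r-plus": [
        {"name": "Cohere", "tok_per_s": 59, "latency_ms": 0.65},
        {"name": "Bedrock", "tok_per_s": 100, "latency_ms": 0.5},
    ],
    "gpt-4.1-mini": [
        {"name": "ZeroEval", "tok_per_s": 150, "latency_ms": 5},
        {"name": "OpenAI", "tok_per_s": 150, "latency_ms": 5},
    ],
}

def fastest_provider(model: str) -> dict:
    """Highest throughput first; ties broken by lower latency."""
    return min(PROVIDERS[model], key=lambda p: (-p["tok_per_s"], p["latency_ms"]))

print(fastest_provider("command-r-plus")["name"])  # Bedrock
print(fastest_provider("gpt-4.1-mini")["name"])    # ZeroEval (exact tie with OpenAI)
```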

Summary

Command R+ (Cohere): Avg Score 75.7%, Providers 2
GPT-4.1 mini (OpenAI): Avg Score 87.5% (+11.8 points), Providers 2