Model Comparison

A side-by-side comparison of Cohere's Command R+ and Meta's Llama 4 Scout across capabilities, pricing, and performance

Command R+ (Cohere)

Command R+ is a language model developed by Cohere. It achieves strong performance, averaging 74.6% across 6 benchmarks, with standout results on HellaSwag (88.6%), Winogrande (85.4%), and MMLU (75.7%). It is particularly strong on reasoning tasks, where it averages 81.7%. A 256K-token context window lets it handle large documents, and the model is available through 2 API providers. Released in 2024, it represents Cohere's latest generation of language models.
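
For readers who want to try the model directly, here is a minimal sketch using Cohere's Python SDK. The "command-r-plus" model ID and the response shape are assumptions based on the SDK's chat API; confirm both against Cohere's current documentation.

```python
import os

import cohere  # pip install cohere

# The "command-r-plus" model ID is an assumption; check Cohere's
# docs for the identifier your account exposes.
co = cohere.Client(api_key=os.environ["COHERE_API_KEY"])

response = co.chat(
    model="command-r-plus",
    message="In two sentences, what is a context window?",
)
print(response.text)
```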

Llama 4 Scout (Meta)

Llama 4 Scout is a multimodal language model developed by Meta. It achieves strong performance, averaging 67.3% across 12 benchmarks, with standout results on DocVQA (94.4%), MGSM (90.6%), and ChartQA (88.8%). It is particularly strong on vision tasks, where it averages 81.9%. A 20M-token context window lets it handle extensive documents and long multi-turn conversations, and the model is available through 6 API providers. As a multimodal model, it processes text and images together in a single prompt. Released in 2025, it represents Meta's latest generation of Llama models.
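
Most of the six providers listed later in this comparison expose OpenAI-compatible endpoints, so one client can target any of them by swapping the base URL. A hedged sketch, assuming Together's endpoint and its published Llama 4 Scout model ID (verify both against the provider's docs):

```python
import os

from openai import OpenAI  # pip install openai

# The base URL and model ID below are assumptions for Together's
# OpenAI-compatible API; other providers follow the same pattern
# with their own URL and model string.
client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
)

response = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[{"role": "user", "content": "Summarize multimodality in one sentence."}],
)
print(response.choices[0].message.content)
```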

Release Dates

Command R+ (Cohere): 2024-08-30
Llama 4 Scout (Meta): 2025-04-05

Llama 4 Scout is roughly 7 months newer.

Pricing Comparison

Cost per million tokens (USD)

Model                  Input   Output
Command R+ (Cohere)    $0.25   $1.00
Llama 4 Scout (Meta)   $0.08   $0.30

Llama 4 Scout is $0.87 cheaper per million tokens with input and output rates combined ($0.17 less on input, $0.70 less on output).
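
To make the rates concrete, the sketch below (plain Python; the token counts are illustrative) estimates the cost of a single request under each model's pricing:

```python
# Per-million-token rates (USD) from the table above.
PRICES = {
    "Command R+":    {"input": 0.25, "output": 1.00},
    "Llama 4 Scout": {"input": 0.08, "output": 0.30},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    rate = PRICES[model]
    return (input_tokens * rate["input"] + output_tokens * rate["output"]) / 1_000_000

# Example: a 2,000-token prompt producing a 500-token answer.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.6f}")
# Command R+:    $0.001000
# Llama 4 Scout: $0.000310
```

At these rates the same request costs roughly 3x more on Command R+, consistent with the $0.87-per-million combined gap.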

Performance Metrics

Context window and model size

Model                  Max Context   Parameters
Command R+ (Cohere)    256K          104B
Llama 4 Scout (Meta)   20M           109B

The two models are similar in size, but Llama 4 Scout's context window is far larger.
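
The context gap matters most when deciding whether a corpus can be sent in one request. A rough sketch, assuming the common ~4-characters-per-token heuristic for English text (an approximation, not a tokenizer):

```python
# Context windows from the table above.
CONTEXT_WINDOWS = {
    "Command R+": 256_000,
    "Llama 4 Scout": 20_000_000,
}

def fits_in_context(text: str, model: str, chars_per_token: float = 4.0) -> bool:
    """Rough check; use the model's real tokenizer for exact counts."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= CONTEXT_WINDOWS[model]

document = "lorem ipsum " * 200_000  # ~2.4M chars, roughly 600K tokens
print(fits_in_context(document, "Command R+"))     # False
print(fits_in_context(document, "Llama 4 Scout"))  # True
```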

Average performance on the single benchmark both models report

Command R+ (Cohere):    75.7%
Llama 4 Scout (Meta):   79.6% (+3.9%)

Performance comparison across key benchmark categories

Category   Command R+      Llama 4 Scout
general    75.7% (+9.4%)   66.3%
math       70.7% (+0.2%)   70.5%

Command R+ leads in both categories, decisively on general benchmarks and narrowly on math.

Provider Availability & Performance

Available providers and their performance metrics

Cohere

Command R+

2 providers

Cohere

Throughput: 59 tok/s
Latency: 0.65ms

Bedrock

Throughput: 100 tok/s
Latency: 0.5ms
Meta

Llama 4 Scout

6 providers

Together

Throughput: 106.9 tok/s
Latency: 0.54ms

DeepInfra

Throughput: 76.1 tok/s
Latency: 0.31ms

Fireworks

Throughput: 116.1 tok/s
Latency: 0.53ms

Groq

Throughput: 776.1 tok/s
Latency: 1.08ms

Novita

Throughput: 69.82 tok/s
Latency: 0.85ms

Lambda

Throughput: 139.7 tok/s
Latency: 0.43ms
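
Throughput and latency pull in different directions: Groq has by far the highest throughput here but also the highest first-token latency. A small sketch (pure arithmetic on the figures above, treating latency as seconds to first token) estimates end-to-end generation time:

```python
# time ~= first-token latency + output_tokens / throughput
PROVIDERS = {
    "Groq":      {"throughput": 776.1, "latency": 1.08},
    "Fireworks": {"throughput": 116.1, "latency": 0.53},
    "DeepInfra": {"throughput": 76.1,  "latency": 0.31},
}

def generation_time(provider: str, output_tokens: int) -> float:
    p = PROVIDERS[provider]
    return p["latency"] + output_tokens / p["throughput"]

for n in (50, 2_000):
    best = min(PROVIDERS, key=lambda name: generation_time(name, n))
    print(f"{n} tokens -> {best}: {generation_time(best, n):.2f} s")
# 50 tokens -> Fireworks: 0.96 s
# 2000 tokens -> Groq: 3.66 s
```

For short replies the lower-latency providers win; for long generations Groq's throughput dominates.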
Summary

Model                  Avg Score   Providers
Command R+ (Cohere)    75.7%       2
Llama 4 Scout (Meta)   79.6%       6

Llama 4 Scout averages 3.9 points higher on the shared benchmark and is available from three times as many providers.