Model Comparison

Comprehensive side-by-side analysis of model capabilities and performance

Llama 3.1 70B Instruct (Meta)

Llama 3.1 70B Instruct is a text-only language model developed by Meta. It achieves strong performance, averaging 74.7% across 18 benchmarks, and scores highest on GSM-8K (CoT) (95.1%), ARC-C (94.8%), and API-Bank (90.0%). It supports a 256K-token context window for handling large documents and is available through 9 API providers. It was released in July 2024.

Llama 3.2 90B Instruct (Meta)

Llama 3.2 90B Instruct is a multimodal language model developed by Meta. It achieves strong performance, averaging 71.3% across 13 benchmarks, and scores highest on AI2D (92.3%), DocVQA (90.1%), and MGSM (86.9%). It supports a 256K-token context window for handling large documents and is available through 5 API providers. As a multimodal model, it accepts both text and image inputs. Its license permits commercial use, making it suitable for enterprise applications. It was released in September 2024.
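Most of the providers listed later in this comparison expose OpenAI-compatible chat-completion endpoints for both models. A minimal sketch of a query, assuming a hypothetical provider base URL and model identifier (both vary by provider; check your provider's documentation):

```python
# Minimal sketch: querying Llama 3.1 70B Instruct through an
# OpenAI-compatible provider. The base_url and model ID below are
# placeholders, not real endpoints; actual values vary by provider.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-70B-Instruct",  # provider-specific ID (assumed)
    messages=[{"role": "user", "content": "Give me three facts about llamas."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```

The same call pattern applies to Llama 3.2 90B Instruct; providers that support its vision capability typically accept an image URL or base64 payload in the message content.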

Release Dates

Llama 3.1 70B Instruct (Meta): 2024-07-23
Llama 3.2 90B Instruct (Meta): 2024-09-25 (2 months newer)

Pricing Comparison

Cost per million tokens (USD)

Llama 3.1 70B Instruct (Meta): Input $0.20, Output $0.20 ($0.35 cheaper, input + output combined)
Llama 3.2 90B Instruct (Meta): Input $0.35, Output $0.40
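Per-request cost follows directly from these rates. A small worked example, using the prices above and an illustrative workload:

```python
# Estimated cost of one request at the posted per-million-token rates.
PRICES = {  # USD per 1M tokens, from the pricing table above
    "llama-3.1-70b-instruct": {"input": 0.20, "output": 0.20},
    "llama-3.2-90b-instruct": {"input": 0.35, "output": 0.40},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Illustrative workload: 50K prompt tokens, 2K completion tokens.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 50_000, 2_000):.4f}")
# llama-3.1-70b-instruct: $0.0104
# llama-3.2-90b-instruct: $0.0183
```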

Performance Metrics

Context window and performance specifications

Llama 3.1 70B Instruct (Meta): Max Context 256K tokens, Parameters 70B
Llama 3.2 90B Instruct (Meta): Max Context 256K tokens, Parameters 90B
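Both models advertise the same context limit, so the practical question is whether a given prompt fits while leaving room for the response. A rough sketch using the common ~4-characters-per-token heuristic (use the model's own tokenizer for exact counts):

```python
# Rough check: does a prompt fit in the 256K-token context window?
# The chars/4 estimate is a heuristic only, not an exact token count.
MAX_CONTEXT_TOKENS = 256_000

def fits_in_context(prompt: str, reserved_output_tokens: int = 4_096) -> bool:
    estimated_prompt_tokens = len(prompt) / 4  # heuristic estimate
    return estimated_prompt_tokens + reserved_output_tokens <= MAX_CONTEXT_TOKENS

print(fits_in_context("word " * 100_000))  # ~125K estimated tokens -> True
```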

Average performance across 2 common benchmarks

Llama 3.1 70B Instruct (Meta): Average Score 62.6%
Llama 3.2 90B Instruct (Meta): Average Score 66.3% (+3.7 points)

Performance comparison across key benchmark categories

Llama 3.1 70B Instruct (Meta): math 83.3% (+12.6 points), general 68.7%
Llama 3.2 90B Instruct (Meta): math 70.7%, general 73.5% (+4.7 points)
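The per-category gaps are simple differences between the two models' average category scores. A quick worked check (numbers from the comparison above; recomputing from the rounded scores can differ from the page's figures by 0.1 point):

```python
# Worked check of the per-category gaps, in percentage points.
scores = {
    "llama-3.1-70b-instruct": {"math": 83.3, "general": 68.7},
    "llama-3.2-90b-instruct": {"math": 70.7, "general": 73.5},
}

for category in ("math", "general"):
    a = scores["llama-3.1-70b-instruct"][category]
    b = scores["llama-3.2-90b-instruct"][category]
    leader = "llama-3.1-70b-instruct" if a > b else "llama-3.2-90b-instruct"
    print(f"{category}: {leader} leads by {abs(a - b):.1f} points")
# math: llama-3.1-70b-instruct leads by 12.6 points
# general: llama-3.2-90b-instruct leads by 4.8 points (the page shows
# +4.7, presumably computed from unrounded underlying scores)
```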

Provider Availability & Performance

Available providers and their performance metrics

Llama 3.1 70B Instruct (Meta): 9 providers

Provider      Throughput    Latency
Sambanova     74 tok/s      0.5 s
Together      94 tok/s      0.5 s
Hyperbolic    100 tok/s     0.5 s
DeepInfra     25 tok/s      0.5 s
Fireworks     32 tok/s      0.5 s
Groq          250 tok/s     0.5 s
Bedrock       100 tok/s     0.5 s
Lambda        42 tok/s      0.5 s
Cerebras      1204 tok/s    0.2 s

Llama 3.2 90B Instruct (Meta): 5 providers

Provider      Throughput    Latency
Together      57 tok/s      0.5 s
Hyperbolic    42 tok/s      0.5 s
DeepInfra     24 tok/s      0.5 s
Fireworks     50 tok/s      0.5 s
Bedrock       100 tok/s     0.5 s
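For a given response length, end-to-end time is roughly the latency (time to first token) plus decode time at the quoted throughput. A rough estimate using three of the Llama 3.1 70B providers above:

```python
# Rough end-to-end estimate: latency (TTFT) + output_tokens / throughput.
# Figures are from the Llama 3.1 70B Instruct provider table above.
providers = {
    "Groq":     {"throughput_tok_s": 250,  "latency_s": 0.5},
    "Cerebras": {"throughput_tok_s": 1204, "latency_s": 0.2},
    "Together": {"throughput_tok_s": 94,   "latency_s": 0.5},
}

def total_time_s(p: dict, output_tokens: int) -> float:
    return p["latency_s"] + output_tokens / p["throughput_tok_s"]

for name, p in providers.items():
    print(f"{name}: {total_time_s(p, 1_000):.1f} s for 1,000 output tokens")
# Groq: 4.5 s, Cerebras: 1.0 s, Together: 11.1 s
```

This is a first-order estimate: real serving adds queueing, network variance, and batching effects that the two published numbers don't capture.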
Summary

Llama 3.1 70B Instruct (Meta): Avg Score 62.6%, 9 providers
Llama 3.2 90B Instruct (Meta): Avg Score 66.3% (+3.7 points), 5 providers