Model Comparison

Comprehensive side-by-side analysis of model capabilities and performance

Jamba 1.5 Large

AI21 Labs

Jamba 1.5 Large is a language model developed by AI21 Labs. It achieves strong performance with an average score of 65.5% across 8 benchmarks, excelling particularly in ARC-C (93.0%), GSM8k (87.0%), and MMLU (81.2%). It supports a 512K-token context window for handling large documents and is available through 2 API providers. Released in 2024, it represents AI21 Labs' latest advancement in AI technology.

MedGemma 4B IT

Google

MedGemma 4B IT is a multimodal language model developed by Google. The model shows competitive results across 7 benchmarks, excelling particularly in MIMIC CXR (88.9%), DermMCQA (71.8%), and PathMCQA (69.8%). As a multimodal model, it can process and understand text, images, and other input formats. Released in 2025, it represents Google's latest advancement in AI technology.

Release Dates

Jamba 1.5 Large (AI21 Labs): 2024-08-22
MedGemma 4B IT (Google): 2025-05-20

MedGemma 4B IT is 9 months newer.

Performance Metrics

Context window and performance specifications

Jamba 1.5 Large (AI21 Labs)
Max Context: 512K (larger context)
Parameters: 398B

MedGemma 4B IT (Google)
Max Context: -
Parameters: 4.3B

Performance comparison across key benchmark categories

Jamba 1.5 Large
general: 57.1%

MedGemma 4B IT
general: 59.5% (+2.4 points)
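The "+2.4" badge is simply the gap, in percentage points, between the two models' general-category scores. A minimal sketch of that arithmetic, using the scores listed above:

```python
# Scores from the comparison above (percent).
jamba_general = 57.1     # Jamba 1.5 Large, general category
medgemma_general = 59.5  # MedGemma 4B IT, general category

# Difference in percentage points (not relative percent change).
delta = round(medgemma_general - jamba_general, 1)
print(f"MedGemma 4B IT leads by {delta} points")  # 2.4
```

Note this is an absolute point difference; the relative improvement would be 2.4 / 57.1, about 4.2%.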
Knowledge Cutoff
Training data recency comparison

Jamba 1.5 Large: 2024-03-05

A more recent knowledge cutoff means awareness of newer technologies and frameworks.

Provider Availability & Performance

Available providers and their performance metrics

Jamba 1.5 Large (2 providers)

Google
Throughput: 42 tok/s
Latency: 0.3 ms

Bedrock
Throughput: 100 tok/s
Latency: 0.5 ms

MedGemma 4B IT (0 providers)
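The throughput and latency figures above can be combined into a rough end-to-end estimate: total time is roughly first-token latency plus tokens divided by throughput. A hypothetical sketch (the listed latency figures are treated as seconds here for illustration; the page's millisecond units would make the latency term negligible):

```python
def generation_time(latency_s: float, throughput_tps: float, n_tokens: int) -> float:
    """Rough wall-clock estimate: first-token latency plus decode time."""
    return latency_s + n_tokens / throughput_tps

# Comparing the two listed Jamba 1.5 Large providers for a 500-token reply.
print(f"{generation_time(0.3, 42, 500):.1f} s")   # ~12.2 s at 42 tok/s
print(f"{generation_time(0.5, 100, 500):.1f} s")  # ~5.5 s at 100 tok/s
```

For long outputs, throughput dominates: Bedrock's higher tok/s more than offsets its higher first-token latency in this sketch.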
