Model Comparison

Comprehensive side-by-side analysis of model capabilities and performance

DeepSeek R1 Zero (DeepSeek)

DeepSeek R1 Zero is a language model developed by DeepSeek. It achieves strong performance with an average score of 76.5% across 4 benchmarks, performing especially well on MATH-500 (95.9%), AIME 2024 (86.7%), and GPQA (73.3%). It is licensed for commercial use, making it suitable for enterprise applications. Released in 2025, it represents DeepSeek's latest advancement in AI technology.
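For reference, the 76.5% figure is an unweighted mean over the model's four reported benchmark scores. The short Python sketch below illustrates that kind of calculation using only the three scores listed above; the fourth benchmark's score is not shown on this page, so the printed value is the mean of the listed three, not the 76.5% overall average.

    # Illustrative sketch: an "average score across N benchmarks" figure is
    # an unweighted mean of the individual benchmark scores.
    # Only the three scores listed above are included here; the fourth
    # benchmark behind the 76.5% overall average is not shown on this page.

    listed_scores = {
        "MATH-500": 95.9,   # %
        "AIME 2024": 86.7,  # %
        "GPQA": 73.3,       # %
    }

    mean_of_listed = sum(listed_scores.values()) / len(listed_scores)
    print(f"Mean of the listed benchmarks: {mean_of_listed:.1f}%")  # 85.3%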

MedGemma 4B IT (Google)

MedGemma 4B IT is a multimodal language model developed by Google. It shows competitive results across 7 benchmarks, performing especially well on MIMIC CXR (88.9%), DermMCQA (71.8%), and PathMCQA (69.8%). As a multimodal model, it can process and understand text, images, and other input formats. Released in 2025, it represents Google's latest advancement in AI technology.

Release Dates

DeepSeek R1 Zero (DeepSeek): released 2025-01-20
MedGemma 4B IT (Google): released 2025-05-20 (4 months newer)

Performance comparison across key benchmark categories

DeepSeek R1 Zero (DeepSeek), general category: 80.0% (+20.5% vs. MedGemma 4B IT)
MedGemma 4B IT (Google), general category: 59.5%
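The "+20.5%" figure is the percentage-point gap between the two models' averages in the general category (80.0% vs. 59.5%). A minimal Python sketch of that calculation follows; the helper name is illustrative, not something defined by this page.

    # Percentage-point gap behind the "+20.5%" figure above.
    # The helper name is illustrative only.

    def percentage_point_gap(score_a: float, score_b: float) -> float:
        """Difference between two scores, both given in percent."""
        return score_a - score_b

    deepseek_r1_zero_general = 80.0  # DeepSeek R1 Zero, general category (%)
    medgemma_4b_it_general = 59.5    # MedGemma 4B IT, general category (%)

    gap = percentage_point_gap(deepseek_r1_zero_general, medgemma_4b_it_general)
    print(f"General category gap: {gap:+.1f} percentage points")  # +20.5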

Provider Availability & Performance

Available providers and their performance metrics

DeepSeek R1 Zero (DeepSeek): 0 providers, average score 0.0%
MedGemma 4B IT (Google): 0 providers, average score 0.0%

Neither model currently has hosting providers listed, so no provider-level performance data is available.