Model Comparison

Comprehensive side-by-side analysis of model capabilities and performance

DeepSeek R1 Distill Qwen 1.5B (DeepSeek)

DeepSeek R1 Distill Qwen 1.5B is a language model developed by DeepSeek. It posts competitive results across 4 benchmarks, with its strongest scores on MATH-500 (83.9%), AIME 2024 (52.7%), and GPQA (33.8%). The model is licensed for commercial use, making it suitable for enterprise applications. It was released in 2025.

Kimi-k1.5 (Moonshot AI)

Kimi-k1.5 is a multimodal language model developed by Moonshot AI. It averages 81.7% across 9 benchmarks, with its strongest scores on MATH-500 (96.2%), CLUEWSC (91.4%), and C-Eval (88.3%), and it averages 85.5% on math-focused tasks. As a multimodal model, it can process text, images, and other input formats. It was released in 2025.

Release dates

DeepSeek R1 Distill Qwen 1.5B (DeepSeek): 2025-01-20
Kimi-k1.5 (Moonshot AI): 2025-01-20

Both models were released on the same day.

Average performance across 2 common benchmarks

DeepSeek R1 Distill Qwen 1.5B (DeepSeek): 68.3%
Kimi-k1.5 (Moonshot AI): 86.9% (+18.5%)
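The +18.5% figure is presumably the gap between the two models' mean scores over the benchmarks they both report. A minimal sketch of that calculation, assuming simple per-benchmark score dictionaries (the identities of the two shared benchmarks and Kimi-k1.5's scores on them are not listed in this section, so the input values below are placeholders):

    # Sketch of the shared-benchmark average. All input values here are
    # placeholders for illustration; they are not the actual scores behind
    # the 68.3% and 86.9% figures above.

    def common_benchmark_gap(a, b):
        """Mean score of each model over the benchmarks both report, plus the gap."""
        shared = sorted(a.keys() & b.keys())
        if not shared:
            raise ValueError("no shared benchmarks to compare")
        avg_a = sum(a[name] for name in shared) / len(shared)
        avg_b = sum(b[name] for name in shared) / len(shared)
        return avg_a, avg_b, avg_b - avg_a

    deepseek = {"MATH-500": 83.9, "AIME 2024": 52.7}  # placeholder scores
    kimi     = {"MATH-500": 96.2, "AIME 2024": 77.5}  # placeholder scores

    avg_a, avg_b, gap = common_benchmark_gap(deepseek, kimi)
    print(f"{avg_a:.1f}% vs {avg_b:.1f}% ({gap:+.1f}%)")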

Performance comparison across key benchmark categories

Category | DeepSeek R1 Distill Qwen 1.5B | Kimi-k1.5
math     | 83.9%                         | 85.5% (+1.7%)
general  | 43.3%                         | 85.4% (+42.2%)
code     | 16.9%                         | 79.3% (+62.4%)
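The category rows above are presumably produced by grouping each model's individual benchmark scores into categories (math, general, code) and averaging within each group, with the percentage next to Kimi-k1.5 being its lead in that category. A rough sketch of that grouping, assuming a hypothetical benchmark-to-category map and placeholder scores:

    from collections import defaultdict

    # Sketch of per-category averages and the gap between two models.
    # Category assignments and scores are placeholders for illustration only.
    CATEGORY = {
        "MATH-500": "math", "AIME 2024": "math",
        "GPQA": "general", "MMLU": "general",
        "LiveCodeBench": "code",
    }

    def category_averages(scores):
        buckets = defaultdict(list)
        for bench, value in scores.items():
            buckets[CATEGORY.get(bench, "other")].append(value)
        return {cat: sum(vals) / len(vals) for cat, vals in buckets.items()}

    model_a = {"MATH-500": 83.9, "AIME 2024": 52.7, "GPQA": 33.8, "LiveCodeBench": 16.9}
    model_b = {"MATH-500": 96.2, "AIME 2024": 77.5, "GPQA": 70.0, "LiveCodeBench": 79.3}

    a, b = category_averages(model_a), category_averages(model_b)
    for cat in sorted(a.keys() & b.keys()):
        print(f"{cat}: {a[cat]:.1f}% vs {b[cat]:.1f}% ({b[cat] - a[cat]:+.1f}%)")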

Provider Availability & Performance

Available providers and their performance metrics

DeepSeek R1 Distill Qwen 1.5B (DeepSeek): 0 providers, average score 68.3%
Kimi-k1.5 (Moonshot AI): 0 providers, average score 86.9% (+18.5%)