Jamba 1.5 Mini

by AI21 Labs

About

Jamba 1.5 Mini is a language model developed by AI21 Labs. Across the 8 benchmarks tracked here it posts competitive results, with its strongest scores on ARC-C (85.7%), GSM8k (75.8%), and MMLU (69.7%). It supports a 512K-token context window for handling large documents and is available through 2 API providers. Announced and released on August 22, 2024, it is one of AI21 Labs' most recent models.
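
The model is served through standard chat-completion style APIs. As an illustration only, the sketch below assumes AI21's hosted REST chat-completions endpoint, the "jamba-1.5-mini" model identifier, and an OpenAI-style response shape; confirm all three against the provider's current documentation before relying on them.

```python
import os

import requests

# Hedged sketch: endpoint URL, model name, and response shape are assumptions
# based on AI21's published chat API; verify against current provider docs.
API_URL = "https://api.ai21.com/studio/v1/chat/completions"
API_KEY = os.environ["AI21_API_KEY"]

payload = {
    "model": "jamba-1.5-mini",
    "messages": [
        {"role": "user", "content": "Summarize the key obligations in this contract: ..."}
    ],
    "max_tokens": 200,
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```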

Pricing Range
Input (per 1M tokens): $0.20
Output (per 1M tokens): $0.40
Providers: 2
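
At the listed prices ($0.20 per 1M input tokens, $0.40 per 1M output tokens), per-request cost is a simple linear function of token counts. A minimal sketch of that arithmetic, assuming both providers bill at the listed rates:

```python
# Cost estimate from the listed per-1M-token prices for Jamba 1.5 Mini.
INPUT_PRICE_PER_M = 0.20   # USD per 1M input tokens (listed rate)
OUTPUT_PRICE_PER_M = 0.40  # USD per 1M output tokens (listed rate)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 400K-token document plus a 2K-token summary.
print(f"${request_cost(400_000, 2_000):.4f}")  # $0.0808
```
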
Timeline
Announced: Aug 22, 2024
Released: Aug 22, 2024
Knowledge Cutoff: Mar 5, 2024
Specifications
License & Family
License: Jamba Open Model License
Benchmark Performance Overview
Performance metrics and category breakdown

Overall Performance

Benchmarks: 8
Average Score: 56.1%
Best Score: 85.7%
High Performers (80%+): 1

Performance Metrics

Max Context Window: 512.3K tokens
Avg Throughput: 100.0 tok/s
Avg Latency: 0 ms
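
Given the averages above, the wall-clock time of a completion can be roughly approximated as first-token latency plus output length divided by decode throughput; the 0 ms latency figure likely just reflects a missing measurement. A small sketch of that estimate:

```python
# Rough wall-clock estimate from the averages reported above.
AVG_THROUGHPUT_TOK_S = 100.0  # average decode throughput, tokens per second
AVG_LATENCY_S = 0.0           # listed as 0 ms; likely an unmeasured value

def estimated_seconds(output_tokens: int) -> float:
    """Approximate generation time: latency + tokens / throughput."""
    return AVG_LATENCY_S + output_tokens / AVG_THROUGHPUT_TOK_S

print(estimated_seconds(500))  # ~5.0 s for a 500-token completion
```
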

Top Categories

reasoning: 85.7%
math: 75.8%
factuality: 54.1%
general: 46.6%
Benchmark Performance
Top benchmark scores with normalized values (0-100%)
Ranking Across Benchmarks
Position relative to other models on each benchmark

ARC-C

Rank #12 of 31
#9 Phi-3.5-MoE-instruct: 91.0%
#10 Nova Micro: 90.2%
#11 Claude 3 Haiku: 89.2%
#12 Jamba 1.5 Mini: 85.7%
#13 Phi-3.5-mini-instruct: 84.6%
#14 Phi 4 Mini: 83.7%
#15 Llama 3.1 8B Instruct: 83.4%

GSM8k

Rank #40 of 46
#37 Granite 3.3 8B Instruct: 80.9%
#38 Mistral Small 3 24B Base: 80.7%
#39 Llama 3.2 3B Instruct: 77.7%
#40 Jamba 1.5 Mini: 75.8%
#41 Gemma 2 27B: 74.0%
#42 Command R+: 70.7%
#43 IBM Granite 4.0 Tiny Preview: 70.1%

MMLU

Rank #62 of 78
#59 Gemma 2 9B: 71.3%
#60 Qwen2 7B Instruct: 70.5%
#61 GPT-3.5 Turbo: 69.8%
#62 Jamba 1.5 Mini: 69.7%
#63 Llama 3.1 8B Instruct: 69.4%
#64 Pixtral-12B: 69.2%
#65 Phi-3.5-mini-instruct: 69.0%

TruthfulQA

Rank #13 of 16
#10 Command R+: 56.3%
#11 Qwen2 72B Instruct: 54.8%
#12 Qwen2.5-Coder 32B Instruct: 54.2%
#13 Jamba 1.5 Mini: 54.1%
#14 Granite 3.3 8B Base: 52.1%
#15 Qwen2.5-Coder 7B Instruct: 50.6%
#16 Mistral NeMo Instruct: 50.3%

Arena Hard

Rank #17 of 22
#14 Granite 3.3 8B Instruct: 57.6%
#15 Granite 3.3 8B Base: 57.6%
#16 Qwen2.5 7B Instruct: 52.0%
#17 Jamba 1.5 Mini: 46.1%
#18 Mistral Small 3.2 24B Instruct: 43.1%
#19 Phi-3.5-MoE-instruct: 37.9%
#20 Phi-3.5-mini-instruct: 37.0%
All Benchmark Results for Jamba 1.5 Mini
Complete list of benchmark scores with detailed information
ARC-C (reasoning, text): 85.7%, self-reported
GSM8k (math, text): 75.8%, self-reported
MMLU (general, text): 69.7%, self-reported
TruthfulQA (factuality, text): 54.1%, self-reported
Arena Hard (general, text): 46.1%, self-reported
MMLU-Pro (general, text): 42.5%, self-reported
Wild Bench (general, text): 42.4%, self-reported
GPQA (general, text): 32.3%, self-reported
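
The overview statistics earlier on the page (56.1% average, 85.7% best, one benchmark at 80%+, and the per-category averages) follow directly from these eight self-reported scores. A short sketch that reproduces them:

```python
# Recompute the overview statistics from the eight benchmark scores above.
scores = {
    "ARC-C": (85.7, "reasoning"),
    "GSM8k": (75.8, "math"),
    "MMLU": (69.7, "general"),
    "TruthfulQA": (54.1, "factuality"),
    "Arena Hard": (46.1, "general"),
    "MMLU-Pro": (42.5, "general"),
    "Wild Bench": (42.4, "general"),
    "GPQA": (32.3, "general"),
}

values = [score for score, _ in scores.values()]
print(f"Average score: {sum(values) / len(values):.1f}%")          # 56.1%
print(f"Best score: {max(values):.1f}%")                           # 85.7%
print(f"High performers (80%+): {sum(v >= 80 for v in values)}")   # 1

# Per-category averages, matching the "Top Categories" breakdown.
by_category = {}
for score, category in scores.values():
    by_category.setdefault(category, []).append(score)
for category, vals in by_category.items():
    print(f"{category}: {sum(vals) / len(vals):.1f}%")  # e.g. general: 46.6%
```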