About

Technology and consulting company

Portfolio Stats
Total Models: 3
Multimodal: 2
Benchmarks Run: 46
Avg Performance: 63.7%
Latest Release
IBM Granite 4.0 Tiny Preview
Released: May 2, 2025
Release Timeline
Recent model releases by year
2025
3 models
Performance Overview
Top models and benchmark performance

Top Performing Models

By avg score

Benchmark Categories

code: 13 benchmarks, 71.9% avg
math: 5 benchmarks, 69.6% avg
reasoning: 3 benchmarks, 68.4% avg
factuality: 3 benchmarks, 59.0% avg
general: 22 benchmarks, 58.3% avg

Model Statistics

Multimodal Ratio: 67%
Models with Providers: 0

All Models

Complete portfolio of 3 models.

IBM Granite 4.0 Tiny Preview
A preliminary version of the smallest model in the upcoming Granite 4.0 family, released May 2025. It uses a novel hybrid Mamba-2/Transformer architecture with fine-grained mixture of experts (MoE): 7B total parameters, with only 1B active at inference. This preview is partially trained (2.5T tokens) but already demonstrates significant memory efficiency and performance potential, and has been validated at context lengths of at least 128K without positional encoding.
May 2, 2025
Apache 2.0
Avg score: 82.4%
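The "7B total, 1B active" figure above comes from sparse expert routing: each token is sent to only a few experts, so most parameters sit idle on any given forward pass. The toy sketch below illustrates top-k gating in plain NumPy; it is a minimal illustration of the general MoE technique, not Granite's actual implementation, and all names and dimensions here are made up for the example.

```python
import numpy as np

def moe_forward(x, expert_weights, gate_weights, k=2):
    """Route one token vector through the top-k experts of a toy MoE layer.

    x: (d,) token vector
    expert_weights: (n_experts, d, d) -- one weight matrix per expert
    gate_weights: (n_experts, d) -- the router

    Only k of n_experts matrices are multiplied per token, which is why
    "active" parameters can be far below "total" parameters.
    """
    logits = gate_weights @ x                       # router scores, (n_experts,)
    topk = np.argsort(logits)[-k:]                  # indices of the k best experts
    probs = np.exp(logits[topk] - logits[topk].max())
    probs /= probs.sum()                            # softmax over selected experts only
    # Combine only the selected experts' outputs, weighted by the gate.
    return sum(p * (expert_weights[i] @ x) for p, i in zip(probs, topk))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
out = moe_forward(rng.normal(size=d),
                  rng.normal(size=(n_experts, d, d)),
                  rng.normal(size=(n_experts, d)))
```

With k=2 of 16 experts selected, roughly an eighth of the expert parameters participate per token, mirroring (at toy scale) the 1B-of-7B active ratio described above.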
IBM Granite 3.3 8B Instruct
Granite 3.3 models feature enhanced reasoning capabilities and support for Fill-in-the-Middle (FIM) code completion. They are built on a foundation of open-source instruction datasets with permissive licenses, alongside internally curated synthetic datasets tailored for long-context problem-solving. These models preserve the key strengths of previous Granite versions, including support for a 128K context length, strong performance in retrieval-augmented generation (RAG) and function calling, and controls for response length and originality. Granite 3.3 also delivers competitive results across general, enterprise, and safety benchmarks. Released as open source, the models are available under the Apache 2.0 license.
Apr 16, 2025
Apache 2.0
Avg score: 89.7%
IBM Granite 3.3 8B Base
Granite-3.3-8B-Base is a decoder-only language model with a 128K token context window. It improves upon Granite-3.1-8B-Base by adding support for Fill-in-the-Middle (FIM) using specialized tokens, enabling the model to generate content conditioned on both prefix and suffix. This makes it well-suited for code completion tasks.
Apr 16, 2025
Apache 2.0
Avg score: 89.7%
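The FIM support described above works by rearranging the code around the gap into a single prompt using special tokens, so the model generates the missing middle conditioned on both sides. The sketch below assembles a prefix-suffix-middle (PSM) prompt; the token names used here follow the common StarCoder-style convention and are an assumption for illustration, not taken from Granite's actual vocabulary.

```python
def build_fim_prompt(prefix, suffix,
                     pre_tok="<fim_prefix>",
                     suf_tok="<fim_suffix>",
                     mid_tok="<fim_middle>"):
    """Assemble a prefix-suffix-middle (PSM) fill-in-the-middle prompt.

    The model generates the missing code after mid_tok, conditioned on
    both the code before the gap (prefix) and after it (suffix).
    NOTE: the special-token names are a common convention and may
    differ from the tokens Granite actually uses.
    """
    return f"{pre_tok}{prefix}{suf_tok}{suffix}{mid_tok}"

# Ask the model to fill in the body of add(), given the call site below it.
prompt = build_fim_prompt("def add(a, b):\n    return ",
                          "\n\nprint(add(2, 3))")
```

The completion the model emits after the middle token is then spliced back between the original prefix and suffix, which is how FIM-capable models power editor-style code completion.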