| Model | Description | Release Date | Benchmark 1 | Benchmark 2 | Benchmark 3 | Benchmark 4 | Benchmark 5 |
|---|---|---|---|---|---|---|---|
| o3-mini | A smaller variant of o3, expected to offer enhanced multimodal capabilities, improved reasoning, and more efficient resource use than previous models while maintaining strong performance on core tasks. | Jan 30, 2025 | 49.3% | 66.7% | - | - | - |
| o1-preview | A research preview focused on mathematical and logical reasoning, with improved performance on tasks requiring step-by-step reasoning, mathematical problem-solving, and code generation, while maintaining strong general capabilities. | Sep 12, 2024 | 41.3% | - | - | - | - |
| o1 | OpenAI's o1 reasoning model, focused on mathematical and logical reasoning, with improved performance on step-by-step reasoning, mathematical problem-solving, and code generation, while maintaining strong general capabilities. | Dec 17, 2024 | 41.0% | - | 88.1% | - | - |
| GPT-4o | GPT-4o ('o' for 'omni') is a multimodal model that accepts text, audio, image, and video inputs and generates text, audio, and image outputs. It matches GPT-4 Turbo performance on text and code, with improvements in non-English languages, vision, and audio understanding. | Aug 6, 2024 | 33.2% | 30.7% | - | - | - |
| GPT-4o mini | OpenAI's cost-efficient small model, designed to make AI intelligence more accessible and affordable. It excels at textual intelligence and multimodal reasoning, outperforming previous models such as GPT-3.5 Turbo. With a 128K-token context window and support for text and vision, it suits low-cost, real-time applications such as customer-support chatbots. Priced at 15 cents per million input tokens and 60 cents per million output tokens (a worked cost example follows the table), it is significantly cheaper than its predecessors, with built-in safety measures and improved resistance to security threats. | Jul 18, 2024 | 8.7% | - | 87.2% | - | - |
| Phi-3.5-mini-instruct | A 3.8B-parameter model supporting up to 128K context tokens, with improved multilingual capabilities across more than 20 languages. Additional training and safety post-training enhanced instruction following, reasoning, math, and code generation. Released under the MIT license, it is well suited to memory- or latency-constrained environments. | Aug 23, 2024 | - | - | 62.8% | - | 69.6% |
| GPT-3.5 Turbo | The latest GPT-3.5 Turbo model, with higher accuracy when responding in requested formats and a fix for a bug that caused a text-encoding issue in non-English function calls. | Mar 21, 2023 | - | - | 68.0% | - | - |
| GPT-4o | GPT-4o ('o' for 'omni') is a multimodal model that accepts text, audio, image, and video inputs and generates text, audio, and image outputs. It matches GPT-4 Turbo performance on text and code, with improvements in non-English languages, vision, and audio understanding. | May 13, 2024 | - | - | 90.2% | - | - |
| o1-mini | A cost-efficient language model from OpenAI, designed for advanced reasoning tasks while minimizing computational resources. | Sep 12, 2024 | - | - | 92.4% | - | - |
| GPT-4 Turbo | The latest GPT-4 model, with improved performance, updated knowledge, and enhanced capabilities. It offers faster response times and more affordable pricing than previous versions. | Apr 9, 2024 | - | - | 87.1% | - | - |
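The GPT-4o mini pricing quoted above ($0.15 per million input tokens, $0.60 per million output tokens) translates directly into per-request cost. Below is a minimal sketch of that arithmetic; the rates come from the table, while the function name and token counts are purely illustrative assumptions, not part of any OpenAI API.

```python
# Cost arithmetic for GPT-4o mini at the rates quoted in the table above.
# The token counts used in the example are hypothetical.

INPUT_RATE_PER_M = 0.15   # USD per 1,000,000 input tokens
OUTPUT_RATE_PER_M = 0.60  # USD per 1,000,000 output tokens

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD of a single request at the quoted rates."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a chatbot turn with a 2,000-token prompt and a 500-token reply.
print(f"${request_cost_usd(2_000, 500):.6f}")  # -> $0.000600
```

At these rates, even a million such chatbot turns would cost on the order of a few hundred dollars, which is the sense in which the table describes the model as suited to low-cost, real-time applications.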