OpenAI officially launched its latest AI models, o3 and o4-mini, on April 17, 2025, marking a significant advance in artificial intelligence capabilities. The new models are designed to enhance reasoning, problem-solving, and visual analysis, opening up a broader range of applications for developers and users alike.
According to OpenAI, o3 is the company’s most advanced reasoning model to date, excelling across mathematics, coding, science, and visual perception. On established academic benchmarks, it reportedly makes 20% fewer major errors than its predecessor, o1. Meanwhile, o4-mini is a smaller, faster model that offers a competitive trade-off between cost, speed, and performance, making it particularly appealing to developers.
Both o3 and o4-mini are now available for ChatGPT Pro, Plus, and Team users, with o4-mini also accessible to free-tier users through a new “Think” option when composing prompts. OpenAI has stated that enterprise and educational users will gain access within a week, further expanding the reach of these advanced models.
One of the standout features of these new models is their ability to integrate web browsing and image analysis into their reasoning processes. OpenAI claims that these capabilities enable the models to solve complex, multi-step problems more effectively. For instance, if a user asks about California’s summer energy usage, the model can retrieve data, generate a forecast, create a visual representation, and explain the reasoning behind its conclusions—all within a single response.
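For developers who want to reproduce this kind of tool-assisted query outside ChatGPT, the sketch below shows roughly what it could look like through OpenAI’s Python SDK. It is a minimal sketch, not an official example: it assumes the Responses API and its built-in web-search tool are available for these models, and the model identifier is taken from the announcement rather than verified against the API.

```python
# Minimal sketch, not an official example. Assumes the OpenAI Python SDK
# (openai>=1.x) exposes the Responses API with a built-in web-search tool,
# and that "o3" is the served model identifier.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3",  # assumed model ID
    tools=[{"type": "web_search_preview"}],  # built-in web-search tool
    input=(
        "How will California's summer energy usage compare to last year? "
        "Find recent data, produce a short forecast, and explain your reasoning."
    ),
)

print(response.output_text)  # aggregated text of the final answer
```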
OpenAI’s CEO, Sam Altman, emphasized that o3 and o4-mini are the first models in the o-series that can “think with images.” This means that users can upload images, such as sketches or diagrams, and the models will analyze these visuals as part of their reasoning chain. This multimodal approach is a groundbreaking step forward for AI, allowing for more nuanced interactions.
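In practice, “thinking with images” starts from the standard multimodal message format developers already use for vision-capable models. The following is a minimal sketch, assuming o4-mini accepts image inputs through the Chat Completions API; the model ID and image URL are placeholders, not details confirmed in the announcement.

```python
# Minimal sketch of sending an image to a reasoning model for analysis.
# Assumes the OpenAI Python SDK and that "o4-mini" accepts image inputs
# via the standard multimodal message format; the URL is a placeholder.
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="o4-mini",  # assumed model ID
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What does this circuit sketch do, and is anything miswired?",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sketch.png"},  # placeholder
                },
            ],
        }
    ],
)

print(completion.choices[0].message.content)
```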
AI expert Alexey Minakov, who tested both models, praised their performance, noting, “What can I say, very ‘smart’ models. It’s like having a world-class mathematician in your pocket.” He highlighted that the o3 model could generate a detailed psychological portrait and predict behaviors, showcasing its advanced reasoning capabilities.
In terms of performance metrics, o3 scored 69.1% on the SWE-bench Verified coding benchmark, with o4-mini close behind at 68.1%. OpenAI has set competitive pricing: o3 costs $10 per million input tokens and $40 per million output tokens, while o4-mini is priced at $1.10 per million input tokens and $4.40 per million output tokens. This pricing strategy aims to make the models’ advanced capabilities accessible to a wider audience.
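To make those rates concrete, here is a small back-of-the-envelope cost calculator; the per-token prices come from the figures above, but the workload sizes are illustrative assumptions.

```python
# Back-of-the-envelope API cost estimate from the published per-million-token rates.
# The workload sizes below are illustrative assumptions, not OpenAI figures.
RATES_PER_MILLION = {
    "o3": {"input": 10.00, "output": 40.00},
    "o4-mini": {"input": 1.10, "output": 4.40},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request for the given model."""
    rates = RATES_PER_MILLION[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Example: a 2,000-token prompt producing a 1,000-token answer.
for model in RATES_PER_MILLION:
    print(f"{model}: ${estimate_cost(model, 2_000, 1_000):.4f}")
# o3: $0.0600, o4-mini: $0.0066 -- roughly a 9x price gap at these rates.
```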
OpenAI is also introducing Codex CLI, a lightweight, open-source coding agent that can run locally on users’ computers. This new tool will allow users to leverage the reasoning capabilities of o3 and o4-mini directly in their coding environments, supporting tasks such as reading screenshots and interacting with codebases. OpenAI has committed to supporting innovative projects utilizing Codex CLI with a $1 million grant program.
Looking ahead, OpenAI plans to release o3-pro in the coming weeks, a version that will use more computational resources to generate even more sophisticated responses. Altman noted that o3 and o4-mini may be the last standalone reasoning models before the anticipated release of GPT-5, which is expected to merge traditional models with the advanced reasoning capabilities of the o-series.
The launch is a pivotal moment for OpenAI: by reasoning over both text and images, o3 and o4-mini are set to redefine how users interact with AI, tackling more complex queries and tasks with greater efficiency and accuracy.
In summary, o3 and o4-mini reflect OpenAI’s commitment to pushing the boundaries of artificial intelligence. As the models reach a broader audience, their impact on sectors ranging from software development to scientific research is expected to be profound, paving the way for innovative applications and solutions.