OpenAI Introduces New AI Models: o3 and o4-mini


OpenAI presents two new models that can reason better and use multiple tools simultaneously.

The new multimodal AI models from OpenAI are called o3 and o4-mini and are described as “the smartest models ever” in the announcement. They combine advanced reasoning with capabilities such as web browsing and coding, and they are the first models that can use every ChatGPT tool, including visual analysis and image generation.

Reasoning with Images, Code, and Data

OpenAI calls this new process ‘simulated reasoning’: a multi-step thinking process. The two models differ in their intended use cases and speed. According to OpenAI, o3 is designed for complex analyses and costs $10 per million input tokens, while o4-mini is aimed at smaller tasks but is still powerful enough for many applications.

Additionally, OpenAI is launching a developer tool, Codex CLI. The open-source terminal app is described as “a coding agent that you can run locally”. It connects the models to your machine and your local code, letting you generate, test, and run AI-generated code on your own computer. Codex CLI resembles Claude Code from Anthropic, but naturally runs on OpenAI’s own models.
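Getting started with Codex CLI looks roughly like this; a minimal sketch, assuming the npm package name `@openai/codex` from OpenAI’s repository and API-key authentication via an environment variable:

```shell
# Install the CLI globally via npm (requires Node.js; package name
# @openai/codex, per OpenAI's open-source repository)
npm install -g @openai/codex

# Authenticate with an API key in the environment (placeholder value)
export OPENAI_API_KEY="your-api-key"

# Run the agent against the code in the current directory; the quoted
# prompt and file path here are purely illustrative
codex "fix the failing unit test in tests/test_parser.py"
```

The agent reads local files, proposes changes, and can run commands, which is why it ships as a terminal tool rather than a web interface.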

o3 and o4-mini are available now for ChatGPT Plus, Pro, and Team subscribers. Free-tier users can try o4-mini temporarily via the “Think” option. Next week, the models will also come to Enterprise and education subscriptions. Developers can use them via the API today, although additional verification may be required in some cases.
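For developers, calling the new models through the API might look like the sketch below; it only assembles the request parameters so it runs without a network connection, and it assumes the official `openai` Python SDK and its Responses API (the `reasoning` effort setting is an assumption based on the reasoning-model docs):

```python
def build_request(model: str, prompt: str, effort: str = "medium") -> dict:
    """Assemble keyword arguments for a hypothetical call to
    client.responses.create() in the openai Python SDK."""
    return {
        "model": model,              # e.g. "o3" or "o4-mini"
        "input": prompt,
        "reasoning": {"effort": effort},  # assumed knob: low / medium / high
    }

params = build_request("o3", "Summarize the error patterns in this log.")

# With the SDK installed and OPENAI_API_KEY set, you would then run:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.responses.create(**params)
#   print(response.output_text)
print(params["model"])
```

Keeping the parameters in a plain dict like this makes it easy to swap `"o3"` for `"o4-mini"` when a cheaper, faster model is sufficient.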