OpenAI makes o3-mini generally available


OpenAI has made its LLM o3-mini generally available to ChatGPT users and developers.

OpenAI released a preview of o3-mini, the cheaper version of o3, back in December. The o3 model is the main reasoning model in OpenAI’s lineup, but o3-mini is faster, with latency comparable to that of o1-mini.

Three versions

OpenAI has also made the new model available through several of its application programming interfaces (APIs). Developers can use the APIs to integrate o3-mini into their applications. The API version of the LLM is available in three editions with different output quality: o3-mini-low, o3-mini-medium and o3-mini-high.

OpenAI’s reasoning models are distinguished by the increased quality of their prompt responses. That quality comes from spending more compute on each response. The entry-level version, o3-mini-low, uses the least compute per answer, while o3-mini-high is the most compute-intensive.
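For developers, integration could look roughly like the minimal sketch below. It assumes the official openai Python SDK and assumes the three tiers are selected via a reasoning_effort parameter on a single o3-mini model, rather than as three separately named models as the listing above might suggest.

```python
# Minimal sketch: calling o3-mini through OpenAI's Chat Completions API.
# Assumes the official `openai` Python SDK and that the low/medium/high
# tiers are chosen with the `reasoning_effort` parameter (an assumption,
# not confirmed by the article, which lists them as separate editions).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # "low", "medium" or "high"
    messages=[
        {"role": "user", "content": "Summarize the difference between o1-mini and o3-mini."}
    ],
)

print(response.choices[0].message.content)
```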

OpenAI has made o3-mini available in the Free, Plus, Pro and Team editions of ChatGPT. This week it will be rolled out to the Enterprise plan, SiliconANGLE writes. In the Plus and Team plans, users can send up to 150 messages per day, three times the limit for o1-mini.