OpenAI launches its own cybersecurity model: competition for Anthropic?

OpenAI launches GPT-5.4-Cyber, an AI model specifically trained for cybersecurity. The new model is designed to provide defensive capabilities to verified users.

The launch comes just days after Anthropic unveiled its own AI security agent, Claude Mythos. The rollout of GPT-5.4-Cyber is taking place within OpenAI’s “Trusted Access for Cyber” program. With this customized version of GPT-5.4, OpenAI aims to support professionals responsible for critical software and infrastructure.

More freedom

GPT-5.4-Cyber is a variant of GPT-5.4, specifically trained to support cyber defenders. The model is less strict about refusing requests for legitimate cybersecurity tasks. This allows security experts to test whether their software is vulnerable to exploitation by cybercriminals.

Not everyone can simply use the model: it is currently only being rolled out within the Trusted Access for Cyber (TAC) program, with access phased in for screened security vendors, organizations, and researchers.

Following in Anthropic’s footsteps

The timing of this announcement is no coincidence: Anthropic launched its first AI security agent, Claude Mythos, only days earlier. According to the company, the model has already discovered thousands of vulnerabilities, including a 27-year-old exploit.

Just like GPT-5.4-Cyber, Anthropic’s Claude Mythos has limited availability. “Claude Mythos is available to companies managing the world’s most critical code to explore how they can use the model to mitigate security risks,” said Logan Graham, Frontier Red Team Lead at Anthropic, in a recent announcement. Anthropic has therefore not yet brought the model to the wider market.

The first AI security agents are emerging, and their arrival raises questions about who should have access to them. Anthropic is keeping its model closed for now, emphasizing the risks these models carry. “An AI model like this could also cause harm if it falls into the wrong hands,” Graham stated. OpenAI’s model, on the other hand, is based on an existing model—GPT-5.4—and will become more widely available after a testing period.

OpenAI recently sidelined its Sora video tool to focus more on its coding tools, among other projects.