Itdaily - Microsoft uses Claude Mythos to detect vulnerabilities: painful for OpenAI



Microsoft wants to integrate Anthropic’s Claude Mythos model into its internal software development process to create secure code.

Microsoft has revealed how it plans to use AI for more secure software development. In a blog post, the company discloses that it is one of the parties with access to Anthropic’s Claude Mythos, the AI model that has been shaking the security world to its foundations for weeks. Microsoft wants to embed the model directly in its development cycle to identify vulnerabilities in code.

More secure code development is one of three security domains where Microsoft wants to lean more heavily on AI. Alongside developing AI-powered security products, AI should also help inform companies of security risks more quickly and help mitigate them.

Tested and approved

Preventing vulnerabilities, preferably as early as possible in the development process, remains the best remedy against security incidents. To achieve this, Microsoft wants to use Anthropic’s AI security model. Through Anthropic’s Project Glasswing program, it is one of the select approved parties with access to the model.

In the short time it has been available, Claude Mythos has already proven to be very proficient at detecting vulnerabilities. More than a thousand vulnerabilities have reportedly surfaced thanks to Mythos, including one that had gone unnoticed for 27 years. Mozilla, the company behind the Firefox browser, has already tracked down 271 vulnerabilities with the help of Claude Mythos.

Microsoft states that it has put Mythos through rigorous benchmarking. It points to the CTI-REALM benchmark, a test for evaluating the security capabilities of AI models that Microsoft co-developed. Claude Mythos came out on top in that benchmark. By integrating Claude Mythos into the development process, Microsoft hopes to detect and mitigate vulnerabilities at an early stage.

Behind closed doors

It has now been two weeks since Anthropic showed its Claude Mythos model to the world, and it hasn’t been out of the news for a single day since. The banking and insurance sectors fear doom scenarios, while security experts and guardians of critical infrastructure want to be on the guest list at all costs. The American cybersecurity agency NSA is even flouting an embargo against Anthropic.

Precisely because Mythos has proven so proficient at detecting vulnerabilities, there are fears about what would happen if it fell into the wrong hands. Opponents and doomsayers worry that Claude Mythos is potentially so powerful that cracking software becomes child’s play. That fear is no longer purely theoretical: this week it emerged that ‘unauthorized users’ managed to bypass the lock and key behind which Claude Mythos is kept.

Painful for OpenAI

The fact that Microsoft specifically chose Anthropic’s Claude Mythos is an affront to OpenAI. OpenAI tried to steal the spotlight with its own cybersecurity model just days after the launch of Mythos. Yet no one is talking about GPT-5.4-Cyber: even OpenAI’s most loyal partner prefers Anthropic. Microsoft has been cozying up to Anthropic for some time: the Microsoft 365 applications are also equipped with a Claude integration.

Redmond itself has some making up to do on security. For many years, Microsoft has faced criticism from the security community over how it handles vulnerabilities and security incidents.