OpenAI launches Aardvark, a new AI agent that detects and analyzes security vulnerabilities in software. The tool is currently in private beta and is powered by GPT-5.
OpenAI has introduced Aardvark, a new AI agent designed to help security teams discover and resolve vulnerabilities in software code. Powered by GPT-5, the agent reads and analyzes code the way a human researcher would. Aardvark is currently available only in private beta.
AI Security Researcher
OpenAI describes its new AI agent Aardvark as an “AI security researcher”: a model, powered by GPT-5, that can discover and resolve vulnerabilities in software code.
Once Aardvark detects a vulnerability, it attempts to trigger it in a sandbox environment to confirm that it is exploitable. Based on the results, it proposes targeted patches through an integration with Codex, OpenAI’s AI coding tool, so developers receive both an explanation and a proposed fix. Aardvark is integrated with GitHub and fits within existing development workflows.
Test Results
Aardvark is already deployed within OpenAI and at several partners. According to OpenAI, it has internally uncovered several significant vulnerabilities. In benchmark tests, Aardvark identified 92 percent of the known and artificially introduced vulnerabilities in the tested repositories. It has also discovered dozens of issues in open-source projects, ten of which have received an official CVE identifier.
According to OpenAI, Aardvark should help improve the security of the broader software ecosystem. Those who want to try the AI agent can sign up for the private beta.
