Google researchers have found that attackers are using AI across almost the entire attack chain. Existing attacks are becoming more efficient and convincing, but fundamentally new AI-driven threats have yet to emerge.
Attackers are increasingly using generative AI as an accelerator in nearly every step of the attack chain. This is according to the Google Threat Intelligence Group (GTIG) in a report on what the team observed in the wild during the final quarter of 2025. According to GTIG, AI makes existing tactics more efficient and credible, but researchers are not yet seeing fundamentally new, AI-driven cyberattacks that are upending the threat landscape.
Faster research and more targeted phishing
An LLM, for example, helps attackers build profiles of potential targets more quickly. AI accelerates several steps: summarizing organizational structures, identifying hierarchies and decision-makers, and refining target lists. GTIG describes how criminals abused AI to collect sensitive account information and email addresses, then funneled those same targets into phishing campaigns shortly afterwards.
These phishing campaigns are becoming more realistic, even at scale. Poor grammar and odd sentence structures were long reliable red flags. According to GTIG, attackers now use LLMs specifically to eliminate those weaknesses: hyper-personalized, culturally fluent lures are starting to appear.
Google researchers are also seeing AI assist with semi-personalized phishing. In so-called rapport-building phishing, a model helps conduct credible, multi-step conversations before a payload follows. This builds the target’s trust in the supposed ‘interlocutor’ before they are finally enticed to click.
Helpdesk
GTIG also sees AI playing a role in coding and bug fixing. AI can summarize readmes, debug scripts, translate code, and create test plans for vulnerabilities. Additionally, there are experiments with malware that uses AI for code generation (such as a downloader that calls an LLM API to write out its second-stage functionality) and phishing kits that are likely built faster with AI coding assistance and modern web frameworks.
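To make that concrete: the mechanics are no different from everyday developer tooling. Below is a minimal sketch of the kind of LLM-assisted task GTIG describes, here a benign readme summary. It assumes an OpenAI-style chat completions endpoint; the URL, model name, and environment variable are illustrative assumptions, not details from the GTIG report.

```python
import os
import requests

# Illustrative assumptions: an OpenAI-style chat completions endpoint,
# a model name, and an API key stored in an environment variable.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

def summarize_readme(readme_text: str) -> str:
    """Ask the model for a short summary of a project's README.

    The same request pattern covers the other tasks GTIG lists:
    swap the prompt to debug a script, translate code, or draft
    a test plan for a vulnerability.
    """
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [
                {"role": "system",
                 "content": "Summarize technical documents in three sentences."},
                {"role": "user", "content": readme_text},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    with open("README.md", encoding="utf-8") as f:
        print(summarize_readme(f.read()))
```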
What GTIG has found is not actually surprising. Criminals are deploying AI in the same way as legitimate employees: the technology helps summarize information, draft emails, and write code, whether the user is an office worker or a hacker.
