Security firm Barracuda warns of a large-scale campaign in which criminals impersonate OpenAI. The impact of generative AI on cybercrime is growing but remains limited.
Barracuda researchers have uncovered a phishing campaign in which attackers impersonate OpenAI. They target companies worldwide with emails asking recipients to update their payment information for an OpenAI subscription, on the assumption that the victim has a paid ChatGPT Plus subscription.
According to Barracuda, the emails in this phishing campaign closely resembled real messages from OpenAI, with a legitimate-looking sender address and an urgent tone. The hyperlink in the email was obfuscated, and the actual URL differed from message to message. This pattern fits a broader trend in which cybercriminals use generative AI technologies to mount more convincing phishing attacks.
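Barracuda has not published its detection logic, but the two telltales described above, a spoofed sender address and a link whose real destination does not match the claimed brand, can be illustrated with a minimal sketch. The Python example below is an assumption-laden illustration, not OpenAI's or Barracuda's tooling; the expected domain constant and the sample message are hypothetical.

```python
# Illustrative sketch (not Barracuda's detection logic): flag an OpenAI-branded
# email whose sender domain or embedded link targets do not match openai.com.
# The expected domain and the sample message below are assumptions for the demo.
from email import message_from_string
from email.utils import parseaddr
from html.parser import HTMLParser
from urllib.parse import urlparse

EXPECTED_DOMAIN = "openai.com"  # assumed legitimate sender/link domain


class LinkExtractor(HTMLParser):
    """Collect href targets from an HTML email body."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def suspicious_indicators(raw_email: str) -> list[str]:
    """Return human-readable reasons an OpenAI-branded email looks suspicious."""
    msg = message_from_string(raw_email)
    findings = []

    # 1. The sender domain should match the brand the email claims to come from.
    _, sender = parseaddr(msg.get("From", ""))
    sender_domain = sender.rsplit("@", 1)[-1].lower() if "@" in sender else ""
    if sender_domain != EXPECTED_DOMAIN and not sender_domain.endswith("." + EXPECTED_DOMAIN):
        findings.append(f"sender domain '{sender_domain}' does not match {EXPECTED_DOMAIN}")

    # 2. Embedded links should point at the expected domain, not per-message lookalikes.
    body = msg.get_payload()
    extractor = LinkExtractor()
    extractor.feed(body if isinstance(body, str) else "")
    for href in extractor.links:
        host = (urlparse(href).hostname or "").lower()
        if host and host != EXPECTED_DOMAIN and not host.endswith("." + EXPECTED_DOMAIN):
            findings.append(f"link points to unexpected host '{host}'")

    return findings


if __name__ == "__main__":
    # Hypothetical phishing message mimicking a payment-update request.
    sample = (
        "From: OpenAI Billing <billing@openai-payments.example>\n"
        "Subject: Update your payment information\n"
        "Content-Type: text/html\n\n"
        '<p>Please <a href="https://openai-billing.example/update">update your card</a>.</p>\n'
    )
    for reason in suspicious_indicators(sample):
        print("SUSPICIOUS:", reason)
```

Run against the hypothetical sample, the sketch flags both the mismatched sender domain and the lookalike link host; real-world filtering would additionally rely on signals such as SPF, DKIM and DMARC results.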
Since the launch of ChatGPT, analysts have observed an increase in email attacks, such as spam and phishing, that involve generative AI. Companies are concerned that their existing security measures are failing in the face of AI-powered attacks. AI makes it easier for attackers to target end users, for example by generating persuasive phishing emails and websites.
Limited impact of AI
Despite the concerns, Verizon’s 2024 Data Breach Investigations Report indicates that generative AI was mentioned in fewer than 100 cyber incidents worldwide last year. The report showed that the number of incidents in which AI was cited remained low, especially compared with traditional attack methods such as phishing and malware. Forrester analysts likewise noted in a 2023 report that while AI helps make phishing campaigns more convincing and larger in scale, it does not fundamentally change the nature of cyber attacks.
Nevertheless, experts expect AI innovations to enable more complex threats in the future. Organizations are therefore advised to stay alert to signs of phishing and to strengthen their basic security, which remains an effective way to protect against evolving cyber risks.