OpenAI stops 20 cyber operations abusing ChatGPT


OpenAI has disrupted 20 operations and deceptive networks this year that used ChatGPT to generate malicious code.

Hackers are finding their way into AI. Several reports show, for example, that hackers are using AI to build malware. OpenAI now shares new findings in its own Threat Intelligence report: the company has disrupted some 20 operations and deceptive networks that abused its chatbot ChatGPT. Hacker groups used the AI to spread disinformation, bypass security systems, and conduct spearphishing attacks.

Misuse of models

Just as AI models can help developers generate code, hackers have discovered that potential as well. Recent reports revealed that certain malware attacks contained AI-generated code.

OpenAI, the AI company behind the well-known chatbot ChatGPT, recently shared its Threat Intelligence report. In it, the company highlights that it has already disrupted 20 operations and deceptive networks from around the world that sought to use the AI model to develop malicious code.


In the report, OpenAI also analyzes how threat actors attempt to use AI. Hacker groups have allegedly used AI to spread disinformation, bypass security systems, and conduct spearphishing attacks. OpenAI cites several examples, including the Chinese group SweetSpecter, as well as CyberAv3ngers and STORM-0817.

The company emphasized that it continues to work with its intelligence, research, security, safety, and policy teams to anticipate how malicious actors may use advanced models for dangerous purposes and to plan enforcement steps accordingly.