AI is also changing the way cybercriminals work: malware that rewrites itself is pushing cybercrime into a new phase.
Self-learning Malware
Google reports that malicious code is, for the first time, using AI to adapt its behavior during execution. Some malware families are no longer static: at runtime they send prompts to a language model, receive freshly generated source code, and then execute the new variant. That makes the malware easier to evolve and harder to remove.
Examples and Tactics
Among the examples examined by BleepingComputer, Promptflux stands out: a Trojan horse that installs further malware. It prompts Gemini to rewrite its own source code and saves the regenerated copy in the Windows Startup folder, so it runs again at every boot. Other prototypes use LLMs to generate scripts that harvest data, scan systems, or copy stolen login credentials to public repositories. Because the generated scripts vary from run to run, they are difficult to detect for the time being.
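The Startup-folder persistence described above is at least something defenders can audit cheaply. A minimal sketch of such a triage check, in Python; the function name, the 24-hour window, and the extension list are illustrative assumptions, not details from Google's report, and the folder path is passed in rather than hard-coded so the idea stays portable:

```python
from pathlib import Path
import time

def recent_startup_scripts(startup_dir: str, max_age_hours: float = 24.0,
                           suspicious_exts=(".vbs", ".js", ".ps1", ".bat")):
    """Return script files in the given Startup folder modified recently.

    A dropper that keeps re-saving a regenerated variant of itself will
    also keep refreshing the file's modification time, so "recently
    changed script in Startup" is a cheap triage signal: noisy, and not
    proof of anything, but a reasonable place to start looking.
    """
    cutoff = time.time() - max_age_hours * 3600
    hits = []
    for p in Path(startup_dir).iterdir():
        if p.suffix.lower() in suspicious_exts and p.stat().st_mtime >= cutoff:
            hits.append(p)
    return sorted(hits)
```

On Windows the per-user folder is typically `%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup`; a real tool would also check the all-users folder and the Run registry keys.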
Nevertheless, Google says insights from these analyses have been used to strengthen both classic detection mechanisms and the models themselves. DeepMind has tuned its models to refuse to assist this type of attack and to recognize suspicious prompts. So AI is now fighting AI.
Defense Evolves Along
Classic malware follows fixed routines; this AI-assisted malware can change itself mid-execution based on model output. That extends attackers' reach and pushes security teams to ask how detection can likewise adapt to context, rewriting its own rules rather than matching fixed signatures.
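One way to read "detection that adapts" is scoring behavior instead of matching exact signatures. The toy heuristic below is purely illustrative, nothing like a production EDR rule: each regex, marker, and threshold is a hypothetical example of the behavioral pattern the article describes (dynamic code execution combined with calls to a generative-AI endpoint and self-rewriting):

```python
import re

# Illustrative behavioral markers, not a real detection ruleset.
DYNAMIC_EXEC = re.compile(r"\b(exec|eval|Execute|Invoke-Expression)\b")
LLM_ENDPOINT = re.compile(
    r"generativelanguage\.googleapis\.com|/v1/chat/completions")
SELF_WRITE = re.compile(r"Startup|open\(__file__|WScript\.ScriptFullName")

def score_script(text: str) -> int:
    """Score 0-3: one point per behavioral marker found in the script."""
    return sum(bool(p.search(text)) for p in (DYNAMIC_EXEC, LLM_ENDPOINT, SELF_WRITE))

def is_suspicious(text: str, threshold: int = 2) -> bool:
    # Any single marker is common in benign code (plenty of legitimate
    # tools call LLM APIs); it is the combination that is rarer.
    return score_script(text) >= threshold
```

Because the score rests on what the script does rather than on its exact bytes, it survives the constant rewriting that defeats fixed signatures; real systems would combine many such signals with runtime telemetry.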
