Companies are deploying AI faster than they can manage the risks, according to new research from TrendAI. Two-thirds of IT decision-makers feel pressured to approve AI projects even when ‘extreme’ security concerns are flagged, and most set those concerns aside to stay ahead of the competition.
The race to deploy AI is overshadowing the need for solid governance. A global study by TrendAI, formerly known as Trend Micro, among 3,000 IT decision-makers shows that 67 percent are under pressure to authorize AI projects, even when security risks are labeled as ‘extreme.’ Only one in seven cites those concerns as decisive; the rest brush them aside to keep up with the competition or meet internal demand.
This situation is exacerbated by inconsistent governance and unclear responsibilities for AI risks. Security teams often work reactively, which encourages the use of unauthorized ‘shadow AI’ tools. TrendAI warns that attackers are already using AI to accelerate cyberattacks, such as automated reconnaissance and phishing, lowering the barrier to entry for cybercrime.
AI without rules
Organizations are implementing AI faster than they can secure it, 57 percent of respondents admit, while 64 percent have only moderate confidence in their knowledge of the legal frameworks surrounding AI. Only 38 percent have comprehensive AI policy frameworks in place, while 41 percent perceive unclear regulations as a barrier.
In practice, this means that AI is often put into production before the rules governing its use are fully established. The result: AI systems influence critical business processes without the necessary controls in place. Rachel Jin, Chief Platform & Business Officer at TrendAI, emphasizes that competitive pressure leads to implementation without adequate governance, resulting in unpredictable risks.
Low confidence in autonomous AI
Despite the growing adoption of autonomous AI systems, confidence in them remains limited. Only 48 percent believe that agentic AI will improve cyber defense in the short term, while concerns exist regarding data access, misuse, and lack of oversight. 44 percent recognize the risk of AI agents gaining access to sensitive data, while 36 percent fear malicious prompts that could compromise security.
A third of organizations fear a growing attack surface for cybercriminals, alongside risks such as abuse of an AI agent’s trusted status and problems with autonomously executed code. Additionally, 31 percent admit they have no visibility into these systems, raising the question of how they could intervene in the event of failure or misuse. Nearly 40 percent support the introduction of a ‘kill switch’ to shut down systems during failure or abuse, but nearly half are hesitant.
Governance first
The discrepancy between AI adoption and risk management is becoming increasingly clear. TrendAI warns that organizations are implementing systems they do not fully understand or control, resulting in increasing risks. Without visibility and control, AI threatens to become a new category of business risk.
TrendAI concludes with a call to action: organizations must adapt their governance to the speed of AI adoption. Only then can the balance between innovation and security be restored before the risks become unmanageable.
