CEO Enthusiastic About AI and Security, CISO Much Less So

While business leaders worldwide want to embrace AI for rapid innovation, robust security practices are lagging behind. A gap is emerging between the CISO and the rest of management.

A gap is emerging between the role and responsibility of the CISO on the one hand, and the AI ambitions of the rest of company management on the other. This is evident from a study by NTT Data, conducted among 2,300 senior professionals, including 1,500 C-level managers, in 34 countries.

The research indicates that 95 percent of respondents see generative AI as an innovation engine. Companies are acting on this: 99 percent plan further AI investments. Security is not forgotten in theory: 94 percent are also investing more in security.

Wide Gap

In practice, however, security is lagging behind. Only 24 percent of the surveyed CISOs think their organization has developed a robust framework that correctly balances the risks and benefits.

Only 38 percent of CISOs believe the GenAI strategy is well-aligned with the security strategy. CEOs see it more optimistically: 51 percent think that balance is fine.

The discrepancy between the optimistic view of the CEO and the other C-level managers on one side, and the view of the CISO on the other, is large. For example, 69 percent of CISOs indicate that their team currently lacks the skills to successfully address GenAI challenges. Furthermore, only 20 percent of CEOs find internal guidelines around policy and responsibility unclear, compared to 54 percent of CISOs.

It is not surprising, then, that 45 percent of CISOs ultimately hold a negative sentiment towards GenAI. They feel pressured, threatened, and overwhelmed. Among respondents in other roles, only 19 percent share these negative feelings.

Built-in Security

It seems that widespread enthusiasm for AI is causing familiar mistakes to be repeated. Rapid implementation leads to inadequate security. Yet with AI too, security should be part of the design and implementation from the start. Earlier research by ISACA already showed that this is all too often not the case: half of security teams have no say in application development.

A well-developed framework that takes relevant legislation into account is also essential for structurally safe deployment of AI. According to the report, there is still much work to be done in this area, and certainly more than the CEO thinks.