Nearly half of companies do not involve their security teams in the implementation and rollout of AI solutions. Where AI does play a role within cybersecurity, it is mainly to improve threat detection and response and endpoint security.
Only 35 percent of cybersecurity specialists report being actively involved in developing AI policies within their enterprise, and nearly half (45 percent) are not involved at all in the development or implementation of AI solutions. That’s according to ISACA’s 2024 State of Cybersecurity report, presented in Dublin last week.
Security as an afterthought
More than 1,800 cybersecurity professionals were surveyed about the role of security in AI. The gap between the rollout of AI projects and the involvement of security professionals is striking, and it puts organizations at risk of repeating past mistakes.
After all, it is clear by now that software should be developed and deployed with security as a priority from the start. Bolting security onto an existing solution after the fact rarely works well. Yet with AI, security once again seems relegated to second place, and that brings predictable risks.
ISACA, an organization committed to digital trust in professional contexts, concludes from the survey that much work remains to be done. Cyber threats are becoming ever more complex and sophisticated, which makes it all the more important to involve security specialists in AI projects. AI can also play a significant role in security itself, but so far only a minority embraces it.
AI and security
Organizations that do combine AI and security do so primarily to strengthen their threat detection and response capabilities: 28 percent of respondents indicated that such projects are underway. Endpoint security is another popular area in which to apply AI (27 percent).
Organizations are also trying to address the shortage of security professionals by automating routine tasks: 24 percent of respondents say they use AI for that purpose. A smaller group (13 percent) uses AI for fraud detection.
Challenges
Alongside the survey, ISACA identifies four key trends around digital trust for the near future. AI-driven threats rank first. Here ISACA refers primarily to generative AI, which lets criminals easily generate highly convincing phishing emails. Deepfakes and personalized attacks will become increasingly prevalent, according to the organization.
The second major challenge is the gap between the number of security specialists needed and the available talent. AI may narrow that gap somewhat, but the rapid digital transformation of European companies is widening it at the same time.
As a third trend, ISACA points to the growing body of regulations with which companies must comply; the AI Act and NIS2 are prime examples. All of this culminates in a fourth trend: the role of the cybersecurity specialist becomes key to future success, because each of these challenges is tied to security.