Many companies do not regularly update privacy training

Nearly one-third of companies are not updating employee privacy training sufficiently, leaving employees inadequately prepared for new privacy and cyber threats. That’s according to research from ISACA.

ISACA's State of Privacy 2025 report, published to coincide with Safer Internet Day, shows that although 87 percent of organizations offer privacy training, only 68 percent regularly update the content.

ISACA's findings show that while many companies offer privacy training, they do not consistently keep it current. Only 59 percent of organizations update it annually, while nine percent do so only once every two to five years. As a result, ISACA warns, a significant portion of employees are not receiving up-to-date knowledge about new privacy threats.

This gap is problematic, according to ISACA, which cites World Bank figures showing that cyber incidents have increased by an average of 21 percent per year over the past decade. Cybersecurity and privacy are therefore no longer merely technical issues, but strategic challenges that affect digital trust. Regular training plays an important role in risk management and data protection.

AI and IoT increase challenges

In addition to the need for better training, the report points to the emergence of new technologies such as artificial intelligence (AI) and the Internet of Things (IoT). These technologies increase the vulnerability of organizations, requiring both employees and business leaders to be prepared for potential incidents.

AI is also playing a growing role in privacy management. Currently, 11 percent of companies use AI to automate privacy-related tasks, such as risk assessments and compliance checks. While this technology can improve operational efficiency, it also carries risks. A lack of transparency in AI algorithms can damage trust and lead to violations of privacy laws, such as the recently introduced EU AI Act.

Earlier, based on ISACA figures, we reported that privacy teams are shrinking despite growing risks, a trend that further compounds the problem.