Deepfakes undermine digital trust in your company: What can you do?

Criminals are using AI to create deepfakes. This poses a risk not only in terms of phishing but also to trust in your business. According to ISACA, organizations should guard that digital trust carefully.

Deepfakes are no longer found only in the depths of social media. AI-manipulated images have now grown into an acute business risk. Hyperrealistic fake videos and AI-mimicked voices are misleading accounting teams, support teams, and even board members.

Internal and external risk

The problem isn’t entirely new. ISACA has been warning about AI-driven threats for some time. Last year, the organization already highlighted risks ranging from convincing phishing to deepfakes. Digital trust is, after all, the foundation for audit, security, privacy, and compliance. Frameworks and certifications can help organizations standardize processes.

These threats don’t just pose an internal risk. One convincing clip is enough to undermine customer trust. According to ISACA, the gap between the importance of digital trust and the operational readiness of organizations is growing. Unfortunately, this is precisely the space where deepfakes flourish.

From incident to crisis in 60 minutes

Organizations are already seeing the effect today: false video instructions from a “CEO”, manipulated advertisements, and fake helpdesks. The financial and reputational damage can quickly accumulate. According to ISACA, the time to respond is limited: the first hour determines whether an incident grows into a crisis.

Data from ISACA’s own research underscores the urgency of the problem: 82 percent of European IT and business professionals believe that digital trust will become even more important in the next five years, yet almost three-quarters say their company doesn’t offer training on it; 64 percent directly link loss of trust to reputational damage.

Acute issue

“Deepfakes are already a business problem today, not a future one,” emphasizes Chris Dimitriadis, Chief Global Strategy Officer at ISACA. “Add AI-driven detection and provenance checks to your media processes, update crisis scripts for synthetic media, and treat deepfakes as part of your broader fraud and security program, not as a separate risk.”

Deepfakes are already a business problem today, not a future one.

Chris Dimitriadis, Chief Global Strategy Officer at ISACA

AI detectors that analyze lip-sync, micro-expressions, or lighting are useful, but insufficient if they’re not embedded in processes. According to Dimitriadis, governance is what makes the difference: who escalates, how quickly, what evidence you keep, and who speaks publicly, and when? “Without governance, you react ad hoc and remain vulnerable. Equally important is training: everyone, from board to frontline, must recognize deepfake signals and know how to act,” says Dimitriadis.
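
To make that concrete, here is a minimal sketch of what codified escalation governance could look like. Everything in it (the Detection fields, the role names, the thresholds and deadlines) is a hypothetical placeholder, not anything ISACA prescribes; the point is simply that a detector score only becomes useful once a policy turns it into an owner, a deadline, and evidence duties.

```python
# Minimal sketch: turning a detector result into a governance decision.
# All fields, roles, and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    channel: str         # e.g. "social", "press", "support-call"
    confidence: float    # detector score, 0.0-1.0
    involves_exec: bool  # does the clip impersonate leadership?

def escalation_path(d: Detection) -> dict:
    """Map a detection to an owner, a response deadline, and evidence duties."""
    if d.involves_exec or d.confidence >= 0.9:
        return {"owner": "crisis-team", "respond_within_min": 30,
                "preserve": ["original file", "source URL", "detector report"],
                "spokesperson": "comms-lead"}
    if d.confidence >= 0.6:
        return {"owner": "security-ops", "respond_within_min": 120,
                "preserve": ["original file", "detector report"],
                "spokesperson": None}
    return {"owner": "triage-queue", "respond_within_min": 480,
            "preserve": ["detector report"], "spokesperson": None}

# A high-confidence clip impersonating an executive goes straight to the crisis team.
print(escalation_path(Detection("social", 0.93, involves_exec=True)))
```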

The EU AI Act and transparency obligations around synthetic media raise the baseline, but don’t replace internal discipline. Companies must translate policy into workable controls without stifling innovation. In the public sector, for example, AI literacy has become an explicit obligation: people using AI must have the technical, practical, and legal background to do so responsibly.

Practical defense plan

Organizations shouldn’t be intimidated. Dimitriadis shares practical steps that every company can take now to arm itself against the risks of deepfakes.

  • Integrate detection: Add AI detection to media workflows (PR, social, online customer care) and check metadata where possible. Combine automatic detection with human-in-the-loop reviews. This prevents false content from gaining traction through your own channels (see the first sketch after this list).
  • Expand incident response to cover synthetic media: Develop playbooks for rapid triage (within 15-30 minutes), forensic evidence preservation, legal checks (portrait rights, trademark infringement), and initial communication (fact-checking in progress, “we are proactively investigating and blocking”). Appoint spokespersons and escalation lines in advance (second sketch below).
  • Link deepfakes to existing fraud processes: Don’t place deepfakes in a silo. Set limits on payment authorizations via voice/video, introduce call-back verifications through a separate channel, and centrally log deviations for threat intel and post-mortems (third sketch below).
  • Train broadly: Skills and training are the front line. Build digital literacy and a culture of healthy skepticism: pause before you click, share, or act.
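
For the first step, a minimal sketch of a publish gate with human-in-the-loop review. The run_detector and request_human_review callables are hypothetical stand-ins for whatever detector and review tooling an organization actually uses, and the thresholds are purely illustrative.

```python
# Minimal sketch: a publish gate for media workflows (hypothetical APIs).
from typing import Callable

def publish_gate(media_path: str,
                 run_detector: Callable[[str], float],
                 request_human_review: Callable[[str, float], bool]) -> bool:
    """Return True only if the asset may go out via company channels."""
    score = run_detector(media_path)  # probability the asset is synthetic
    if score >= 0.8:                  # high suspicion: block outright
        return False
    if score >= 0.3:                  # grey zone: human-in-the-loop review
        return request_human_review(media_path, score)
    return True                       # low suspicion: auto-approve

# Stubbed usage: a real deployment would plug in an actual detector and
# a review queue instead of these placeholders.
approved = publish_gate(
    "campaign_clip.mp4",
    run_detector=lambda p: 0.45,
    request_human_review=lambda p, s: False,  # reviewer rejects
)
print("publish" if approved else "hold")
```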
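For the second step, one way to keep playbook deadlines actionable is to encode them as data rather than burying them in a PDF. The steps mirror the list above; the owners and exact deadlines are illustrative assumptions.

```python
# Minimal sketch: a synthetic-media playbook as checkable data.
# Owners and deadlines are assumptions, not ISACA guidance.
PLAYBOOK = [
    {"step": "rapid triage", "deadline_min": 30, "owner": "soc-on-call"},
    {"step": "forensic evidence preservation", "deadline_min": 60, "owner": "forensics"},
    {"step": "legal check (portrait rights, trademark)", "deadline_min": 120, "owner": "legal"},
    {"step": "initial communication", "deadline_min": 60, "owner": "comms-lead"},
]

def overdue(elapsed_min: int) -> list[str]:
    """Which playbook steps have blown their deadline?"""
    return [s["step"] for s in PLAYBOOK if elapsed_min > s["deadline_min"]]

# 90 minutes into an incident, triage and the first two 60-minute steps are overdue.
print(overdue(90))
```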
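For the third step, a minimal sketch of a call-back rule for payment authorizations. The limit, channel names, and logging setup are assumptions; the point is that a high-value voice or video request is never approved without out-of-band verification, and every deviation is logged centrally.

```python
# Minimal sketch: out-of-band ("call-back") verification for payments
# requested via voice or video. The call-back number must come from an
# independent source (e.g. the internal directory), never from the
# request itself. Limit and roles are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fraud-intel")

VOICE_VIDEO_LIMIT_EUR = 10_000  # above this, voice/video alone never suffices

def approve_payment(amount_eur: float, channel: str,
                    callback_confirmed: bool) -> bool:
    """Gate a payment request; log every deviation for threat intel."""
    if channel in ("voice", "video") and amount_eur > VOICE_VIDEO_LIMIT_EUR:
        if not callback_confirmed:
            log.warning("blocked: %s request of EUR %.2f without call-back",
                        channel, amount_eur)
            return False
    return True

# A 50,000 EUR "CEO video call" request is blocked until the requester is
# re-verified via a number from the internal directory.
print(approve_payment(50_000, "video", callback_confirmed=False))
```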

Those who smartly combine detection, governance, and skills turn the asymmetry around. Attackers only need one convincing clip; defenders need one well-practiced process to limit damage and gain trust. Dimitriadis summarizes: “You don’t protect digital trust with policy or training alone, but with a company-wide effort that links both.”


At the ISACA 2025 Europe conference, taking place from October 15 to 17 in London, ISACA is organizing a session to illustrate the problem, including a live demonstration with a synthetic version of the presenter. Interested parties will learn how to detect AI deception and what they can do against it. More details about the conference and the session can be found here.