Deepfake Detection

Deepfakes undermine the integrity of biometric verifications. Use technology and process reengineering to mitigate risk.

What should IAM leaders know about deepfake detection?

Rapid improvements in generative AI (GenAI) capabilities have made it relatively easy for attackers to create deepfake audio, still images or videos that are increasingly difficult for observers to detect and that can subvert biometric processes.

As a result, impersonation is now much more easily achieved, undermining the integrity of many remote interactions. This has implications for automated processes such as voice recognition in consumer contact centers, remote IDV at account opening or workforce onboarding, and biometric authentication in digital channels.

How can IAM leaders mitigate the risk of deepfake attacks?

IAM leaders should understand how real-time deepfake detection tools can provide useful risk signals in automated biometric processes but cannot be relied upon alone. Leaders should also understand how to augment these with additional layers of security. Furthermore, when it comes to the sprawling attack surface of person-to-person interactions, IAM leaders should engage with internal stakeholders and focus on compensating controls and hardening of business processes.

IAM leaders need to engage with business stakeholders to help them understand that this problem cannot be solved by simply purchasing new security tools. Instead, it will require organizationwide efforts to remove process vulnerabilities and to train employees to be the first line of defense.

IAM deepfake detection recommendations

  • Deploy Deepfake Voice Detection as Part of Your Voice Recognition Solution - In many cases, voice recognition vendors themselves have added their own deepfake detection capabilities, which are tightly integrated with their core products. In other cases, deepfake detection is offered as a stand-alone product that can be integrated alongside an existing voice recognition solution. Deepfake voice detection should be seen as an additional layer in a defense-in-depth strategy that includes signals such as validation of automatic number identification (ANI) data, SIM swap detection and correlation of phone numbers with identity.

  • Deploy a Layered Approach of Detection Capabilities in Face Recognition Processes - IAM leaders should select face recognition vendors that have implemented a layered approach to defending against attacks that may involve deepfakes, including:
    • Device and behavioral intelligence
    • Liveness detection
    • Image inspection
    • Screen detection
    • Emulator detection
    • Metadata inspection
    • Watermarks
    • Payload integrity
       
  • Harden Business Processes to Protect Employees Against Deepfake-Augmented Social Engineering Attacks - While adopting new technology such as real-time deepfake detection for (some) audio/video calls is worth exploring, IAM leaders should focus first on more fundamental layers of defense:
    • Employee behavior
    • Business process hardening
    • Authentication and verification flows
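
The layered approach recommended above amounts to treating each detection capability as one risk signal among several, and combining them into a single decision rather than trusting any one detector. The sketch below illustrates that aggregation pattern; all signal names, weights and thresholds are illustrative assumptions for this example, not the API of any specific vendor product:

```python
from dataclasses import dataclass

# Illustrative weights per risk signal; a real deployment would tune these
# against labeled attack data. Every name here is hypothetical.
SIGNAL_WEIGHTS = {
    "deepfake_score": 0.35,     # deepfake detection model output (0..1)
    "liveness_failed": 0.25,    # liveness / presentation-attack check failed
    "sim_swap_detected": 0.20,  # recent SIM swap on the claimed phone number
    "ani_mismatch": 0.10,       # ANI data inconsistent with the claimed caller
    "metadata_anomaly": 0.10,   # emulator or injected-capture metadata found
}

REVIEW_THRESHOLD = 0.30  # route to step-up verification
BLOCK_THRESHOLD = 0.60   # deny and escalate

@dataclass
class RiskDecision:
    score: float
    action: str  # "allow" | "step_up" | "block"

def assess(signals: dict) -> RiskDecision:
    """Combine weighted risk signals into one decision.

    Boolean signals contribute their full weight when True; float
    signals in [0, 1] contribute proportionally. Missing signals
    contribute nothing.
    """
    score = 0.0
    for name, weight in SIGNAL_WEIGHTS.items():
        value = signals.get(name, 0.0)
        score += weight * (1.0 if value is True else float(value))
    if score >= BLOCK_THRESHOLD:
        action = "block"
    elif score >= REVIEW_THRESHOLD:
        action = "step_up"
    else:
        action = "allow"
    return RiskDecision(round(score, 3), action)
```

For example, a strong deepfake score combined with a detected SIM swap (`assess({"deepfake_score": 0.8, "sim_swap_detected": True})`) lands in the step-up band rather than an outright block, reflecting the point that no single signal should be relied upon alone.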

Need more guidance on deepfake detection? We're discussing the latest insights on emerging IAM topics at Gartner Identity & Access Management Summit 2025, happening December 8 – 10, in Grapevine, TX.

Hear from Gartner IAM experts on what makes deepfake detection a critical topic in 2025 and 2026.
