
Are deepfakes getting too good?

AI might get too good for the normal checks within the next two years, experts say.


If deepfakes keep improving, cybersecurity professionals won’t be able to defeat them by asking someone on a video call to hold up three fingers or turn off filters, and they can’t check for a pulse like a harried physician in HBO’s The Pitt. What solutions can they turn to?

According to two industry leaders, this deepfake evolution, and the accompanying need to boost fraud detection, could arrive within the next two years.

Right now, if someone is using a live deepfake to try to deceive an organization, humans can recognize inconsistencies. For example, deepfaked people can have distorted edges, depending on the lighting, or their mouth movement may not quite match their speech.

John Ansbach, managing director at LevelBlue company Stroz Friedberg, which specializes in digital forensic and security investigations, told IT Brew that professionals are rapidly approaching a time when automated tools will have to take over deepfake detection from humans.

“I really do think it’s not going to be long before we are not going to play a role in detection. We’re going to rely on those tools, but we will have to have the good old-fashioned, ‘Are you in the same room?’ to be able to validate that you’re actually talking to a person,” Ansbach said. “It is an iterative fight, bad guys are getting better, good guys are getting better, rinse and repeat.”

The future of detection. Reinhard Hochrieser, SVP of product and technology at online identity verification platform Jumio, told IT Brew that a number of checks can help mitigate deepfakes and fraud in general. He said the future of detection could look like stricter controls by the companies that host videoconferencing, such as requiring video call participants to perform a “liveness check.”

These checks, according to identity intelligence company Microblink, could confirm that someone is physically present. Analyzing physical responses such as blinking and subtle facial expressions is already used to stop deepfaked video and photos; sensors can also detect the presence of a three-dimensional face.
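One of the physical responses mentioned above, blinking, hints at how a simple liveness signal could work in principle. The sketch below is purely illustrative (real systems like those Microblink describes are far more sophisticated): given per-frame eye-openness scores from some upstream facial-landmark model, it counts blinks and flags a feed that never blinks. The function names and the 0.2 threshold are assumptions, not any vendor’s actual API.

```python
# Illustrative sketch of a blink-based liveness signal.
# eye_openness: per-frame eye-aspect-ratio values from a hypothetical
# facial-landmark model; values near 0 mean the eye is closed.

def count_blinks(eye_openness, closed_threshold=0.2):
    """Count open-to-closed transitions across frames."""
    blinks = 0
    was_closed = False
    for value in eye_openness:
        is_closed = value < closed_threshold
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def passes_liveness(eye_openness, min_blinks=1):
    """A live human on a multi-second clip should blink at least once."""
    return count_blinks(eye_openness) >= min_blinks

# A feed with two dips in eye openness registers two blinks;
# a perfectly static face registers none and fails the check.
live_feed = [0.35, 0.34, 0.10, 0.33, 0.35, 0.08, 0.34]
static_feed = [0.35] * 7
```

In practice, blink counting alone is easy to spoof, which is why the article notes that production tools combine it with other cues, such as 3D face sensing.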


Other tools on the market focus on audio tracking and discrepancies, since most deepfakes are designed with audio content in mind. These tools alert participants who might be engaging with a deepfake.

Security experts like Ansbach are teaching professionals to look for facial inconsistencies and irregularities on video. Additionally, some tools may look for anomalous behavior in a particular experience.

“If I’m on a call with my chief revenue officer or my chief executive officer, and I have those calls on any regular basis, the tool is creating a baseline so that if one day I’m on and I think I’m on with my CEO, but it’s actually a deepfake, that tool that has compared the experience I’m having now with the baseline, sees an anomaly, and alerts me to that,” Ansbach said.
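The baseline comparison Ansbach describes resembles classic statistical anomaly detection. As a rough sketch, a tool might record numeric features of past calls with a given executive, then flag a new call whose features fall far outside the historical range. The feature names, values, and z-score threshold below are all illustrative assumptions, not details of any product Ansbach mentioned.

```python
import statistics

# Illustrative sketch of baseline-vs-current-call anomaly detection.
# Each past call is summarized as a dict of hypothetical numeric
# features (e.g., speaking rate, blink rate).

def build_baseline(past_calls):
    """Compute per-feature mean and standard deviation from history."""
    features = past_calls[0].keys()
    return {
        f: (statistics.mean(c[f] for c in past_calls),
            statistics.stdev(c[f] for c in past_calls))
        for f in features
    }

def is_anomalous(call, baseline, z_threshold=3.0):
    """Flag the call if any feature sits far outside its usual range."""
    for feature, (mean, stdev) in baseline.items():
        if stdev == 0:
            continue  # no variation in history; can't score this feature
        if abs(call[feature] - mean) / stdev > z_threshold:
            return True
    return False

history = [
    {"speech_rate_wpm": 150, "blink_rate_per_min": 17},
    {"speech_rate_wpm": 148, "blink_rate_per_min": 15},
    {"speech_rate_wpm": 153, "blink_rate_per_min": 16},
]
baseline = build_baseline(history)

# A call where the "CEO" barely blinks deviates sharply from baseline.
suspect_call = {"speech_rate_wpm": 151, "blink_rate_per_min": 2}
```

A real tool would track far richer signals, but the core idea is the same: compare the experience you are having now against a learned baseline and alert on the deviation.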

What to do right now. While AI isn’t capable of completely fooling everyone just yet, Hochrieser suggested that pros get in the habit of asking participants on video calls for liveness checks in real time. For example, they could ask a participant to do something unexpected, like show the camera their phone screen with the current time and date.

“Ask them to turn on the camera, ask them to turn off all filters,” Hochrieser added. “Of course, then think about those other tools, liveness detection and whatnot, but I think we are early in that stage.”

As AI continues to evolve, Ansbach stressed the importance of AI providers giving early access to models before releasing them to the public so that cybersecurity professionals can better prepare their defenses.

“If we’ve got folks that are developing LLMs and agentic AI solutions that are simply significant, powerful, and unconstrained and available to anybody, then we’re going to be in really bad shape and the tools will forever be playing catch up,” Ansbach said. “I think [the] responsible path forward is to be sure that those tools become available for legitimate use before they are made available to the public.”

About the author

Caroline Nihill

Caroline Nihill is a reporter for IT Brew who primarily covers cybersecurity and the way that IT teams operate within market trends and challenges.
