
Real-time deepfake hacks are coming soon, experts warn, and IT teams need to be ready

“I think we’ll probably see major headlines where people are getting burned by stuff like that, probably in the next six to 12 months,” one expert tells IT Brew.

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

Concerned about who’s on the other side of that Zoom call? Afraid it might be a sophisticated, real-time deepfake? You’re not alone.

Crypto projects were targeted by scams relying on the technology last year, when hackers reportedly used doctored videos of Binance CSO Patrick Hillmann to get money from unsuspecting blockchain users. But those videos were manipulated after the fact, not in real time.

IT Brew asked experts at RSA ’23 in San Francisco this April about the potential for danger from the new technology.

Who are you? Deepfake technology isn’t yet so easily available that we’ll regularly be navigating real-time fake interactions on video. But that day may be coming soon, as part of an ongoing social engineering approach deployed by threat actors looking to steal identities and credentials.

That doesn’t mean the technology doesn’t already exist, GitHub CSO Mike Hanley told IT Brew, or that motivated adversaries can’t access it.

“You can get a pretty reasonable fake now of somebody interacting and speaking with you in real-time on a Zoom for a price that’s easy for admission for a bad actor,” Hanley said.

But the cost-benefit analysis doesn’t always work out, Proofpoint EVP of cybersecurity strategy Ryan Kalember told IT Brew.

“You would have to do something extremely custom and extremely expensive,” Kalember said.

Hanley cautioned that while the potential of real-time deepfakes is real—and while he believes it’s closer to a reality than Kalember does—there are other issues at play.

Companies should take into account their own risk tolerance before adding it to their threat model. The realities of the post-pandemic work environment mean there’s high potential for deception, as most meetings are still held at least partially remotely.

“I think we’ll probably see major headlines where people are getting burned by stuff like that, probably in the next six to 12 months,” Hanley said.

Taking precautions. As IT Brew reported, it’s already happening on the audio side. In March, a Motherboard reporter was able to break into his own bank account using an AI voice generator that mimicked his voice well enough to fool the bank’s voice-ID system. The generative AI company behind the technology, ElevenLabs, has since introduced restrictions on access to that feature.

Users who are concerned about the potential for deepfakes have options. Basic security hygiene and protocols of the kind used by successful IT teams can go a long way. But as Unit 42 Director of Strategic Engagement and Analysis Jen Miller-Osborn told IT Brew, what’s going to be most effective is boring and time-consuming identity verification.

“Confirm it with an actual living person before you take it at face value,” Miller-Osborn said. “Yes, it might take a few extra minutes—that’s what we’re moving towards.”
