Cybersecurity

Here’s what deepfake security awareness simulations look like in 2025

We tried out training content focused on AI-powered attacks from three different SAT companies.


You can’t learn to swim by reading about water. A similar principle applies to education around combating deepfakes and other AI-powered cyberattacks: reading tutorials and watching instructional videos doesn’t have the same impact as experiencing a deepfake incident in real time.

That’s why we reached out to three security awareness training (SAT) companies currently focusing their attention on the growing threat of deepfakes, which led to $347.2 million in direct losses in Q2 2025 (per Resemble.ai’s quarterly Deepfake Threat Intelligence report). The goal? To see whether the industry’s educational content is keeping pace with the rapidly evolving deepfake threat.

Seeing double. The first stop in my quest: Adaptive Security, a security awareness training company that specializes in AI-powered attacks. Adaptive’s website touts its ability to engage employees with “captivating” deepfake and AI content. To put this assertion to the test, I asked the company if it would demo its offering by creating a deepfake of an IT Brew reporter introducing themselves for an interview with Adaptive CEO Brian Long about—you guessed it—deepfake technology.

An Adaptive spokesperson instructed me to send over a 30-second video of myself talking naturally, adding that it would be best if the video was filmed in good lighting with me facing the camera. So, I grabbed my iPhone and filmed a quick video reciting their instructions, then reading a few lines from a recent article of mine.

Less than 24 hours later, the company delivered its deepfake video, which Long said only took about five minutes to make. Adaptive—which Long said works with “hundreds and hundreds” of companies, including healthcare and government organizations—only needs a single picture of the subject and about 10 seconds of audio to create a deepfake, he added.

What greeted me when I pressed play on the video? Myself, but slightly different. While the person looking back at me had my cadence, I couldn’t help but notice how frequently the deepfake persona bobbed around unnaturally as it introduced itself, plus how the lighting on the dupe’s face didn’t fit the background: two telltale signs of deepfake content.

Otherwise, it was pretty convincing, according to loved ones, who told me it wasn’t too far off from the real thing. Mitek Systems Chief Scientific Officer Konstantin Simonchik had similar thoughts.

“Looking at the video that you shared today, I found that this service is actually much more advanced right now, and looks much more natural compared to what I observed about a year ago,” Simonchik said. “So, very great progress on quality.”

Long said Adaptive assists companies by analyzing areas where risks are present, simulating AI-powered attacks based on that information, and providing “interactive” training modules to employees. One simulated attack, for example, is a phone call from a fake version of a real person within the company who tries to persuade the employee to do something nefarious.

“We use these tools that you can put in for anyone and make these dossiers that understand what’s out there about them, and then we also can use that information to say, ‘Hey, how would you attack this person?’ which is really what the attacker is going to do, too,” Long said.

The proportion of users who fail their first simulated attack is a “high double-digit percentage,” according to Long. However, he said, repeated training can lower this failure rate to less than 10%.

Sounds about right. Next on the list was a chat with Jason Thatcher, founder of Breacher.ai, a security awareness company that does deepfake red teaming and phishing simulations.

“We’re basically like ethical hackers using deepfake, but we do some awareness training, as well,” Thatcher told me during our video call, before sending over a link to sample Breacher’s agentic AI deepfake educational bot.

The process was simple. An automated voice guided me through recording a voice sample, asking me about various topics (such as what I find most interesting about being a cybersecurity reporter, and how I go about finding sources for stories). After that, I was able to test out a simulated vishing attack executed via a Teams call with myself.

“Hi, it’s Brianna Monsanto. I need your help with something,” the rip-off Brianna said, before revealing it needed help with a transaction for some foreign clients.

I tested out the simulated attack several times. Each time, the voice on the other end of the line had a reasonable response to all of my questions.

“I would have preferred to ask you in person, but this is extremely urgent, and I’m currently tied up in meetings. I figured a quick call would be the fastest way to get the ball rolling. Can you do it?” the deepfake said when I asked why she didn’t mention the favor when we spoke in person.

“If it helps, can you confirm your employee ID so I can verify it against our records? Or what would make you feel more secure that it’s me?” the voice responded when I said it sounded a bit different, then asked how I could be sure about its identity.

At one point, the deepfake voice asked if I was willing to proceed with its request instead of “making baseless accusations” about it being a deepfake. (The irony.)

What was interesting about interacting with Breacher’s deepfake was how effectively it assuaged my concerns while hammering home the importance of the task at hand. Thatcher told IT Brew that this is common among deepfake attacks.

“Usually, they’ll attempt to apply pressure or urgency, so teaching people to just basically take a step back and slow down is really important,” Thatcher said. Other good precautions, he added, include teaching employees not to take things at face value and to use out-of-band verification, in which a request is confirmed through a second, trusted communication channel, much like two-factor authentication.
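
For readers who want to picture that, here’s a minimal sketch of the idea in code. It’s a hypothetical illustration, not any vendor’s implementation: the helper names and print-based “channels” stand in for a real MFA push or a callback to a directory-listed number. The logic is the one Thatcher describes: park the request that arrives on one channel, and act only after a one-time code checks out on a second.

```python
import secrets

# Pending high-risk requests: requester -> (one-time code, requested action).
PENDING: dict[str, tuple[str, str]] = {}

def receive_request(requester: str, action: str) -> None:
    """Channel 1: a request arrives by call or email. Don't act on it yet;
    park it and send a one-time code over a second, independent channel."""
    code = secrets.token_hex(3)
    PENDING[requester] = (code, action)
    # In practice this would be an MFA push or a callback to a
    # directory-listed number, never to contact details supplied
    # during the suspicious call itself.
    print(f"[channel 2] sent code {code} to {requester}'s verified device")

def confirm(requester: str, code_read_back: str) -> bool:
    """Act on the request only if the code relayed back over the
    second channel matches the one we issued."""
    code, action = PENDING.pop(requester, (None, None))
    if code is None or code_read_back != code:
        print("Verification failed: escalate to security, do not proceed.")
        return False
    print(f"Verified: {requester} really did request '{action}'")
    return True

if __name__ == "__main__":
    # A caller claiming to be "brianna" asks for a wire transfer...
    receive_request("brianna", "approve a transaction for foreign clients")
    # ...but an impostor on the first channel can't read the code back.
    confirm("brianna", "not-the-code")
```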

Voice command. Finally, we caught up with Thomas Le Coz, CEO and co-founder of Arsen Security, which offers phishing, vishing, and SMS phishing simulations in addition to security awareness training. Le Coz told IT Brew that deepfakes are a “natural evolution” in the world of social engineering. He offered up Arsen’s vish-to-phish simulation to demonstrate the sophistication of current attacks.

“The goal of the voice is to make you click on the link,” Le Coz said.

Shortly after, I received a phone call from “Michael from the IT department,” who alerted me that someone had accessed my Microsoft account and asked if I had done anything “unusual” recently. Upon confirming no strange activity on my end, the fake IT worker told me that he’d need to secure the account and would send an email with instructions to download a security patch.

Like Breacher’s vishing simulation, “Michael” was quick to address my worries.

“I understand your concern,” Michael said when I told him I was unsure if responding to his pending email would adhere to company policy. “Logging in is necessary to patch the vulnerability. The email will guide you through this. It’s safe and company approved.”

Michael’s voice was pretty realistic. I could see how an unsuspecting employee would fall for the phish after chatting with him. Le Coz confirmed that many folks have, even back when the voice models were less advanced than they are today. When Arsen ran these simulations a few years ago, a number of employees would hang up after hearing the voice on the other end, only to get phished by the follow-up email.

“What we saw is, even if they felt something was weird…the click-rate more than doubled,” Le Coz said, adding that some employees end up clicking because they think they overreacted or overthought the call.

Where do we go from here? One deepfake and two vishing calls later, I felt like I had narrowly escaped an episode of Black Mirror.

While I was able to tell something was off about the “people” I spoke with during these calls, deepfake experts told me the technology is quickly evolving to make deepfakes sound even more natural. When I asked iProov CTO Dominic Forrest when deepfakes would become undetectable, he responded by showing me a video he made of himself with a free tool. In the video, Forrest slowly moves his head and puts on his glasses: nothing out of the ordinary, except that his face swaps with another person’s every few seconds, transforming his appearance.

As if that wasn’t telling enough, Forrest also referred me to an online quiz on the iProov website where people can test their ability to identify deepfake content. Despite expecting to pass with flying colors, I was quickly humbled by my 7/10 score.

“You can go on video calls such as this one...and appear as somebody else,” Forrest said. “I can’t tell the difference these days. So, I think the time…of looking for this type of deepfake by eye [has] long passed unfortunately.”

While Le Coz didn’t have a definitive answer for what cybersecurity professionals can do once deepfakes become fully indiscernible, he shared a few changes he expects down the line.

“I do think that we will have much more face-to-face events, much more physical interaction, because it’s going to be harder and harder to trust remote relationships.”
