How did evolving GenAI threats impact IT in 2025?
Attackers may have had the upper hand this year, one industry leader says.
Caroline Nihill is a reporter for IT Brew who primarily covers cybersecurity and the way that IT teams operate within market trends and challenges.
In 2025, bad actors leveraged AI and other tools to attack IT infrastructure and cause chaos. Experts are divided on whether the industry has been successful in stopping these assaults.
Given all the chatter around AI, it’s easy to forget that mainstream adoption of the technology is still relatively new; OpenAI released ChatGPT in 2022, which kicked off a process of AI development and integration. Now there’s an arms race between attackers and cybersecurity pros, with big questions on who can deploy the latest generation of cybersecurity AI tools the fastest.
Between automated attacks and deepfakes meant to fool targets, the IT industry has been faced with the tall order of addressing these threats quickly.
Making haste. The last year has seen organizations adopt generative AI to disrupt cybersecurity threats.
BforeAI Principal Product Manager Andre Piazza told IT Brew that, with the emergence of GenAI, his company saw hackers extract information from digital profiles and websites to create AI-generated content like fake profiles, phishing emails, and more. Additionally, attackers can use AI to analyze attack surfaces.
“Traditionally, hackers have been using AI to actually assess targets so they’re digitally going to profiles, to websites, and they’re extracting information that makes them better at creating an attack,” Piazza said. “With GenAI, they can scale those attacks.”
For example, Piazza said, attackers can automatically deploy domains for attacks, complete with IT infrastructure, resources, and connections to particular clouds.
“They want to mimic a particular bank, they actually download the website, including plug[-in]s—JavaScript, everything—and they’re able to get all of that and redeploy and create something that has the same look and feel,” Piazza said. “Then, they use GenAI to generate an email that is phishing for credentials and all that routes people over to [the website] using stolen email lists.”
Wait, who are you people? Attackers’ use of AI extends to another technology area virtually unknown a decade ago: deepfakes.
Piazza and Reinhard Hochrieser, Jumio’s SVP of product and technology, agreed that the past year saw rising deepfake technology threats. “All the different companies [are] releasing new GenAI models for everyone, so [now] you can produce simple videos, but you can use that technology for committing fraud or committing a crime as well,” Hochrieser said.
It’s never been easier to impersonate someone else, especially over video and voice calls, something that Hochrieser said he didn’t expect years ago.
In the early 2000s, threats shifted from the virus era of the ’90s to credit card breaches and ransomed systems, especially once bad actors realized that cybercrime could have significant financial benefits, according to Codecademy.
The 2010s saw nation states carrying out increasingly sophisticated cyberattacks, including “infiltration and surveillance campaigns and deployed cyber weapons to attack strategic objectives,” Codecademy added. Meanwhile, hacker collectives and criminal groups targeted corporations and governments with ransomware, data theft, and more.
A decade ago, Hochrieser said, someone who wanted to commit fraud would have to invest a lot of effort with tools like Photoshop. While those tools are still in play, AI has allowed bad actors to create fake and malicious content at far larger scales.
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
“You can artificially generate videos with just a single prompt within seconds,” Hochrieser said. “As of today, this is possible with very little computing power, which means the investment as a criminal is very little.”
During an interview with IT Brew, Hochrieser shared a deepfake that he had created to make himself look like one of his colleagues. He said that the entire process of creating it took two minutes, and he created it with a photo from a colleague’s Instagram account.
“It’s funny, but actually it’s super scary,” Hochrieser said. “If I can do that, and I’m not a super technical person, you can imagine that if you are more familiar with the technologies, that you can scale that up super fast. And you can start harvesting data from Instagram, data from Facebook.”
Productivity, at its finest. The ability to delegate lower-level help desk tasks to generative AI tools gives cybersecurity experts and others the ability to focus on tasks that “only human beings can do,” according to Elyse Gunn, tCISO at Nasuni.
One of the benefits to the “AI boom,” Gunn said, is that the technology can enable cybersecurity professionals to focus on critical thought and analysis.
“AI enables us to put them on the reactive front, so that my team, my people, can be on the proactive front,” Gunn said. “Which is really where we start to build the business and become a business driver and a business enabler when we’re being proactive—rather than having to have our heads on a swivel and combat every little thing that pops up tens if not hundreds of times a week at an organization.”
So, how’d we do? Kara Sprague, the CEO of HackerOne, thinks generative AI can give attackers an advantage over defenders.
She said this largely has to do with organizations leveraging generative AI to produce code that was “less secure or contained more vulnerabilities in it than code generated by humans.”
Additionally, Sprague pointed to attackers’ increasing speed.
“Cybercriminals, they don’t have a legal department they have to work through,” Sprague said. “They don’t have to worry much about the safety of the AI solutions that they’re adopting, they don’t have to worry about the ongoing maintenance and production of these things, so they’re able to move with more speed in deploying these things and these tools.”
Piazza said that, while many companies have wrestled with effectively adopting AI, defenders are adopting newer tools like agentic AI, often via a security operations center.
“There are various uses of AI that came out of the most recent wave, and those tools are really helping people get more efficient at the things that they already did in the past, or in the case of predictive AI, getting better at doing things that they never did before,” Piazza said. “Which is, predict attacks before they happen.”