Skip to main content
Cybersecurity

The AI attack surface is increasing, but so is scalable AI-powered defense

The organization reported an 89% increase in AI-enabled adversary attacks year-over-year.

3 min read

TOPICS: Cybersecurity / AI & Emerging Tech / AI in Security

It’s no secret that AI presents unique challenges for security professionals—but are AI-based defenses keeping up?

In CrowdStrike’s 2026 Global Threat Report, experts point to noteworthy increases in threats from artificial intelligence between 2024 and 2025. The organization reported an 89% increase in AI-enabled adversary attacks year-over-year, for example.

Adam Meyers, SVP of counter adversary operations at CrowdStrike, told IT Brew that while adversaries are using AI to their advantage, enterprises relying on AI for defense are beginning to scale “for the first time, really ever.”

“We’ve always had this defender’s dilemma where the defender has to be right 100% of the time, and the bad guy only needs to get lucky once,” Meyers said. “What happens through AI, you can actually start to invert that and…change the dynamic and say, ‘Okay, since we have these AI capable tools now…we can enable a defender to operate much quicker.’”

What’s your vector, Victor? Meyers said that “a lot of people don’t recognize that…AI in particular is hyper competitive.” With companies racing to release AI products as quickly as possible, employees often don’t spend much time ensuring governance and security markers like logging or an audit trail.

“You could actually get into these models and figure out who’s doing what with the model and that creates some problems for the defenders, and it’s something that’s being attacked,” Meyers said. “AI is a double-edged sword. On one hand, it is absolutely being used by enterprises, but it’s opening up their attack surface.”

Surely, we can defend against this somehow. AI tools also allow defenders to keep pace with the speed of attackers. But enterprises should ensure the security of their own AI policies, too.

CrowdStrike’s report recommends that organizations monitor employee use of AI tools, enforce and use data classification rules to “prevent sensitive data leaks.”

“These measures should also include securing homegrown AI workloads from runtime attacks (such as prompt injection), assessing the security of external vendors, and requiring secure configurations and vulnerability assessments for new AI products and their dependencies,” the report stated. “To depend against AI-enabled threats, organizations should develop clear incident response responsibilities and business continuity plans.”

Through detecting malware and anomalies, as well as preventing fraud and summarizing incidents, bigger tech providers like Microsoft are using AI to also help with clients’ cybersecurity efforts.

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

By subscribing, you accept our Terms & Privacy Policy.

About the author

Caroline Nihill

Caroline Nihill is a reporter for IT Brew who primarily covers cybersecurity and the way that IT teams operate within market trends and challenges.

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

By subscribing, you accept our Terms & Privacy Policy.