Cybersecurity

RSA 2024: Thanks to data, does AI favor defenders?

In an AI fight between defender and attacker, those with the data have the advantage.

Francis Scialabba


Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

There’s a feeling among prominent IT pros—including some who spoke at RSA Conference 2024 last week—that generative AI and large language models benefit cybersecurity pros in the short term.

Why? Because defenders have something that attackers frequently lack: lots and lots of context-specific data.

“Nobody would debate that defenders have more data about their environments, what’s going on in them, how it’s configured. To me, whoever has more data wins in the long term. Because these technologies are benefiting from that type of learning,” Anton Chuvakin, Google Cloud’s security advisor for the office of the CISO, told IT Brew.

A recent study from Google Cloud and the Cloud Security Alliance found that 67% of 2,486 IT and security professionals said they’ve already tested AI for security-specific purposes, including rule creation, attack simulation, and compliance violation detection.

A panel of security pros—including Heather Adkins, VP of security engineering at Google; Daniel Rohrer, VP of software product security, architecture, and research at Nvidia; and Bruce Schneier (who spoke with IT Brew in April), security technologist, researcher, and lecturer at the Harvard Kennedy School—met onstage at RSA 2024 to discuss the range of AI risks (like automated malware) and AI benefits (like automated defenses).

In a February 2024 blog post, Microsoft detailed how Russian and North Korean threat actors used large language models for tasks like script writing, vulnerability research, and social engineering.

While defenders and attackers have equal access to large language models, Schneier said that in the near term he’s “very much believing that AI will help the defense more than the offense.” (“Long term? No idea,” he told the crowd.)

“In our industry, we have a data problem, we have a human resources problem. We can apply these technologies in ways the attackers can’t,” Schneier said.

Some data difficulties may exist in a security operations center (SOC) that contains threat intel from a variety of firewalls and intrusion-detection devices.

Rohrer sees a technology like large language models, built to pull concise insight from aggregated, disparate data sources, as helpful for context-driven security and valuable to a SOC responder trying to make sense of the day’s alerts, or an operations pro asking what the most important patch of the day is.

“We have a very rich data source which, uncommonly, is an asymmetry in favor of the defender that we don’t see very often,” Rohrer told the panel.
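The alert-triage workflow Rohrer describes—pulling disparate SOC data into one place so a model can reason over it—can be sketched in miniature. This is a hypothetical illustration, not any vendor's implementation: the `Alert` type, the source names, and the `summarize_alerts` helper are all invented for the example, and the output is a prompt a team might hand to a language model rather than a call to any real API.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Alert:
    source: str    # e.g. "firewall", "ids"
    severity: int  # 1 (low) .. 5 (critical)
    message: str

def summarize_alerts(alerts, top_n=3):
    """Aggregate disparate SOC alerts into a single triage prompt."""
    by_severity = sorted(alerts, key=lambda a: a.severity, reverse=True)
    counts = Counter(a.source for a in alerts)
    lines = [f"Alert volume by source: {dict(counts)}"]
    # Surface only the highest-severity alerts to keep the prompt concise
    lines += [f"[sev {a.severity}] {a.source}: {a.message}" for a in by_severity[:top_n]]
    lines.append("Question: which alert should the responder triage first, and why?")
    return "\n".join(lines)

alerts = [
    Alert("firewall", 2, "blocked outbound connection to known C2 domain"),
    Alert("ids", 5, "possible lateral movement via SMB from host 10.0.0.12"),
    Alert("firewall", 3, "port scan detected from external IP"),
]
print(summarize_alerts(alerts))
```

The point of the sketch is the asymmetry the panel described: only the defender can assemble this kind of environment-specific context in the first place.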

The RSA panel avoided making confident, bold predictions about how a cutting-edge technology will be used—a question humans frequently get wrong, according to Adkins, who proposed “constant watchfulness” over guessing at the future and betting on how attackers and defenders will use the technology.

“As an industry, we should continuously reevaluate how we’re using it, how it’s being secured, how it’s safe, and what we as a society are comfortable with,” Adkins told the crowd.
