AI

Why we’re nowhere near the stage where AI replaces the human cyberanalyst

Two researchers are testing deep-learning algorithms to stop cyberattacks. The results are promising, both for the AI and the human.

Sure, humans click on too many links, forget to patch systems, and too frequently make “password” their password, but AI has its limitations too—and data scientists Mahantesh Halappanavar and Samrat Chatterjee are trying to figure out what they are.

The two analysts, along with their team at the Pacific Northwest National Laboratory (PNNL), have been testing how well deep-learning algorithms can stop intruders as they make moves inside a network.

With training examples in the OpenAI Gym simulation environment, the researchers found that one algorithm in particular—Deep Q-Network—frequently prevented a simulated adversary from reaching the exfiltration stage of an attack.
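A full Deep Q-Network needs a deep-learning stack, but the reinforcement-learning loop it rests on can be sketched in plain Python. The toy below is not the PNNL team's code or environment: it is a made-up five-state intrusion game (all state, action, and reward choices here are illustrative assumptions) where a tabular Q-learning defender learns to keep a simulated intruder from reaching an "exfiltration" state.

```python
import random
from collections import defaultdict

# Hypothetical stand-in for the kind of attack simulation described above:
# states 0..4 track an intruder's progress, with state 4 standing in for
# the exfiltration stage. Each step the defender chooses an action:
# 0 = monitor (cheap, weak) or 1 = isolate (costly, strong).

STATES, ACTIONS = 5, 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action, rng):
    """Advance the simulation one step; returns (next_state, reward, done)."""
    block_prob = 0.8 if action == 1 else 0.3   # isolating blocks more often...
    cost = -0.1 if action == 1 else 0.0        # ...but has an operational cost
    nxt = max(state - 1, 0) if rng.random() < block_prob else state + 1
    if nxt == STATES - 1:
        return nxt, -10.0 + cost, True         # intruder exfiltrated: big penalty
    return nxt, 1.0 + cost, False              # survived another step

def train(episodes=3000, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)                     # Q[(state, action)] value table
    for _ in range(episodes):
        s, done, t = 0, False, 0
        while not done and t < 50:
            # Epsilon-greedy: usually exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < EPS:
                a = rng.randrange(ACTIONS)
            else:
                a = max(range(ACTIONS), key=lambda x: q[(s, x)])
            s2, r, done = step(s, a, rng)
            best_next = 0.0 if done else max(q[(s2, x)] for x in range(ACTIONS))
            # Standard Q-learning update toward the bootstrapped target.
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s, t = s2, t + 1
    return q

q = train()
# With the intruder one move from exfiltration, the learned values should
# favor the aggressive "isolate" action over passive monitoring.
print(q[(3, 1)] > q[(3, 0)])
```

A DQN replaces the lookup table `q` with a neural network so the same update rule scales to state spaces far too large to enumerate, which is what makes the approach compute-intensive.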

But IT won’t go fully autonomous anytime soon. That requires a lot of computing power:

“If you want to optimize across a space in a really reliable manner, it’s just going to be an extremely compute-intensive problem. So, we’ll have to accordingly adjust what we can do,” Halappanavar, chief computer scientist at PNNL, told IT Brew.

The two spoke about other reasons why the human touch will stay essential to cybersecurity operations.

The interview below has been edited for length and clarity.

Why are we “nowhere near that stage where AI can replace human cyber analysts”?

Chatterjee: There are different types and levels of decisions that need to be made in order to secure a cyber system. Some decisions are maybe more amenable at human speeds; some decisions, just by design, need to be made at machine speeds…Maybe what is a practical vision for the future is where the AI agent is a teammate that the human can rely on.

Do you have examples where AI is particularly supportive, in situations that require faster response time?

Chatterjee: One quick example is the spam filter in your inbox. There’s no human selecting emails and checking whether they should go in a spam folder or in your primary inbox.

How do you envision humans and AI working alongside each other to support cybersecurity?

Halappanavar: Just because I turned my computer on, it is going to have vulnerabilities…AI, just because of the way it is, can read volumes and volumes of text, hunt down, do all this automated processing. That could also be a great place: [AI] will just present everything to the human being in terms of making these decisions. And the human being can now make informed decisions.

So, in that case, will the AI say, “There’s a vulnerability here. Do you want to take action on it?”

Halappanavar: Yeah, it could give you the whole thing: “This is how the vulnerability can be exploited. And here, these are the mitigation actions”…You can just push a button, and then the whole upgrade gets installed. Or it could say, “Oh, you really need to remove this server now.” But maybe this is your payroll server, and you can’t really shut it down!

What’s “hard” about the cybersecurity problem?

Halappanavar: I think “hard” might be more about finding this trade-off where you want to be as operationally efficient as you can, while you are equally secure doing it. So, you can make it extremely hard and remove [a device] from the internet, [but] it’s not going to be a useful system for you. Or you could put up 10 redundant servers, so you’re always secure and safe, but it’s going to be extremely expensive for you.—BH
