Sure, humans click on too many links, forget to patch systems, and too frequently make “password” their password, but AI has its limitations too—and data scientists Mahantesh Halappanavar and Samrat Chatterjee are trying to figure out what they are.
The two analysts, along with their team at the Pacific Northwest National Laboratory (PNNL), have been testing how well deep-learning algorithms can stop intruders as they make moves inside a network.
Training agents in the OpenAI Gym simulation environment, the researchers found that one algorithm in particular, Deep Q-Network, frequently prevented a simulated adversary from reaching the exfiltration stage of an attack.
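The idea can be sketched in miniature. The snippet below is not PNNL's code: it uses tabular Q-learning (the simpler ancestor of Deep Q-Network, with a lookup table in place of a neural network) on an invented toy scenario, a five-host chain where an attacker advances one hop per step toward an exfiltration host and a defender agent learns which host to isolate. All host counts, rewards, and dynamics here are illustrative assumptions, not the lab's threat model.

```python
import random

N_HOSTS = 5            # hypothetical chain; host 4 is the exfiltration point
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3   # learning rate, discount, exploration

def step(attacker, action):
    """Toy dynamics: the defender isolates host `action`, then the
    attacker tries to advance one hop along the chain."""
    if action == attacker + 1:       # defender blocked the next hop
        return attacker, 1.0, True
    attacker += 1
    if attacker == N_HOSTS - 1:      # attacker reached exfiltration
        return attacker, -1.0, True
    return attacker, 0.0, False

def train(episodes=3000, seed=0):
    rng = random.Random(seed)
    # Q[state][action]: state = attacker's host, action = host to isolate
    Q = [[0.0] * N_HOSTS for _ in range(N_HOSTS)]
    for _ in range(episodes):
        s = rng.randrange(N_HOSTS - 1)   # random attacker start position
        done = False
        while not done:
            # epsilon-greedy: explore sometimes, otherwise act greedily
            a = (rng.randrange(N_HOSTS) if rng.random() < EPS
                 else max(range(N_HOSTS), key=lambda x: Q[s][x]))
            s2, r, done = step(s, a)
            target = r if done else r + GAMMA * max(Q[s2])
            Q[s][a] += ALPHA * (target - Q[s][a])
            s = s2
    return Q

Q = train()
# Greedy policy: for each attacker position, which host does the agent isolate?
policy = [max(range(N_HOSTS), key=lambda a: Q[s][a]) for s in range(N_HOSTS - 1)]
print(policy)   # learned behavior: isolate the host one hop ahead of the attacker
```

In this toy setting the agent converges on the obvious cutoff strategy; the appeal of the deep-learning version is that a neural network can approximate Q-values over network states far too large for any lookup table, which is also where the compute cost Halappanavar describes comes in.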
But IT won’t go fully autonomous anytime soon. That would require a lot of computing power:
“If you want to optimize across a space in a really reliable manner, it’s just going to be an extremely compute-intensive problem. So, we’ll have to accordingly adjust what we can do,” Halappanavar, chief computer scientist at PNNL, told IT Brew.
The two spoke about other reasons why the human touch will stay essential to cybersecurity operations.
The interview below has been edited for length and clarity.
Why are we “nowhere near that stage where AI can replace human cyber analysts”?
Chatterjee: There are different types and levels of decisions that need to be made in order to secure a cyber system. Some decisions are maybe more amenable to human speeds; some decisions, just by design, need to be made at machine speeds…Maybe what is a practical vision for the future is where the AI agent is a teammate that the human can rely on.
Do you have examples where AI is particularly supportive, in situations that require faster response time?
Chatterjee: One quick example is your spam filter in your inbox. There’s no human selecting emails and checking whether they should go in a spam filter or in your primary inbox.
Read more here.—BH