Software

Microsoft introduces red team augmentation via AI

“AI is a support tool, it is not good at original content—it is good at summarizing a complex problem,” one expert says.
article cover

Francis Scialabba

3 min read

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

Red team, go—with help from AI.

In February, Microsoft and OpenAI introduced a new open automation framework to assist security professionals in managing risk. The Python Risk Identification Toolkit, or PyRIT, provides AI for red teaming, by leveraging the generative capabilities of the technology to expose vulnerabilities.

Steve Winterfeld, advisory CISO with Akamai Technologies, told IT Brew that the new tools are helpful for filling in the gaps for organizations that need assistance with cybersecurity. It’s difficult to find someone who has skill in attacking webpages, social engineering, hacking, and more.

“The thought process is now that a generative AI can do that,” Winterfeld said.

Open source-ame. In a blog post announcing PyRIT, Microsoft said the system provides a more probabilistic result than traditional red-teaming: “Put differently, executing the same attack path multiple times on traditional software systems would likely yield similar results. However, generative AI systems have multiple layers of non-determinism; in other words, the same input can provide different outputs.”

For Joseph Thacker, principal AI engineer at AppOmni, PyRIT represents an opportunity for smaller companies who don’t necessarily have the appropriate access to internal AI tools larger organizations do. The open-source framework Microsoft has introduced with PyRIT allows it to test out its features while lowering the barrier to entry for AI red teaming.

“Most successful companies these days come up with a product so that it’s solved once for everyone instead of everyone solving it independently—that’s exactly what this is doing,” Thacker said. “Instead of expecting every company to have to build their own framework internally to test or generative AI systems, Microsoft is releasing a tool that any company can use to speed up their process for testing their generative AI features for safety and security.”

No replacement. Microsoft cautioned that the tool isn’t meant to replace the human element of control over the red team process. Rather, the tool is meant to augment existing tactics and expose flaws in the process.

“However, PyRIT is more than a prompt generation tool; it changes its tactics based on the response from the generative AI system and generates the next input to the generative AI system,” according to the company blog post. “This automation continues until the security professional’s intended goal is achieved.”

Winterfeld agreed. He told us that AI should be seen solely as a way to add to capabilities. Expecting more out of the technology isn’t realistic.

“AI is a support tool, it is not good at original content—it is good at summarizing a complex problem,” Winterfeld said. “And again, ChatGPT and most of the OpenAI stuff has not been trained to focus on cybersecurity. So, they may help you write a social engineering email, but if you just go in and say, ‘Write a social engineering email to attack Akamai,’ it’s not going to be able to give you anything useful.”

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.