IT Strategy

Shadow AI must be considered in cybersecurity strategy

“If someone tells you that AI is magic, don’t buy into it,” CEO says.


Eoin Higgins is a reporter for IT Brew whose work focuses on the AI sector and IT operations and strategy.

What we do in the shadows might involve AI—and that’s an added stress for IT teams.

“Shadow AI”—when staff deploy AI tools without giving their in-house administrators and cybersecurity experts a heads up—is an ongoing issue for organizations. For IT departments, that means it’s necessary to establish rules and regulations, Silverfort CISO John Paul Cunningham told IT Brew. But time often isn’t on the side of IT pros when it comes to securing top-down control over a company’s AI usage.

“AI is here to stay; at the rate that new things are being implemented, it’s just incredibly hard for security teams to catch up,” Cunningham said. “What we really need as an industry is time to allow the security tools and the capabilities of security to catch up.”

Standard operations. One of the keys to handling shadow AI is making sure your team can monitor AI usage and maintain standards across the organization. Deploying AI tools without the proper oversight infrastructure around them can lead to problems; Cunningham said he prefers to look to the past for effective strategies.

“The pattern of least privilege and zero trust that we’ve implemented in the Old World still is applicable to the New World,” Cunningham said. “The pattern of segmenting access and data and the scope of a program, a structured program, an old school program, is still true with AI today, and we need to apply all these patterns and learnings to AI.”
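To make that old-school pattern concrete, here’s a minimal sketch of least privilege applied to AI integrations: each tool gets its own identity with an explicit allowlist, and anything unregistered, shadow AI included, is denied by default. All names and policies here are illustrative assumptions, not any vendor’s actual API.

```python
# Illustrative only: a toy least-privilege check for AI tools' service accounts.
# POLICIES and check_access are hypothetical names, not a real product's API.

POLICIES = {
    # Each sanctioned AI integration gets its own identity,
    # scoped to the minimum data and actions it needs.
    "summarizer-bot": {"datasets": {"support-tickets"}, "actions": {"read"}},
    "code-assistant": {"datasets": {"internal-wiki"}, "actions": {"read"}},
}

def check_access(identity: str, dataset: str, action: str) -> bool:
    """Zero-trust style check: deny unless this identity is explicitly allowed."""
    policy = POLICIES.get(identity)
    if policy is None:
        return False  # unknown identity: shadow AI gets no implicit access
    return dataset in policy["datasets"] and action in policy["actions"]

assert check_access("summarizer-bot", "support-tickets", "read")
assert not check_access("summarizer-bot", "hr-records", "read")     # out of scope
assert not check_access("unregistered-tool", "support-tickets", "read")
```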

Adding tools. The ubiquity of AI tooling is leading to more specific use cases, which, in turn, makes those tools more attractive to workers looking to offload parts of their jobs. But there’s a gap between employees’ AI capabilities and their cybersecurity knowledge. Ido Gaver, Sweep.io CEO and co-founder, told IT Brew that trust issues often stem from mistakes rather than malicious action.


“The tolerance for mistakes is different between different departments,” Gaver said. “What we’re seeing is that the people that are actually driving the change are CIOs or VPs of systems that know that there’s a lot of noise in the market.”

No matter the case, if you can’t control a tool, you shouldn’t add it, and you shouldn’t let your staff import it secretly as shadow AI, Gaver said. Knowing which tools do what, and which are safe, is important for IT teams; IT pros can keep tabs on what’s out there in order to get ahead of misuse.
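One way to “keep tabs,” for instance, is to watch egress or proxy logs for traffic to known AI endpoints. The sketch below is illustrative only; the hostname list and log format are assumptions for the example, not a vetted detection rule set.

```python
# Illustrative only: flagging possible shadow-AI traffic in proxy logs by
# matching known AI API hostnames. Hostnames and log format are assumed.

AI_HOSTS = {"api.openai.com", "api.anthropic.com",
            "generativelanguage.googleapis.com"}

def flag_shadow_ai(log_lines):
    """Yield (user, host) pairs for requests to known AI endpoints."""
    for line in log_lines:
        # Assumed log format: "<timestamp> <user> <destination-host> <path>"
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_HOSTS:
            yield parts[1], parts[2]

sample = ["2024-06-01T09:14:02 alice api.openai.com /v1/chat/completions"]
print(list(flag_shadow_ai(sample)))  # [('alice', 'api.openai.com')]
```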

“Technical teams know how difficult their job is, and the reality is that to build AI tools that are making their job easier is also very, very difficult,” Gaver said. “If someone tells you that AI is magic, don’t buy into it; it’s a lot of hard work to build a tool.”

Shadow agents. Some AI agents are given high-level permissions, which makes it hard to keep tabs on who is where and doing what inside a system. Treating each AI agent as a unique identity is a good way to approach the problem, Cunningham told IT Brew, because it lets teams limit what that agent can access.

“We need to say it’s not just an identity, but it’s an identity for a very specific purpose and we’re going to limit the privilege of that identity to that singular purpose and that singular scope of data and that singular task,” Cunningham said. “It’s like a micro-segmentation of identity.”
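As a rough illustration of that micro-segmentation idea, an agent’s identity can be bound to a single purpose, data scope, and task, with everything else denied. The class and function names below are hypothetical, a sketch of the pattern rather than any specific implementation.

```python
# Illustrative only: "micro-segmentation of identity" as a record that binds
# an agent credential to one purpose, one data scope, and one task.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    name: str
    purpose: str      # the singular purpose this identity exists for
    data_scope: str   # the only dataset it may touch
    task: str         # the only task it may perform

def authorize(agent: AgentIdentity, purpose: str, dataset: str, task: str) -> bool:
    """Deny anything outside the identity's single declared scope."""
    return (agent.purpose, agent.data_scope, agent.task) == (purpose, dataset, task)

triage_bot = AgentIdentity("triage-bot", "ticket-triage", "support-tickets", "classify")
assert authorize(triage_bot, "ticket-triage", "support-tickets", "classify")
assert not authorize(triage_bot, "ticket-triage", "hr-records", "classify")  # out of scope
```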
