Shadow AI and personal devices could create network insecurity
The threat of shadow AI is made worse by the presence of AI agents, experts say.
Caroline Nihill is a reporter for IT Brew who primarily covers cybersecurity and the way that IT teams operate within market trends and challenges.
When it comes to network security, handling shadow AI should be simple. Just block any AI tools and associated sites that employees shouldn’t be accessing, and update that block list for anything new and suspicious that comes out.
Except it’s never that simple.
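The naive approach above is essentially a deny-list lookup. As a rough illustration only — the domain patterns here are hypothetical, and real deployments would enforce this at a proxy or DNS layer rather than in application code — the core check might look like this:

```python
from fnmatch import fnmatch

# Illustrative deny-list of AI-tool domains (hypothetical entries).
# In practice this list must be updated as new tools appear.
AI_BLOCKLIST = [
    "chat.example-ai.com",
    "*.example-llm.io",
]

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname matches any blocklist pattern."""
    return any(fnmatch(hostname, pattern) for pattern in AI_BLOCKLIST)
```

The weakness is exactly the one the article describes: the list only covers tools you already know about, and it does nothing for personal devices that never touch the corporate proxy.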
Jason Martin, co-founder and co-CEO of Permiso Security, said that while organizations can control access on work-issued devices, personal devices used in professional settings present difficulties for network security.
“You create an isolated, narrow tunnel for them to get in only from a compliant device, and then you monitor that device,” Martin said. “But I don’t know if everyone’s doing that, and I don’t see that happening everywhere.”
The rise of AI agents adds to the potential for network disruption, especially as people interact with agents via mobile devices and other parts of the network.
“I began to get trained as an individual on how I want to consume and use software on my mobile device, which I use more than anything else, and then I start coming back to work systems and I’d be disgruntled or upset at the lack of user experience,” Martin said.
He added: “It’s something that can help me in my personal life and therefore it can also help me in my professional life—and I’m going to want to use it.”
How much of a problem is this? Amanda Grady, VP and general manager of AI platform security for ServiceNow, told IT Brew that network security experts already monitor traffic patterns and communicate with employees about AI policies and security.
Despite that training, though, employees are capable of finding ways around an organization’s guardrails. “The key thing that companies need to do is ensure that they’re giving their employees access to legitimate AI tools,” Grady said, “because if you leave them behind, then I think that runs a greater risk of them going rogue and using shadow tools.”
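The traffic-pattern monitoring Grady describes can be approximated by scanning network logs for queries to known AI services. A minimal sketch, assuming a simple (user, domain) log format and a hypothetical domain list — real tooling would work from proxy or DNS telemetry:

```python
from collections import Counter

# Hypothetical set of AI-service domains to watch for.
KNOWN_AI_DOMAINS = {"chat.example-ai.com"}

def shadow_ai_report(dns_log):
    """Count per-user queries to known AI domains.

    dns_log: iterable of (user, domain) tuples.
    Returns a dict mapping each user to their hit count.
    """
    hits = Counter(user for user, domain in dns_log
                   if domain in KNOWN_AI_DOMAINS)
    return dict(hits)
```

A report like this flags candidates for a policy conversation rather than an automatic block — consistent with Grady's point that sanctioned alternatives matter more than enforcement alone.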
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
Martin added that the pervasiveness of personal devices within professional networks prevents cybersecurity experts from exerting full control over shadow AI.
While organizations can detect agents by observing activity at endpoints, Martin pointed to the danger of employees giving rogue AI agents too much access to the network.
“You may unintentionally unleash these agents broadly via credentials that have broad levels of access,” Martin said. “That means they could have catastrophic impact. They can also actuate change at rates that have never been seen really before.”
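The risk Martin describes — agents inheriting broad credentials — is the opposite of least privilege. One way to frame the mitigation is to give each agent its own narrowly scoped grant rather than a user's credentials. The scopes and agent names below are illustrative, not a real API:

```python
# Hypothetical per-agent scope grants. An agent may only perform
# actions explicitly granted to it, never inherit a user's full access.
AGENT_SCOPES = {
    "ticket-bot": {"read:tickets", "comment:tickets"},
}

def agent_may(agent: str, action: str) -> bool:
    """Deny by default: unknown agents and ungranted actions fail."""
    return action in AGENT_SCOPES.get(agent, set())
```

Under this model a compromised or misbehaving agent can only act within its grant, limiting the "catastrophic impact" of broad credentials.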
What should be done? Companies should understand which workflows employees want to automate the most and start there, Martin said.
Grady also pointed out the need to communicate policies to employees, along with putting the right tools in place to detect shadow AI. Humans in the loop are also a huge factor in detecting the unauthorized use of AI.
“I think it starts with determining who the owner is for AI within the company, setting the right policies, [and] communicating them clearly, but I think it’s equally important to make sure that you are allowing some AI,” Grady said. “There should be some sanctioned use of AI, otherwise companies are just going to get really left behind in this AI revolution.”