AI adoption is all the rage for today’s businesses, but rushing the transformation can open security gaps as staff continue to use shadow IT.
A new State of Information Security report from security platform ISMS.online (IO) highlights the risks in how AI adoption is managed in the workplace. Unmanaged AI, or shadow AI, is a real threat to internal security, and one that’s increasingly affecting organizations: IBM’s 2025 Cost of a Data Breach Report found that 20% of studied organizations suffered breaches stemming from employee use of shadow AI.
The shadow knows. Misuse of AI is more than a hypothetical, as IO CEO Chris Newton-Smith told IT Brew. Just over one-third (34%) of organizations polled in the IO survey reported concerns over “internal misuse of generative AI tools” (another way of saying shadow AI). Combined with the IBM numbers, that points to a problem.
“The call to action for organizations isn’t, ‘Stop using AI,’ because that doesn’t make sense—there’s tremendous business value,” Newton-Smith said. “It’s about, how do you complement the pace of your AI technology rollout with an AI governance framework to sit alongside it, so that as you adopt new technology, you can have confidence that you’re not exposing your data and your customers’ data to unnecessary risks or challenges.”
That’s why coming up with the proper framework and good internal policies is so important. In September, Ivanti CSO Daniel Spicer explained what leads to the adoption of shadow IT, an umbrella term that includes shadow AI. In his telling, it often, if not exclusively, stems from worker frustration.
“[From an employee perspective] shadow IT comes from the desire to do my job, that I am not enabled to do my job, and so I need to either come up with my own workaround, or go and find another solution, because it is not being provided to me,” Spicer said.
IO’s report details the risks posed by shadow AI, including the possibility of employees sharing private data with an LLM, which could expose that data to hackers or run afoul of regulations like the EU’s GDPR. Yet only 21% of those surveyed say they plan to put “responsible AI policies” in place in the next year. As IO points out, that’s far fewer than the 54% who say they adopted the technology too fast.
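In practice, a responsible AI policy often comes paired with a technical guardrail. As a rough illustration only (not something from the IO report), here’s a minimal sketch of a pre-submission check that flags obvious personal data, like email addresses or card numbers, before a prompt ever reaches an external LLM; the PII_PATTERNS table and the check_prompt and submit_to_llm functions are hypothetical, and a real deployment would lean on dedicated DLP tooling.

```python
import re

# Hypothetical patterns for illustration; real DLP tools cover far more cases.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any PII patterns found in the prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]


def submit_to_llm(prompt: str) -> str:
    """Gate prompts before they leave the organization's perimeter."""
    findings = check_prompt(prompt)
    if findings:
        # Block, redact, or route for human review instead of sending.
        raise ValueError(f"Prompt blocked, possible PII detected: {', '.join(findings)}")
    # ...call the organization's approved LLM endpoint here...
    return "sent"


if __name__ == "__main__":
    try:
        submit_to_llm("Summarize: jane.doe@example.com, card 4111 1111 1111 1111")
    except ValueError as exc:
        print(exc)
```

Even a crude gate like this shifts the default from “anything goes” to “approved path,” which is the spirit of the governance frameworks Newton-Smith describes.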
Keep it simple. Luckily, as IO’s Chief Product Officer Sam Peters told IT Brew, the solution is simple: better regulations and internal policies. Companies adopting AI will need to take the initiative to make working with AI an official part of the job, not something staff feel they need to do on their own.
“You need to make sure that you’re bringing people with you…giving them the appropriate tooling and the appropriate education to make the most of it,” Peters said.