
Will shadow AI get worse in 2026?

IT pros shine a little light on defensive strategies.


Billy Hurley has been a reporter with IT Brew since 2022. He writes stories about cybersecurity threats, AI developments, and IT strategies.

This year, one of our favorite tech terms got an AI makeover.

“Shadow IT”—a familiar expression referring to employees’ unauthorized use of hardware or software—is so 2022. Now it’s all about shadow AI.

According to IT pros who spoke with IT Brew, the challenge of detecting and monitoring the near-daily arrival of new generative AI products will continue.

“We’re still in the infancy of AI,” Jeff Collins, CEO at observability platform WanAware, told IT Brew. “A lot of us would like to think that every organization on the planet is using AI. There’s a lot of organizations out there, especially in the user population, that aren’t using it today. The realm of shadow AI will continue growing substantially through 2026.”

Have at IT! A Cybernews survey of 1,000 US employees, published Sept. 30, found that 59% use AI tools their employers haven’t approved. According to the study, 77% of companies have an official policy on personal AI use, and 52% of employers provide approved AI products.

And the number of products is only increasing: Cybersecurity platform Netskope has tracked “more than 1,550 distinct generative AI SaaS [software-as-a-service] applications,” according to its report released on Aug. 4—a rapid increase from the 317 AI SaaS apps the company had monitored in February.

In a study of anonymized data from its cybersecurity platform between February and May 2025, Netskope also found that the average organization uploads 8.2 GB of data to AI apps each month, up from around 5 GB in September 2024.

That, too, is a potential issue on the rise: Employees eager to use AI run the risk of placing company data, internal documents, and proprietary code into tools with unknown data policies.

And not all 1,550 GenAI tools are built the same: Some services may use input data to train their models, while others may store the data.

The costs of shadow AI are also clear. According to IBM’s Cost of a Data Breach report, which studied incidents between March 2024 and February 2025, 20% of organizations said they suffered a breach “due to security incidents involving shadow AI.” The exposures, IBM wrote, added $200,000 to the average breach cost. (Personally identifiable information from customers was the most common data exposed in these cases.)

“Organizations often don’t look for shadow AI, so it remains undetected,” IBM wrote at the time.

Techfest of champions. Shadow AI puts pressure on IT pros who must manage it while spotting data-security risks.

One way to get ahead of shadow AI is to set an AI strategy in advance. For example, cybersecurity company SentinelOne has approved internal AI tools, such as Gemini and ChatGPT, for specific teams. The vendor also has a “coalition” of eager participants across the organization, SentinelOne Chief AI Officer Gregor Stewart said, who test new tools and propose others to pilot. This early-access option also lets the company vet a tool’s data security controls and enterprise compatibility.

Stewart believes organizations defending against shadow AI must demonstrate an openness to adopting new tools, offering options and easy pathways for employees to take the products for a spin.

“Instead of saying, ‘You get fired if you even touch these things,’ you’re trying to say to people, ‘Hey, we have three ways of onboarding a tool,’” Stewart said.

The company has also bet on the importance of monitoring inputs to large language models, acquiring AI-security company Prompt Security in August 2025.
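To picture what “monitoring inputs” can mean in practice, consider a pre-flight check that scans each outbound prompt for sensitive patterns before it leaves the network. The Python sketch below is purely illustrative; the pattern list, function names, and blocking policy are assumptions, not SentinelOne’s or Prompt Security’s actual product.

    import re

    # Hypothetical pre-flight check for outbound LLM prompts. The patterns
    # and the block-on-match policy below are illustrative assumptions.
    SENSITIVE_PATTERNS = {
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
        "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
        "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def inspect_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive patterns found in the prompt."""
        return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

    def guarded_submit(prompt: str, send_to_llm) -> str:
        """Refuse to forward a prompt that trips a rule; otherwise pass it on."""
        findings = inspect_prompt(prompt)
        if findings:
            # A real deployment would also log this event for security review.
            raise PermissionError(f"Prompt blocked; matched: {', '.join(findings)}")
        return send_to_llm(prompt)

    if __name__ == "__main__":
        print(inspect_prompt("Summarize our Q3 roadmap."))            # []
        print(inspect_prompt("Use key AKIAABCDEFGHIJKLMNOP please"))  # ['aws_access_key']

Commercial prompt-security products go well beyond regex matching, but the shape is the same: inspect the input on the way out, not the model on the way in.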

Other recommendations:

  • Find the tools that teams truly want to use, provide guardrails (like authentication and tracking), and encourage others to use those IT-approved ones, according to Adam Arellano, field CTO at AI DevOps company Harness. Some enterprise-class platforms, for example, log query data; a minimal sketch of that guardrail pattern follows this list. “That’s one of the hardest parts is making sure the developers know where they can go and know what their limits are because, ultimately, they do want to do the right thing. They enjoy secure employment and a paycheck as much as anybody,” Arellano told IT Brew.
  • WanAware’s Collins emphasized the importance of educating users about data risks.
  • For Carl Knoos, CIO at IT consulting firm Fusion Collective, the strategy for shadow AI goes beyond simply blocking or approving tools. Carefully curate tools and processes, he advises, and clearly communicate how engineers, developers, and other teams will use them. “By having a transparent process, you will have a much faster lead time to discovering issues before they arrive,” Knoos said.
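Arellano’s guardrails point can be made concrete with a small gateway sketch: route employee AI traffic only to approved endpoints, and log query metadata for auditing. Everything here is a hypothetical illustration — the allow-list, endpoint names, and log format are assumptions, and a real deployment would sit behind a corporate proxy or secure web gateway rather than a standalone script.

    import json
    import logging
    from datetime import datetime, timezone

    # Hypothetical AI-gateway sketch: allow only IT-approved endpoints and
    # log query metadata for auditing. The allow-list and log fields below
    # are illustrative assumptions.
    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("ai_gateway.audit")

    APPROVED_ENDPOINTS = {
        "api.openai.com",
        "generativelanguage.googleapis.com",
    }

    def route_request(user: str, endpoint: str, prompt: str) -> bool:
        """Allow only approved endpoints; record who asked, where, and when."""
        allowed = endpoint in APPROVED_ENDPOINTS
        audit_log.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "endpoint": endpoint,
            "allowed": allowed,
            "prompt_chars": len(prompt),  # log size, not content, to limit exposure
        }))
        return allowed

    if __name__ == "__main__":
        route_request("dev1", "api.openai.com", "Explain this stack trace")    # allowed
        route_request("dev1", "shadow-ai.example", "Here is our source code")  # blocked

Logging metadata rather than full prompt text is one design choice among several; platforms that log query content trade privacy for richer investigation trails.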
