IT Strategy

Asked and answered: How to limit AI and still be seen as IT innovators

Tech pros from PwC and Capterra weigh in on risk and communication strategies.

[Image: a human and a robot shaking hands, reaching out of computer screens. Credit: Svetazi/Getty Images]


During a recent IT Brew event dedicated to guiding tech pros through the challenge of app sprawl, an attendee had the following question:

Culturally, how do I help my IT team be seen in the company as a driver of innovation and not a barrier to bringing on new tools? We recently had to shut down the use of an AI tool we hadn’t fully evaluated yet because of the large amount of proprietary data we have to keep secure and I’m worried it’s created a culture where employees are using tools anyways without us knowing (arguably more unsafe).

IT Brew guest and Gartner Associate Principal Analyst Olivia Montgomery told attendees that IT teams should soften their communication style and foster a more collaborative spirit, rather than banning new AI outright.

“Maybe instead of just shutting down an app completely, you work with them to integrate it into the ecosystem and be very clear about, ‘Hey, let’s together get this into a secure environment [where] you’re happy with the functionality and we’re happy with the security and the usage of it,’” Montgomery said.

PwC’s annual CEO survey found that 49% of global chief execs—polled in October and November 2024—expected GenAI to increase profitability over the next year.

We posed the IT Brew reader’s question to Rohan Sen, a partner at PwC and leader on responsible AI use, who discussed a three-part strategy for IT teams to be seen as innovation drivers, not downers:

  • Risk stratification for each use case
  • Clear rules on behaviors that are “outside the box” of risk tolerance
  • Forums for IT and tech teams to present risks to business leaders

We asked Sen for examples of the rules and risks.

Below are his responses, edited for length and clarity.

What are some “rules of the road”?

Let’s say we’re sending some data to a third-party vendor. As an example, a rule of the road might mean that that third-party vendor needs to go through your standard procurement intake and due diligence process and be approved—not just from a “know your customer” concept, but maybe there’s also a degree of security reviews that have been done as part of that, the standard onboarding of that third party, and those approvals are in place. So, the question then becomes: If you’ve done that and it’s been approved, go. But if you’ve not done that, then pause, right? And then it becomes a decision about, “Alright, what are the risks here of not having that approval in place? And are those risks small enough to continue?” That’s a business and IT joint decision.
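For illustration only (this is not part of Sen’s remarks), the “go or pause” logic he describes could be written down as a simple policy check. The field names and structure below are assumptions made for the sketch, not any vendor’s or PwC’s actual process.

```python
# Illustrative sketch: a hypothetical "rule of the road" gate before sending
# data to a third-party AI vendor. Field names and flow are assumptions.
from dataclasses import dataclass

@dataclass
class VendorStatus:
    procurement_intake_done: bool   # went through standard procurement intake
    security_review_done: bool      # security review completed
    approved: bool                  # onboarding approvals in place

def road_rule_decision(vendor: VendorStatus) -> str:
    """Return 'go' if standard onboarding and approvals are in place;
    otherwise pause so business and IT can jointly weigh the risk."""
    if vendor.procurement_intake_done and vendor.security_review_done and vendor.approved:
        return "go"
    return "pause: escalate for a joint business/IT risk decision"

# Example: a vendor that cleared intake but not the security review gets paused.
print(road_rule_decision(VendorStatus(True, False, False)))
```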


What kind of trade-offs can be discussed in forums?

In the [attendee] question, there was a hesitance, because the data was sensitive. Well, one approach might be to then de-identify the data, and that might actually reduce risk down to a level that might be acceptable. IT and business should think creatively as to what measures they can take to buy down the risk so that they can continue to drive that innovation.
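As a purely illustrative sketch of what “buying down” risk through de-identification might look like in practice, the snippet below masks direct identifiers before records leave the environment. The field names and hashing approach are assumptions for demonstration, not a recommendation from Sen or PwC, and not a compliance recipe.

```python
# Illustrative sketch: de-identify records before they reach an external AI tool.
import hashlib

SENSITIVE_FIELDS = {"name", "email", "account_number"}  # hypothetical identifiers

def de_identify(record: dict) -> dict:
    """Replace direct identifiers with salted one-way hashes; keep other fields."""
    salt = "rotate-me"  # hypothetical; a real program would manage salts/keys properly
    cleaned = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            cleaned[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
        else:
            cleaned[key] = value
    return cleaned

print(de_identify({"name": "Ada Lovelace", "email": "ada@example.com", "balance": 1200}))
```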

Can you help me understand the concept of risk stratification, with an example or two?

Let’s say an employee at a company is using something like a Microsoft Copilot to write an email or summarize a document, but they are actually then reviewing that document or reviewing the summary and then incorporating it into some work product—having a human in the loop. Relatively low risk, right? It’s one person. They’re doing it to accelerate their productivity, versus an AI solution that perhaps is looking at large amounts of financial data, and then preparing summaries and narratives that are going out, let’s say, as part of an investor relations packet, or something that’s a little bit more public facing. That’s a little bit more high risk. The cost of getting that wrong is much, much higher than in that productivity example, and so aligning the level of controls and review and requirements based on risk—that’s what I’m talking about in terms of risk stratification. When you start embarking on an AI solution, understand, based on well-defined guidelines, what’s considered low, medium, and high risk.
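A rough sketch (not from the interview) of how a team might encode that kind of tiering: the factors loosely follow Sen’s examples—human review, data sensitivity, audience—but the weights and thresholds are invented purely for illustration.

```python
# Illustrative sketch: score an AI use case into low/medium/high risk tiers.
# Factors, weights, and thresholds are assumptions, not PwC guidance.
def risk_tier(human_in_loop: bool, data_sensitivity: str, audience: str) -> str:
    score = 0
    score += 0 if human_in_loop else 2
    score += {"public": 0, "internal": 1, "confidential": 2, "regulated": 3}.get(data_sensitivity, 2)
    score += {"individual": 0, "internal": 1, "external": 3}.get(audience, 1)
    if score <= 2:
        return "low"    # e.g., a Copilot-drafted email reviewed by its author
    if score <= 4:
        return "medium"
    return "high"       # e.g., auto-generated investor relations narratives

print(risk_tier(True, "internal", "individual"))   # low
print(risk_tier(False, "regulated", "external"))   # high
```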
