
Why PwC has its own ChatGPT

Who needs big ol’ ChatGPT when you can have your own small one?

AI can seem big and scary:

It answers difficult questions. It handles data at a rate that humans cannot. It hallucinates!

“People fear AI; it’s one thing to use AI on the internet today to get your next travel itinerary. It’s another thing to use it at work,” said Mohamed Kande, vice chair, US consulting solutions co-leader, and global advisory leader at the professional services firm PwC, during a May news conference in Manhattan.

To address the fears of vast, almost all-knowing, AI-powered models, PwC has an idea: smaller, more contained AI-powered models. In April, the consultancy deployed a pilot ChatGPT for internal use by a small set of employees.

The AI test run allows for a more controlled deployment of the technology, and is a bet on a more “precise” and limited model than the vast public one that people have gotten to know.

“As you train the models in those proprietary environments, you can actually tighten up its understanding of the data, because now instead of dealing with the large model sets, you’re actually dealing with a more precise data set,” said Joe Atkinson, vice chair and chief products and technology officer at PwC, during the roundtable.

$1b, but start small! In April, PwC announced a $1 billion investment in AI, including a partnership with Microsoft. With Azure’s OpenAI capabilities, PwC launched its own ChatGPT, trained on PwC-approved data. The pilot test aims to answer employees’ questions related to company policies, processes, and proprietary data, while also demonstrating the value of small, contained AI.
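PwC hasn’t published its implementation, but a minimal sketch of what wiring an internal assistant to a private Azure OpenAI deployment can look like might run along these lines. The endpoint, deployment name, and system prompt below are all hypothetical placeholders:

```python
# Minimal sketch of an internal chatbot on a private Azure OpenAI deployment.
# Endpoint, deployment name, and prompts are hypothetical; PwC has not
# published its implementation.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://internal-example.openai.azure.com",  # hypothetical
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="internal-gpt-pilot",  # name of the company's private deployment (hypothetical)
    messages=[
        {
            "role": "system",
            "content": "Answer only from approved company policy documents. "
                       "If the answer isn't in them, say you don't know.",
        },
        {"role": "user", "content": "How many PTO days do new hires get?"},
    ],
)
print(response.choices[0].message.content)
```

Because the deployment lives inside the company’s own Azure tenant, prompts and answers stay within the firm’s environment rather than flowing to the public ChatGPT service.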

ChatGPT was trained on huge amounts of human-written internet data and conversations. Trained on that immense dataset and open to the public, the model has been found to occasionally make up facts, or “hallucinate,” as it handles all that info.


A limited data set potentially improves the quality of answers.

“If you’re building one of these specifically for the medical industry or the financial industry, then you can train that understanding, you can train the algorithm, you can feed in the data a little bit more accurately. You’re not relying on general cases. You’ve got much better knowledge of what the industry needs to know,” said Seth Robinson, VP of industry research at the nonprofit trade organization CompTIA, in a separate conversation with IT Brew.
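One common way to keep a model on a “more precise data set” is retrieval: pull only the most relevant approved documents and hand those to the model as context. The sketch below is illustrative, not PwC’s or CompTIA’s method; the corpus is made up, and real systems typically rank documents with embedding search rather than crude word overlap:

```python
# Sketch of grounding answers on a small, approved document set
# (retrieval-augmented generation). Corpus and scoring are illustrative.
approved_docs = {
    "pto_policy": "New hires accrue 15 PTO days per year, prorated by start date.",
    "expense_policy": "Client travel must be booked through the internal portal.",
}

def retrieve(question: str, docs: dict[str, str], k: int = 1) -> list[str]:
    """Rank docs by word overlap with the question; return the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(
        docs.values(),
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

question = "How many PTO days do new hires get?"
context = "\n".join(retrieve(question, approved_docs))
# The prompt now contains only vetted text, narrowing what the model
# can draw on when it answers.
prompt = f"Using only this approved text:\n{context}\n\nQuestion: {question}"
```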

Building an internal AI model, however, can be costly once data training and processing power are factored in, which may put a niche, company-specific AI model out of reach for many organizations.

“So, if we only have some relatively small number of models that are out there, then does another cottage industry pop up, of putting a layer on top of those models that begins to be tailored for industries?” Robinson told IT Brew. “That will be something pretty interesting to watch.”

Sounds big. And PwC hopes to make AI less scary.

“As we continue to train it, we expect it to get better and better,” said Atkinson. “And part of the way we believe will build that trust is you’ll have 200 advocates that will say, ‘I’m pretty amazed at what this thing can do.’”
