IT help desks face new challenges as workplaces go all-in on AI
Experts say that strong data governance, limited permissioning, and human oversight are key as IT help desks navigate the GenAI era.
Login failures, software snafus, and malfunctioning hardware may soon look like quaint problems to IT help desks in an increasingly AI-defined workplace.
The days of submitting a ticket for these relatively straightforward issues are making way for an era in which AI agents are embedded throughout workflows and employees are jumping on the vibe coding bandwagon (don’t even get us started on workslop).
As Saoud Khalifah, CEO and CTO of AI verification startup Ciphero, told us: “I feel sympathy for the IT desks.”
That’s because the humans operating those desks could face problems that are far harder to untangle, thanks to the black-box nature of LLM-powered tools. At the same time, they’re on the front lines of helping workers learn the ins and outs of AI, even as roles in their own departments become more automated.
Our sources told us that effectively preparing for and reacting to AI-related tech troubles starts at the very top with strong data governance policies, and should include limited permissioning, robust testing and validation, the ability to clearly explain AI tools to users, and strict data oversight.
“Someone that calls a help desk has already started to lose trust in the AI itself,” Todd Moore, VP of data security products at aerospace and defense company Thales, told us. “So the help desk person ends up being more of a therapist, trying to explain what’s happened versus being able to necessarily fix the issue. It has to happen earlier in the process.”
Top down
One piece of advice experts agreed on was the importance of establishing policies that give employees a framework for appropriate use of AI models.
“The help desk can’t really fix an AI issue,” Moore said. “It’s really the company, and whoever’s implemented the model has to have implemented the appropriate data governance.”
Part of that entails communicating to employees what AI models and products are approved, and discouraging workers from using consumer-facing tools that their employer didn’t sign off on.
But even for those models that are company-approved, there are risks.
“Even if you approve something, it may down the road get hacked through a supply-chain attack and other methods, and now it’s a malicious actor inside your organization,” Khalifah said, adding that he requires his own employees to experiment with new AI models in sandboxes as one guardrail.
“It’s a conundrum we’re in, because the utility of these things [is that] it connects to your valuable information. But by baseline, they’re just insecure,” he added. “I’m noticing that there’s a lot of hype and not security. Security is an afterthought.”
Permission slip
When it comes to using GenAI at work, it’s better to ask for permission than forgiveness.
Khalifah said that one of the biggest pitfalls he sees is organizations giving out too many tokens.
“I think that’s very dangerous,” he said. “You need to be very careful in how you give that access. The second thing is permissioning: What kinds of permissions are you giving this AI model or agent to have? Is it read-only where it can read things and not write and change stuff? Because that’s where the dangers occur. What if it starts writing malicious stuff into a file, and then that gets taken over, the agent executes it, and so on?”
This also gets at the heart of one of the biggest differences between AI and traditional software: AI is probabilistic, not deterministic.
“If you give the AI product a lot of ability to be able to manipulate systems and manipulate accesses and stuff like that, it’s very different than the old-school ways of software,” Khalifah said. “If you somehow inject some kind of Trojan horse into these non-deterministic systems, you can’t control what they do.”
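In practice, the read-only question Khalifah raises can be enforced at the point where an agent requests a tool. A minimal sketch of such a permission gate (the tool names and function signatures here are hypothetical illustrations, not any particular framework’s API):

```python
# Sketch of a permission gate for agent tool calls: read-only tools run
# freely, write-capable tools require an explicit opt-in. Tool names are
# hypothetical illustrations.

READ_ONLY_TOOLS = {"read_file", "search_docs", "list_tickets"}
WRITE_TOOLS = {"write_file", "delete_file", "run_command"}

def run_tool(tool_name, args):
    # Placeholder dispatcher; a real system would invoke the actual tool here.
    return f"ran {tool_name} with {args}"

def execute_tool_call(tool_name: str, args: dict, allow_writes: bool = False):
    """Run an agent-requested tool only if policy allows it."""
    if tool_name in READ_ONLY_TOOLS:
        return run_tool(tool_name, args)   # safe: cannot change state
    if tool_name in WRITE_TOOLS and allow_writes:
        return run_tool(tool_name, args)   # explicitly opted in by a human
    raise PermissionError(f"Tool {tool_name!r} is not permitted for this agent")
```

The point of the design is that write access is denied by default, so a compromised or misbehaving agent cannot “start writing malicious stuff into a file” unless someone has deliberately opened that door.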
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
By subscribing, you accept our Terms & Privacy Policy.
Alex Kurashev, a security engineer at AI coding agent startup Augment Code, told us that permissioning is one of the biggest issues with AI implementation.
“The least access you can give it, the better the tool is going to perform in terms of not leaking information,” he said.
He recommended establishing strict parameters around what data AI models have access to, and then creating task-specific AI agents that have just enough information to do what you want.
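Kurashev’s least-access approach can be sketched as a scoping table: each task-specific agent is mapped to only the data sources it needs, and everything else stays invisible to it. The agent and source names below are invented for illustration:

```python
# Sketch of task-specific agents with minimal data access: an agent's
# context is assembled only from sources it is explicitly scoped to.
# Agent names and data sources are hypothetical.

AGENT_SCOPES = {
    "password-reset-bot": {"identity_directory"},
    "license-audit-bot": {"software_inventory"},
}

DATA_SOURCES = {
    "identity_directory": ["alice: locked", "bob: active"],
    "software_inventory": ["excel: 120 seats", "slack: 300 seats"],
    "payroll": ["<sensitive>"],  # never granted to either agent
}

def build_context(agent: str) -> list[str]:
    """Collect only the records an agent is scoped to see."""
    allowed = AGENT_SCOPES.get(agent, set())
    context = []
    for source, records in DATA_SOURCES.items():
        if source in allowed:
            context.extend(records)
    return context
```

An agent built this way has “just enough information to do what you want”: even if its output leaks, the blast radius is limited to the data it was granted.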
All about the data
Since our robot overlords aren’t in charge yet, it’s important to have human oversight embedded throughout AI workflows.
“Monitor every single action that is destructive that an agent can execute,” Khalifah said. “If you’re using AI to write code…split everything into small changes, not massive changes. These AI agents love to do thousands of lines of code of changes. You can’t review that as a human. It’s going to take you weeks. So you just negated the productivity gain.”
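Both halves of that advice (holding destructive actions for a human, and refusing changes too large to review) can be expressed as a simple policy check. A sketch, with illustrative action names and a threshold chosen only as an example:

```python
# Sketch of a human-oversight gate: destructive agent actions are held for
# review, and oversized code changes are bounced back to be split up.
# The action names and line threshold are illustrative assumptions.

DESTRUCTIVE = {"delete", "overwrite", "deploy"}
MAX_CHANGED_LINES = 200  # assumption: roughly what a human can review at once

def review_action(action: str, changed_lines: int = 0) -> str:
    if action in DESTRUCTIVE:
        return "HOLD: requires human approval"
    if changed_lines > MAX_CHANGED_LINES:
        return "HOLD: split into smaller changes"
    return "OK: auto-approved"
```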
And before rolling out new AI tools, IT and security teams should be validating them.
“The best way we’ve seen to do it is: define your space, constrain it, do a dry run, and then see what the output is,” Kurashev said.
Kurashev also recommended that anyone responsible for AI implementation in the workplace stay up to date on reports of model vulnerabilities, as well as updates from CISA’s Cybersecurity Advisory Committee.
“But at the end of the day, it’s also doing the internal testing to make sure you break your product internally and see where those cracks are,” he said. “That way you can find out those things before it gets to a hacker.”
Moore emphasized the importance of using up-to-date data in AI models.
“And to keep up to date, you do have to implement things like retrieval augmented generation,” Moore said. “A model gets stale over time, so you always want to make sure that you have that control to keep that data up to date. And making sure when the help desk gets requests, it’s easy enough to ask questions about the data governance model, the integrity of the data, and there’s some checks that help desk person can do.”
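Stripped to its core, retrieval augmented generation means fetching current records from a live store and prepending them to the prompt, so answers reflect today’s data rather than whatever the model memorized at training time. A toy sketch (the keyword-overlap scoring is a stand-in; production systems use embedding similarity):

```python
# Minimal RAG sketch: rank documents against the query, take the top k,
# and build them into the prompt as grounding context. Keyword overlap is
# a toy scoring function used only for illustration.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Because the document store, not the model, holds the facts, keeping answers current becomes a data-refresh problem, which is exactly the kind of control Moore describes.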
Am I hallucinating?
One of the most common AI problems landing on IT help desks is chatbots spitting out incorrect answers, or hallucinations. It’s important to note, Gartner analyst Tori Paulman said via email, that GenAI “is a prediction machine; it is designed to produce an answer, even when it shouldn’t.”
“Hallucinations often stem from poorly framed prompts, missing or inaccessible data or context, and lack of guardrails,” she added.
This is one area where IT help desk employees may want to ensure they’re familiar with the ins and outs of a given AI model, so they can explain it clearly to users and help them write effective prompts.
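A help desk could even hand users a template that bakes in the fixes Paulman lists: supply context, frame the task, and add a guardrail telling the model not to guess. A sketch, with wording that is purely illustrative:

```python
# Sketch of a prompt-framing helper targeting common hallucination causes:
# it supplies context, states the task, and adds a guardrail instructing
# the model to admit when it lacks the answer. Wording is illustrative.

def frame_prompt(task: str, context: str) -> str:
    return (
        "You are an internal IT assistant.\n"
        f"Context:\n{context}\n\n"
        f"Task: {task}\n"
        "If the context does not contain the answer, say you do not have "
        "enough information instead of guessing."
    )
```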
“When people approach and they’re disappointed in the results they’re seeing from AI, I think it would be good for these IT specialists to explain in high level, but also technology terms, about how AI is different than any other model and system we’ve ever worked with before,” Moore said.
Moore recommended that help desk workers spend plenty of time experimenting with their company’s AI tools themselves.
“Building that trust and credibility back up in AI is really being able to explain how AI is working and putting it into the right language that people can understand,” he said.