While there’s no doubt agentic AI is one of the hottest trends affecting IT pros, the technology presents serious risks for companies looking to take humans out of the loop as a way to automate workflows or shrink their teams.
Experts point to a lack of visibility as one reason humans still need to check on and stay involved with agentic AI, especially when it can make independent decisions and drive outcomes on its own.
Thomas Squeo, the CTO for the Americas at Thoughtworks, called the first generation of agents “relatively simplistic” and said that providers were not typically able to offer them at scale in a production enterprise environment.
Human in the loop. The standard right now, Squeo said, is keeping a human in the loop so that an agent isn’t running through an activity on an ongoing basis without human intervention.
If the AI strikes a guardrail, the agent can send a request for a human to take action, based on how confident the model is in what it is about to do.
Squeo offered the example of a billing system: a paper bill comes in, the agent runs optical character recognition (OCR) to convert the image into machine-readable text, and it reports only 80% confidence in the result, too low for it to send the bill along automatically.
“You might send that along into a queue for customer service…or somebody that’s in client support that goes and looks at that and says, ‘Yes, this agent was correct,’” Squeo said. “That behavior, when the human in the loop says, yes, it’s correct, it might then say, ‘I can increase my confidence on that case going forward.’”
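To make that concrete, here is a minimal sketch of confidence-gated escalation; the threshold value, the review queue, and the feedback hook are illustrative assumptions, not details from Thoughtworks’s systems:

```python
from dataclasses import dataclass

# Illustrative cutoff: anything below this goes to a human.
# The 80%-confidence case in Squeo's example would be escalated.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class OcrResult:
    text: str
    confidence: float  # 0.0-1.0, as reported by the OCR engine

human_review_queue: list[OcrResult] = []

def route_bill(result: OcrResult) -> str:
    """Send high-confidence extractions straight through; escalate the rest."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-processed"
    # Too uncertain: park it for client support to confirm.
    human_review_queue.append(result)
    return "queued-for-human-review"

def record_human_verdict(result: OcrResult, correct: bool) -> None:
    """Feed the reviewer's answer back so similar cases can clear the bar later.

    Here it is just logged; a real system might adjust a per-document-type
    threshold or retrain the extraction model.
    """
    print(f"human verdict correct={correct} for {result.text[:40]!r}")
```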
Now, with newer agents and the maturation of Model Context Protocol (MCP), the standard that connects agents to data sources and tools, agents are able to understand their environment, act on behalf of a user, and apply reasoning skills.
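For a sense of what that connection looks like in practice, here is a minimal MCP server sketch using the FastMCP helper from the official Python SDK; the server name and the billing tool are made-up examples, not anything from the article:

```python
# Requires the official MCP Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

# Hypothetical server exposing one tool to any connected agent.
mcp = FastMCP("billing-data")

@mcp.tool()
def lookup_invoice(invoice_id: str) -> str:
    """Let an agent fetch an invoice's status so it can reason over it."""
    # A real deployment would query the billing system here.
    return f"invoice {invoice_id}: status=open, amount=120.00"

if __name__ == "__main__":
    mcp.run()  # stdio transport is the common default for local servers
```

An agent connected to this server discovers `lookup_invoice` automatically and can call it as part of a workflow, which is exactly the kind of autonomy the guardrails below are meant to contain.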
Squeo said that agents should operate within guardrails, and recommended that IT teams use an observability system to report when an agent strikes against the rails.
“In some cases, [the agent] might strike and it might be fine,” Squeo said. “It might be a strike because of the way that the rules for how that agent is operating is causing it to operate against its criteria for being able to be governed.”
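A rough sketch of that reporting, with the event fields and severity labels as assumptions, could emit a structured log line on every strike so dashboards and alerts can separate benign rule collisions from genuine violations:

```python
import json
import logging
from datetime import datetime, timezone

log = logging.getLogger("agent.guardrails")
logging.basicConfig(level=logging.INFO)

def report_guardrail_strike(agent_id: str, rule: str,
                            action: str, severity: str) -> None:
    """Emit a structured event so the observability stack can track strikes per rule."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "rule": rule,
        "attempted_action": action,
        # A benign strike may just mean the rule is written too tightly.
        "severity": severity,
    }
    log.info("guardrail_strike %s", json.dumps(event))

# Hypothetical example: an agent tried to email a customer after hours.
report_guardrail_strike("billing-agent-7", "no-outbound-email-after-hours",
                        "send_email", severity="review")
```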
Black box. Cyber professionals typically divide what needs securing into layers. Alon Diamant-Cohen, principal consultant for Stratascale’s Hybrid Cloud Security team, pointed to the Open Systems Interconnection (OSI) model, a seven-layer framework spanning application, network, and infrastructure layers, and said those layers effectively meld into one when an organization introduces agentic AI.
Diamant-Cohen said that an IT pro must figure out how to secure all of the layers in a tech stack at once. In practice, that can mean establishing governance or rethinking how policies are constructed.
Among those layers sits the network layer, where MCP lives for agentic AI. Diamant-Cohen said the concept is “awesome, because you get all these new capabilities out of your tools when you integrate them. But there’s not a lot of consideration to the fact that you’re adding some kind of permanent infrastructure and you need to secure it.”
“A lot of MCP configurations have some black box elements to them, where you just can’t see what it’s doing,” Diamant-Cohen said.
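One partial answer to that opacity, sketched here on the assumption that you control the client side of the integration, is to wrap every tool call the agent makes so each request and result is at least recorded:

```python
import json
import logging
from datetime import datetime, timezone
from typing import Any, Callable

audit = logging.getLogger("mcp.audit")
logging.basicConfig(level=logging.INFO)

def audited(tool_name: str, call: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a tool invocation so every call crossing the MCP boundary is logged."""
    def wrapper(**kwargs: Any) -> Any:
        audit.info("mcp_call %s", json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "tool": tool_name,
            "args": kwargs,
        }))
        result = call(**kwargs)
        audit.info("mcp_result tool=%s type=%s", tool_name, type(result).__name__)
        return result
    return wrapper

# Usage: wrap a tool before handing it to the agent runtime.
safe_lookup = audited("lookup_invoice",
                      lambda invoice_id: f"invoice {invoice_id}: open")
safe_lookup(invoice_id="INV-1001")
```

This does not open the black box, but it gives security teams a complete record of what crossed it.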
The “most terrifying” aspect of this process, according to Diamant-Cohen, is the network layer, specifically the connectivity between agents.
“You’re essentially standing up a node with outbound and inbound connectivity to your AI agent,” Diamant-Cohen said. “Which is trained on all your sensitive data that has some black-box elements that you do not have full visibility into, or you can’t fully understand why it made the decisions it did.”
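A common first control for that kind of node, sketched here with hypothetical hostnames, is an explicit egress allowlist so the agent can only reach the systems it is supposed to:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the only hosts the agent node may call out to.
ALLOWED_HOSTS = {"billing.internal.example.com", "mcp.internal.example.com"}

def egress_permitted(url: str) -> bool:
    """Gate outbound requests from the agent node against the allowlist."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

assert egress_permitted("https://billing.internal.example.com/api")
assert not egress_permitted("https://attacker.example.net/exfil")
```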