Agentic AI and the New IT Risk Landscape
Agentic AI is reshaping IT infrastructure fast. Learn the risks, the opportunities, and when to act—before your company gets left behind.
Agentic AI is the cutting edge of the AI revolution. Tech vendors everywhere are building next-generation AI agents that will converse with human users and execute complex workflows. Imagine an AI agent that can scan paper bills, convert everything it sees into machine-readable text, and then interact with Accounts Payable to pay certain vendors on a schedule.
While executives are excited about agentic AI taking on complex, multi-part tasks with little or no need for human oversight, IT professionals are justifiably concerned about the prospect of AI agents running amok within company infrastructure, creating cybersecurity nightmares.
In this e-book, you’ll learn about how agentic AI is evolving, how it’s already impacting companies and IT infrastructure, and why this technology still demands a human in the loop. These articles offer comprehensive advice for IT pros on how to successfully integrate agentic AI into their current workflows—or hold off on implementation for the time being.
What’s Inside
Chapter One
How agentic AI is changing tech
- 2025 became the year of agentic AI
- Agentic AI presents opportunities, expanded threat surface
- How protocols turn agentic AI into a game of telephone
Chapter Two
How agentic AI impacts your company and IT
- What they’re saying: IT CEOs on agentic AI
- How CIOs feel agentic AI has changed their roles
- Agentic AI could be the savior of cybersecurity budgets
- How to shop for software in the era of agent-washing
Chapter Three
Agentic AI still needs humans in the loop
- Agentic AI is changing workflows as executives urge human-first approach
- How an AI pro puts ‘handbrakes’ on agentic decisions
- Why your agentic AI still needs a human in the loop

2025 became the year of agentic AI
“Organizations are finally starting to realize that we can’t just rely on AI to self-govern or police its own behavior,” a technical director says.
In 2025, AI was halfway there—then agentic was living on a prayer.
You can roughly split the year in half between the pre- and post-agentic AI eras. Agentic AI wasn’t invented this year, but since IT Brew attended the RSAC Conference in late April, it seems every organization is promoting its use of the technology.
NCC Group Technical Director David Brauchler told IT Brew that “agentic” was the word of the year as systems increased their complexity and capabilities. The technology is acting as a force multiplier for organizations—but questions remain about risk.
“We’re seeing a change from isolated use cases where we drag and drop AI into some broader application or system into using AI to power functional operations that we couldn’t do with traditional technologies,” Brauchler said. “That being said, you have a lot of security risks and concerns that come along with that.”
Hold on to what we’ve got. The evolution of the technology has been less chaotic than expected, Globant SVP of Digital Innovation Agustín Huerta told IT Brew. In Huerta’s view, the somewhat more streamlined and almost open-source way agentic AI was developed in 2025 shows the benefits of cooperation. That’s most clear in model context protocols, which connect LLMs to outside sources and are enabling companies to find common ground.
“It’s like they are not competing with each other anymore,” Huerta said. “In that sense, in terms of creating protocols, they are embracing the one that has the idea first and the approach for that, and they understand that the true future for the evolution of agents is that those protocols keep evolving and keep being embraced by as many players as possible.”
Yet this approach may have some negative side effects. There’s no doubt that usage of agentic AI increases the attack surface. IT Brew has reported on how the impulse to integrate the technology is sometimes coming at the expense of careful risk assessment. Securin CEO Srinivas Mukkamala described it as a “geometric” explosion, as opposed to exponential growth. By extending identities with governance and ownership to agents, users may be increasing their attack surface by multiples.
“When you look at human identities, we keep floating the numbers from a billion to two billion [users],” Mukkamala said. “Now, each one of us is extending that identity to 20 agents of ours to do our jobs.”
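The back-of-the-envelope arithmetic behind that concern, using Mukkamala’s rough figures, looks something like this (a toy illustration, not his calculation):

```python
# Rough arithmetic behind the identity explosion Mukkamala describes:
# every human identity delegated to ~20 agents multiplies the number
# of identities that need governance, ownership, and monitoring.
human_identities = 1_000_000_000   # low end of the cited estimate
agents_per_human = 20              # agents each user delegates to

# each human now accounts for their own identity plus their agents'
total_identities = human_identities * (1 + agents_per_human)
```

At the low end of the estimate, that is 21 billion identities to govern instead of one billion, before anyone adds a second account.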
Got each other. Melissa Ruzzi, AppOmni’s director of artificial intelligence, told IT Brew that she worries the push to put the technology into production may be moving too fast—the “easy and quick solution” of AI doesn’t mean a human shouldn’t remain in the loop.
“We should not just simply use agents for everything, and agents should not just be left alone making their own decisions for all kinds of different things,” Ruzzi said.
The defensive capacity that was in place for pre-agentic AI infrastructure isn’t enough. Brauchler noted that internal guardrails and safety filters put in place to let the technology police itself are no longer sufficient—and 2025 proved that.
“Organizations are finally starting to realize that we can’t just rely on AI to self-govern or police its own behavior,” Brauchler said.
Agentic AI presents opportunities, expanded threat surface
“Agentic AI will be very polymorphic, very shifty, and as such, our defenses need to evolve to tackle that new ilk of threat,” RSA CEO Rohit Ghai tells IT Brew.
Agentic AI—it’s the hot new thing in the tech space, and it drove much of the conversation at RSAC in April. But the promise of the technology, predictably, brings more threats and an expanded threat surface.
IT Brew was on the show floor at RSAC and we talked to industry leaders about the potential of agentic AI and the possibility that its adoption could introduce new problems, as well as solutions.
Clarity on topic. For those unfamiliar, agentic AI is a deployment of generative AI wherein LLMs are used to interact with users as “agents” and make autonomous decisions within a limited framework. The effect can seem like AI agents acting on their own, though there is some amount of human oversight.
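That “limited framework” can be sketched in a few lines of Python. This is a minimal, hypothetical agent loop—`call_llm`, the tool names, and the step cap are all stand-ins, not any vendor’s implementation—in which a model chooses actions only from a pre-approved toolbox:

```python
# Minimal sketch of an agent loop: an LLM picks actions from a
# limited, pre-approved toolbox. call_llm is a deterministic stub
# standing in for a real model call.
def call_llm(task, history):
    # Stubbed planner: a real deployment would query an LLM here.
    if not history:
        return ("scan_bill", "invoice.pdf")
    return ("done", None)

TOOLS = {
    "scan_bill": lambda arg: f"extracted text from {arg}",
}

def run_agent(task, max_steps=5):
    history = []
    for _ in range(max_steps):      # hard cap: bounded autonomy
        action, arg = call_llm(task, history)
        if action == "done":
            break
        if action not in TOOLS:     # limited framework: whitelist only
            raise ValueError(f"disallowed action: {action}")
        history.append((action, TOOLS[action](arg)))
    return history
```

The whitelist and the step cap are the “limited framework”: the model decides what to do next, but only from actions its operators have enumerated, and only for so many turns.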
But vigilance is crucial to ensure the information is monitored safely and that agents push out the right data. CrowdStrike Field CTO for the Americas Cristian Rodriguez told IT Brew that, in his view, an overall visibility strategy over the information involved is key to ensuring agentic AI avoids external manipulation.
“Data governance, data protection, visibility, IAM assessments are all really part of that strategy to ensure that someone’s not taking advantage of an agentic model,” Rodriguez said.
CrowdStrike works with companies like Nvidia, the full-stack AI provider that’s become famous for its chips. Nvidia is keenly interested in the capabilities of agentic AI, CSO David Reber told us, “whether that’s from new detection techniques, to agentic workflows for the SOC, to agentic workflows for automated pen testing, to just incident response capabilities.”
“If we have to bake anything into the stack to help them mitigate those into the future, we will do that,” Reber said. “We also collaborate with a lot of industry peers, sharing information on threats, sharing things that are happening.”
Leaderboard. All that assistance is helpful, but it doesn’t change the fact that agentic AI, simply by existing as a new factor in the stack, presents an expansion of the threat surface. RSA CEO Rohit Ghai was clear about the danger, telling IT Brew that it could be a “game changer” for attackers and part of a “polymorphic threat”—that is, a threat that shifts and moves rather than stays static.
“Agentic AI will be very polymorphic, very shifty, and as such, our defenses need to evolve to tackle that new ilk of threat,” Ghai said.
Still, he added, the tech could be deployed for the benefit of defenders by managing identity.
“AI as an attacker, AI as a defender, and then AI as an attack surface,” Ghai said. “Those are kind of the three dimensions of AI that we need to be thinking about.”
How protocols turn agentic AI into a game of telephone
“The protocol is there just to create the connection,” RSAC’s Petros Efstathopoulos tells IT Brew. But that can lead to issues.
AI agents are all the rage.
Advocates promise the new generation of autonomous software can make decisions and learn processes without the need for constant human oversight. That sounds great, but there are reasons for companies to hesitate before going all-in on this shiny new toy, no matter how much they think AI agents will streamline workflows.
AI agent protocols—how agents communicate with each other—are one concern that has Dr. Petros Efstathopoulos, RSAC VP of research, on the alert.
In an interview with IT Brew, Efstathopoulos explained the limitations of protocols, what users should look for as the technology continues to evolve, and why human oversight is needed.
This interview has been edited for length and clarity.
What can you tell us about protocols? What are some misconceptions?
This is an area that’s moving very fast, and it’s very early, so [researchers have] done as best they can. I’m sure that [the protocols] will improve with time. We’ve given these folks very little time to architect and help build these things, and I’m sure that with time, they will mature.
Now, these protocols have been designed in order to connect the agents. They’re essentially communication protocols at a very high level. The expectation that some people have is that these protocols would also help transform or help check the data that’s being transferred from one agent into another, as if the protocol is also responsible for the integrity and the correctness of the information that’s being relayed from one agent to another.
That’s actually not true.
Wait, the protocol doesn’t need to ensure that the information is accurate? Why not?
It’s not the protocol’s job to do that. The protocol is there just to create the connection. But having said that, when we have multiple agents that are connected back-to-back, it is often the case that when one of the agents makes a mistake, and that’s passed on to the next agent, the errors and the inaccuracies kind of pile up. And sometimes they get amplified. That’s not an inherent protocol problem, it’s just an inherent problem of connecting multiple agents together.
The fact that sometimes this doesn’t work very well in terms of the accuracy of the results is not the fault of the protocols. It’s just that stacking many agents together can exacerbate mistakes that the elements are making. The protocol itself is just the messenger, so to speak, between the agents.
So it’s kind of like a game of telephone, except there’s obviously no human involved in the middle—you could put something in, and if it’s slightly off, the result you get on the other end might be even further off. Is that what you mean?
Without these protocols, we cannot connect the agents and stack them together. We would only rely on a single agent, because you wouldn’t have a way to connect them. The protocols enable this stacking of the agents. It’s an enabling mechanism; the fact that now the agents are stacked and connected to one another leads to this game of telephone.
Imagine we were talking on the phone. I relay some information to you—say, ‘the price of Amazon stock today is XYZ’—and I was incorrect. You wouldn’t expect the phone company to know how to fix my mistake and change my words, right? That’s the equivalent: you can’t expect the protocols themselves to fix the information that’s relayed if that information is not correct. The protocol is there just as a communication medium.
But because the protocol is the enabling factor for stacking these agents, when bad things happen, it gets a little bit of the blame as well.
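The telephone effect Efstathopoulos describes can be simulated in a few lines. This toy model (entirely illustrative—the error rate and messages are made up) treats each agent as a relay that occasionally corrupts one detail, while the “protocol” is just a faithful function call between them:

```python
# Toy "game of telephone": each agent relays its input and, with some
# probability, corrupts it. The protocol (the plain function call)
# faithfully passes along whatever it is given—right or wrong.
import random

def agent(message, error_rate=0.3, rng=None):
    rng = rng or random
    if rng.random() < error_rate:
        return message + " (distorted)"  # the agent made a mistake
    return message                        # relayed unchanged

def chain(message, n_agents, seed=0):
    rng = random.Random(seed)             # seeded for reproducibility
    for _ in range(n_agents):
        message = agent(message, rng=rng)
    return message
```

With a per-agent error rate of p, the chance that at least one of n stacked agents distorts the message is 1 − (1 − p)^n—at 30% per agent, a five-agent chain passes the message through clean only about 17% of the time. Nothing in the relay layer can repair that; it can only be caught by checks outside the chain.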

What they’re saying: IT CEOs on agentic AI
CrowdStrike and Cognition’s CEOs sat down at Nvidia’s conference with other leaders to talk about agentic AI and their cybersecurity needs.
IT executives believe that agentic AI may help software engineers and developers build products with more agility. That was a conclusion from Nvidia’s recent GTC conference in Washington, DC, where a roundtable of Big Tech CEOs discussed not only the rise of agentic AI, but also the need to secure tech stacks against next-generation threats.
Alongside Cognition’s CEO and co-founder Scott Wu and CrowdStrike’s CEO George Kurtz, the roundtable included Aravind Srinivas, the CEO, co-founder, and president of Perplexity, and the CEO and co-founder of Abridge, Shiv Rao.
Step on the gas? Wu said that software engineers are simply faster if they’re “working with the best AI tools and doing the most with that.” In some cases, he suggested, one hour of an engineer’s time using the best tools “corresponds to about six to 10 hours without using the tools.”
Every engineering team has lots of projects to work on, Wu added, “but they have to choose four because that’s how things are with engineering…The ability to speed up and do a lot more is really exciting.”
When asked whether future generations of developer tools could make developers obsolete, Wu believes it will always be up to humans to decide what computers should do. “Every software engineer, what we all love doing as programmers is going on and automating processes, and figuring out how to make this part faster, and make this clear and simpler, and so on,” he said.
Rao added that Abridge's products, which include AI-generated notes for medical clinicians, are an example of technology allowing more person-to-person time between individuals and their medical practitioners. This not only benefits the patient, but also the healthcare practice, as clinicians are compensated based on care they document. AI-generated notes that are compliant with protocol and comprehensive for billing purposes can help “keep the lights on for the health system,” Rao said.
Step on the brake. With the rise of agentic AI, Kurtz said that cybersecurity has to “parallel the slope” of the technology innovation curve: “In every inflection point…you have to have security.”
According to Kurtz, one big challenge is AI enabling more sophisticated threats.
From Kurtz’s perspective, data is the key to solving “almost every security use case”—the more data an organization has, the more problems it can resolve. If that’s the case, Kurtz said, the only way for organizations to keep up is to supplement human security analysts with AI agents that, given enough data, can keep pace with threats.
“AI seems to be a good opportunity to deal with lots of that data,” Kurtz said. “What we try to do and what we see in the adversary universe is the time has dramatically been cut for the adversary to actually find vulnerabilities, exploit them, get in and pivot.”
But agentic AI isn’t a magic bullet for security, Kurtz said, adding that there isn’t one company or technology able to secure everything: “You’ve got to apply the right security technologies to each of those technologies, and then you gotta connect the dots across them.”
How CIOs feel agentic AI has changed their roles
More than half of surveyed CIOs say agentic AI has pushed them to improve their communication skills.
Like newlyweds-to-be reciting vows on their wedding day, CIOs believe agentic AI has changed them for the better.
According to Salesforce’s second annual CIO study, which queried 200 global CIOs, some 61% of CIOs claimed to have improved their leadership skills in preparation for agentic AI. More than half (57%) also said they boosted their storytelling and communication skills, and 55% said the same of their change management abilities.
In a Nov. 12 media briefing, Salesforce CIO Daniel Shmitt shared how his own storytelling skills have evolved because of the emerging technology. Shmitt was joined on the call by Adobe Population Health CIO Alex Waddell and DeVry University CIO Chris Campbell.
“A huge part of my job has become about enabling employees and helping them understand that AI can complement the work they do and isn’t here to just replace them, used correctly,” Shmitt said during the call.
Soft life. The findings come at a time when soft skills are seemingly having a moment in the broader tech industry. IT Brew previously reported that “communication” and “leadership” were keywords found in 49% and 23% of IT job postings, respectively. Technologists are also increasingly seeking out talent for AI-related roles who possess soft skills such as strong ethical judgment.
Moving on up. In addition to their boosted soft skills, CIOs feel more confident in their companies’ AI implementation progress. More than six in 10 (61%) say they are ahead of competition when it comes to AI, up from four in 10 (43%) last year.
Shmitt said establishing a good process and having a “clean data set” is what helped Salesforce scale AI within its organization.
“You have to have a solid process that you have faith in,” Shmitt said. “If we test, observe, modify, and repeat without really good, automated validation and a process to make that useful, I don’t think we would have come as far as fast as we did.”
Waddell said building trust in AI among his staff, which includes non-tech-savvy healthcare providers, and demonstrating the benefits of the tech was an important part of expanding it within Adobe Population Health: “If we didn’t get it, we wouldn’t be adopting AI.”
Agentic AI could be the savior of cybersecurity budgets
“So much of security tends to be reactive, and that’s always been the complaint,” one expert says.
With seemingly endless cybersecurity tools available to organizations, it can be difficult to determine what to invest in—whether tried and true tools or something newer like agentic AI. But with cybersecurity budgets in flux, experts say relying on agentic AI could prove a cost-efficient way to handle threats…with some caveats.
AI can help cybersecurity workflows with everything from natural language processing and data mining to predictive analytics and machine learning, according to CrowdStrike.
Experts like Loreli Cadapan, VP of product management at enterprise software delivery company CloudBees, said cybersecurity applications using agentic AI can reduce friction throughout code production. While she acknowledged the approach isn’t directly reducing infrastructure spend, it reduces cost from a “developers’ toil perspective.”
“There’s a lot less context switching on the developer side, and that also allows them to spend more time on innovation, rather than the mundane task of triaging build failures…or triaging a security vulnerability,” Cadapan said.
Kara Sprague, CEO of HackerOne, said she believes it’s smart to employ agentic AI in a cybersecurity context, “because we have surpassed the point where humans can scale to the level needed for security.”
However, it’s important for IT professionals to ensure AI-supported cybersecurity has proper guardrails in place. “A lot of times, those humans are going through some sort of background check, some sort of verification, there’s organizational data and information sharing policies that that human has to comply with,” Sprague said. “We need to think about similar controls around the AI agents that we’re implementing.”
How do we solve a problem like no money? The Institute for Applied Network Security (IANS) reported that organizations scaled back security budgets in 2025. Average annual security budget growth dropped to its lowest point in five years, at 4%—down from 8% in 2024.
When funding levels drop, Vidya Shankaran, field CTO of emerging technologies at Commvault, says that a strong business case needs to be made for cybersecurity solutions.
“It’s not as straightforward as saying, my antivirus tool is called, it’s going to cost me XYZ, so I’m just going to go all guns blazing and invest in the cheapest product,” Shankaran said. “Because sometimes that may not [be] correct, probably it does not meet all the risk parameters that you’re looking to offset within your organization.”
AI, can we shift left? Cadapan said that if an IT team isn’t focused on increasing efficiency in the project timeline, it gets more expensive to fix issues from a security and compliance perspective.
“With the rise of AI, more developers are actually producing code, leveraging AI…and many other tools out there that [are] available for developers today,” Cadapan said. “Where I’m seeing and hearing from customers, from developers out there, is the shift in mindset of, ‘How can we leverage AI to reduce this friction on the code review, on the testing, all the way down to the production perspective?’”
Taking the bad with the good. Cybersecurity pros can’t realistically check every sensor or detection tool every single day—or if they did, it would consume too much time. Erin McFarlane, VP of operations at Fairmarkit, suggested that agentic AI can give an organization a guard who works around the clock.
“[AI] can really give you a best-practices view of whatever it is that you’re looking at from an agentic perspective,” McFarlane said. “So much of security tends to be reactive, and that’s always been the complaint, that we’re preparing for something after it happens rather than… for sensing various signals.”
In McFarlane’s opinion, AI should increase organizations’ security budgets because it presents new risks, even as it protects IT infrastructure from bad actors.
Sprague said that those who are looking to implement agentic AI into a cybersecurity stack on a budget should examine existing tools and ensure that the enterprise is taking advantage of the full set of capabilities already available.
“I would advocate for security leaders to be doing some amount of zero-based budgeting and really to…do an assessment of where are the biggest risks across my organization, and am I invested appropriately across my security tools and practices and solutions to mitigate those risks,” Sprague said.
How to shop for software in the era of agent-washing
Gartner says agent-washing is now commonplace in the tech industry.
A pig in lipstick is still a pig, a mouse in armor is still a mouse, and a piece of non-agentic software marketed as agentic but with no real agentic capabilities is still…non-agentic software.
Tech leaders shopping for new software are encountering a new spin on the not-so-new problem of AI washing: agent-washing.
Agent-washing, you say? Gartner defines agent-washing as products “inaccurately labeled or rebranded as AI agents or agentic AI,” causing confusion for customers. The research firm said the practice is currently “commonplace” within the industry.
Philip Carter, general manager and group VP for IDC’s AI, data, and automation research practice, said agent-washing is the “biggest gap ever between vendor hype around an emerging technology…versus organizational readiness to adopt” the technology.
Part of the issue, according to Carter, is that vendors are quickly embracing agentic AI strategies, leaving some buyers overwhelmed with the amount of options in the marketplace and different pricing models. Adding to the complexity: There are several different types of agents, each with different levels of capabilities. An AI assistant, for example, can answer questions and perform basic tasks. Meanwhile, AI agents can autonomously perform different tasks on behalf of a user, and may even use persistent memory to improve their performance.
“It’s also exacerbated by the fact that there are a range of other agentic terms that are being thrown out there: agent fleets, agent swarms, headless agents, ambient agents, all doing a slightly different type of thing,” Carter said.
How to sniff the funk. Rivka Gewirtz Little, chief growth officer at digital identity verification company Socure, said tech professionals can detect faux agents by first determining what type of AI functionality a piece of software is offering.
“What are we really having here?” Little said. “Is it an assistant or an agent? That’s one.”
After professionals understand the vendor’s proposed functionalities and whether those are truly agentic, they must figure out whether the agentic AI can actually solve their problems.
“Are they an agent? Sure?” Little said. “Are they effective? Maybe for two or three parts of the workflow, and then…you get to part five, which is critical to the ultimate outcome, and it’s a terrible result.”
Rory O’Brien, VP of client experience at orchestration platform company Tonkean, said leaders can do this by checking customer references or even turning to testimonials on platforms like Reddit.
“References are big,” O’Brien said. “If they’re saying they do a lot of these things, definitely talk to a reference and then get very specific on the use case that they solved.”
O’Brien said potential buyers should also have conversations with vendors about the composition of their tech stack.
“I would also ask if I’m talking to a vendor, ‘What is your IP? What are you going to go get a copyright for at your organization?’” O’Brien said. “If they can’t answer anything or if they just say, ‘We have really good prompts,’ that probably doesn’t have enough depth.”
Ripple effect. It’s important for organizations to dodge agent-washed purchases because of the unintended consequences it may have down the line, such as agent sprawl, according to Carter.
“Once you’ve got agent sprawl, you potentially have a major cost problem,” Carter said, “because you’re paying for a whole bunch of agents that you’re not necessarily using, or you are using but you’re not seeing the value from them or the value that you expected.”

Agentic AI is changing workflows as executives urge human-first approach
“We’re telling people, your work is changing, the world is changing,” one executive tells IT Brew.
Agentic AI in the workplace is changing how people do their jobs—and changing what’s expected.
That’s not necessarily a bad thing, but it depends on how the changes are applied, as Appfire CTO Ed Frederici told IT Brew. Appfire prefers a human-centric approach, Frederici said, but that’s not necessarily true across the industry. Some tech leaders are using the implementation of agents to avoid giving the real reason for layoffs.
“We had a long period of time where companies overhired and overstaffed and put themselves in a position where their cost model was untenable, and you see a correction occurring as they let people go,” Frederici explained. “It’s a convenient way for companies to kind of hide the fact that they made poor hiring decisions.”
Change is coming. You can see the change in a June KPMG survey with 87% of tech leaders saying they “think agents will require organizations to redefine performance metrics, and will also prompt organizations to upskill employees currently in roles that may be displaced.” Edwige Sacco, KPMG’s head of workforce innovation, told IT Brew that she sees the change in metrics as a positive development that can adjust expectations for the better.
“We’re telling people your work is changing, the world is changing, the workforce, the workplace is changing,” Sacco said. “We’re going to do everything we can to get you there, but we need you on this journey with us.”
To adjust those expectations in the most human-centric way, Sacco continued, the workforce will have to change how it operates with respect to AI; looking at team metrics and how people work with agents, rather than at individual accomplishments, will be key. But the first step is adoption.
“It’s about getting our people first to welcome the entity into their lives—be it some combination of personal and, of course, in the workplace—and then giving them the skills, the confidence, the language that they need to develop a healthy relationship,” Sacco said.
Some companies, like Cisco, want to take advantage of the “gold rush” opportunity agentic AI offers. At this year’s Cisco Live conference, the firm’s president and CPO Jeetu Patel said Cisco aims to be “the infrastructure company that powers AI during the agentic movement.”
Human-first approach. Appfire’s approach is a good example of human-centric AI adoption, Frederici said: the company uses agentic AI to augment existing staffers and made a conscious decision not to replace employees with agents.
“Every response that is generated outside of the app context that uses AI still involves the human being,” Frederici said. “One of the benefits it’s had for us is in a standard world, a single support agent can be a master of about five applications.”
While things could change in the future, for now Frederici believes the best way for AI to function within the workplace is alongside a human operator. Businesses will have to weigh what’s most important.
“You have a corporate responsibility to both your customers and your employees to ensure that, as you adopt AI, you do it in a way that provides the greatest value to both of those constituencies,” Frederici said. “And your ultimate role as a business is to provide the best value to your client, and today that is still 100% including the human being.”
How an AI pro puts ‘handbrakes’ on agentic decisions
Slowing down an LLM’s decision making is very important, Valtech Chief Data Scientist Richard Bownes tells us.
Autonomy is a funny thing. On a good day, an assistant’s autonomous decisions can save time. On a bad day, a toddler’s autonomy can lead to 500 purchases of Elmo merch.
For Richard Bownes, chief data scientist for Europe at consultancy Valtech, deploying generative AI for clients requires “handbrakes”—mechanisms that slow down an agent’s otherwise speedy decision making to ensure its choices don’t lead to catastrophe.
Bownes recently helped a public services org—one dealing with people’s finances, he said—to use GenAI to “speed up their case management.”
“It would be inconceivably irresponsible to create a system in which decisions could be made by a public services organization, in the name of the public good, that would make up an answer, that was actually an artifact of hallucination or bias, or just a model not really understanding the question,” Bownes said.
What’s an agent? Bownes’s definition of an agent sounds more like a thermostat, at least on the surface. It is, according to Bownes, “a model which can take an action within its environment, with some level of autonomy.”
Agentic actions today could include the drafting of a meeting agenda, monitoring machinery to predict failures, or providing personalized recommendations. And plenty of vendors, from Microsoft to Salesforce to Google, are getting in on the agentic action.
Them’s the brakes. At the heart of an agent is an LLM, which orchestrates and outputs a step-by-step plan from an input prompt.
To prevent rogue AI agents from causing irresponsible consequences, Bownes recommends sequestering the outputted data for review before it influences decisions or customers. Once isolated, consider these handbrakes, he said:
- Constrained autonomy: Agent owners should pre-define critical functions that must be shown to a human before executing, Bownes recommended in a follow-up email. In the case of a booked flight (or ordered Elmo, for that matter), for example, an agent should not execute the purchase without first showing you the prices.
- The four eyes principle: To avoid biases in inputs and outputs, engineers should call out to an additional model, Bownes said.
- Modularity: There are fewer opportunities for errors when a task like “book flight” gets broken down into component actions that are their own little agents: One for getting location data, one for finding the nearest airport, one for checking available flights for a desired day, and one for selecting a seat. Breaking the main action down into subtasks will improve workflow execution, Bownes told us.
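The first handbrake, constrained autonomy, reduces to a small amount of code. This is a sketch under our own assumptions—the action names, payload shape, and approval callback are hypothetical, not Valtech’s implementation:

```python
# Sketch of the "constrained autonomy" handbrake: actions on a
# pre-defined critical list are held for human sign-off before they
# execute; everything else runs autonomously.
CRITICAL_ACTIONS = {"purchase", "book_flight"}

def execute(action, payload, approve):
    """approve is a callable that presents the payload to a human
    reviewer and returns True only if they sign off."""
    if action in CRITICAL_ACTIONS and not approve(payload):
        return ("held", payload)      # sequestered for review
    return ("executed", payload)

# the agent must surface the price before any booking goes through
result = execute("book_flight", {"price_usd": 450},
                 approve=lambda p: p["price_usd"] < 300)
```

The design choice is that the gate lives outside the model: the agent can propose a $450 flight, but the purchase only happens once a human (here simulated by the lambda) has seen the price and agreed.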
A 2024 Q4 report from Deloitte, released in January 2025, found that 26% of 2,773 business leaders said their orgs “were already exploring autonomous agent development to a large or very large extent.” Some companies recently shared with Business Insider how they’re using agentic capabilities for tasks like drafting a personalized reminder for invoices or crafting customer-support responses.
“There’s going to be a lot of discovery in what the appropriate balance is between letting the agents do their thing, while having human intervention and oversight,” Chanley Howell, partner and intellectual property lawyer with Foley & Lardner, told us.
When considering the critical importance of adding handbrakes to an AI model, Bownes recalled an adage often attributed to a 1970s IBM training manual: "A computer can never be held accountable. Therefore, a computer must never make a management decision."
“That’s extra true with autonomous agents because they are, in some ways, enabled to make decisions through their autonomy, because they can call tools which can have an action.”
Why your agentic AI still needs a human in the loop
Agentic AI systems are now able to understand an environment, act on behalf of a user, and reason with newer models and a matured Model Context Protocol.
While there’s no doubt agentic AI is one of the hottest trends affecting IT pros, the technology presents serious risks for companies looking to employ the tech and take humans out of the loop as a means to automate workflows or reduce the size of their teams.
Experts point to lack of visibility as a reason humans need to check on and be involved with agentic AI, especially when it is able to make independent decisions and drive a set of outcomes.
Thomas Squeo, the CTO for the Americas at Thoughtworks, called the first generation of agents “relatively simplistic” and said that providers were not typically able to offer them at scale in a production enterprise environment.
Human in the loop. The standard right now, Squeo said, is keeping a human in the loop so that an agent isn’t running through an activity on an ongoing basis without human intervention.
If the AI strikes a guardrail, the agent can send a request for a human to take action, based on its confidence level in what the model is about to do.
Squeo offered an example of a billing system: A paper bill comes in, and the agent performs optical character recognition (OCR) analysis on the document, converting the image into machine-readable text, but has only 80% confidence in the result, too low for it to send the bill along automatically.
“You might send that along into a queue for customer service…or somebody that’s in client support that goes and looks at that and says, ‘Yes, this agent was correct,’” Squeo said. “That behavior, when the human in the loop says, yes, it’s correct, it might then say, ‘I can increase my confidence on that case going forward.’”
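Squeo's confidence gate can be sketched as a simple router. This is a hypothetical illustration, not Thoughtworks code: extractions below a review threshold go to a human queue, and the names, threshold, and data are invented for the example.

```python
# Hypothetical sketch of a confidence-gated human-in-the-loop router:
# low-confidence OCR results are queued for a person instead of auto-sent.

REVIEW_THRESHOLD = 0.90  # Assumed cutoff; a real system would calibrate this.
human_queue: list[dict] = []

def route_ocr_result(text: str, confidence: float) -> str:
    """Auto-send high-confidence extractions; queue the rest for review."""
    if confidence >= REVIEW_THRESHOLD:
        return "auto-sent"
    human_queue.append({"text": text, "confidence": confidence})
    return "queued-for-review"

# An 80%-confidence extraction, like Squeo's example, lands in the queue.
status = route_ocr_result("Invoice #4471, total $312.09", confidence=0.80)
```

The feedback loop Squeo describes, where a human confirmation raises the agent's confidence for similar cases going forward, would sit on top of this: each "yes, this was correct" verdict feeds a calibration step that can adjust the threshold per case type.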
Now, with newer agents and the maturation of Model Context Protocol (MCP), the connection between agents and data sources, agents are able to understand the environment, act on behalf of a user, and use reasoning skills.
Squeo said that agents should operate within guardrails, and recommended that IT teams use an observability system to report when an agent strikes against the rails.
“In some cases, [the agent] might strike and it might be fine,” Squeo said. “It might be a strike because of the way that the rules for how that agent is operating is causing it to operate against its criteria for being able to be governed.”
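The observability Squeo recommends can be sketched as a guardrail checker that logs strikes rather than silently blocking or allowing them. This is a minimal hypothetical example, with invented rules and action fields, not a description of any vendor's product.

```python
# Hypothetical sketch of guardrail observability: every agent action is
# checked against rules, and any "strike" is logged for human review.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrails")

# Assumed example rules; a real deployment would define its own.
GUARDRAILS = {
    "max_refund": lambda action: action.get("refund", 0) <= 100,
    "allowed_tools": lambda action: action.get("tool") in {"search", "draft_email"},
}

def check_action(action: dict) -> list[str]:
    """Return the names of any guardrails this action strikes."""
    strikes = [name for name, rule in GUARDRAILS.items() if not rule(action)]
    for name in strikes:
        # Observability: report the strike; a human decides if it's benign.
        log.warning("guardrail strike: %s on action %s", name, action)
    return strikes

strikes = check_action({"tool": "issue_refund", "refund": 250})
```

Logging the strike instead of hard-blocking it matches Squeo's point that some strikes "might be fine": the record gives a human the evidence to decide whether the rule or the agent needs adjusting.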
Black box. Cyber professionals typically divide what should be secured into layers. Alon Diamant-Cohen, principal consultant for Stratascale’s Hybrid Cloud Security team, pointed to the Open Systems Interconnection (OSI) model, a seven-layer framework spanning the application, network, and infrastructure layers that security must cover, and said those layers meld into one when an organization introduces agentic AI.
Diamant-Cohen said that an IT pro must figure out how to secure all of the layers in a tech stack at once. For experts, this can look like establishing governance or rethinking how policies are constructed.
Within the layers is the network layer, where MCP exists for agentic AI. Diamant-Cohen said the concept is “awesome, because you get all these new capabilities out of your tools when you integrate them. But there’s not a lot of consideration to the fact that you’re adding some kind of permanent infrastructure and you need to secure it.”
“A lot of MCP configurations have some black box elements to them, where you just can’t see what it’s doing,” Diamant-Cohen said.
The “most terrifying” aspect of this process, according to Diamant-Cohen, is the network layer, specifically as it pertains to the connectivity between agents.
“You’re essentially standing up a node with outbound and inbound connectivity to your AI agent,” Diamant-Cohen said. “Which is trained on all your sensitive data that has some black-box elements that you do not have full visibility into, or you can’t fully understand why it made the decisions it did.”
Yes, agentic AI offers some stunning possibilities for IT pros—who wouldn’t want a super-intelligent companion handling the complex drudgery that underlies so many workflows? However, it’s still very early days for this technology, with many questions still unanswered. If you’d like to know more about how AI is impacting every part of the IT world, check out IT Brew’s comprehensive articles on everything from AI-based cybersecurity to the code-writing potential of generative AI.
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
By subscribing, you accept our Terms & Privacy Policy.