AI agents are all the rage.
Advocates promise the new generation of autonomous software can make decisions and learn processes without the need for constant human oversight. That sounds great, but there are reasons for companies to hesitate before going all-in on this shiny new toy, no matter how much they think AI agents will streamline workflows.
AI agent protocols—how agents communicate with each other—are one concern that has Dr. Petros Efstathopoulos, RSAC VP of research, on the alert.
In an interview with IT Brew, Efstathopoulos explained the limitations of protocols, what users should look for as the technology continues to evolve, and why human oversight is needed.
This interview has been edited for length and clarity.
What can you tell us about protocols? What are some misconceptions?
This is an area that’s moving very fast, and it’s very early, so [the protocol designers have] done the best they can. We’ve given these folks very little time to architect and help build these things, and I’m sure that, with time, the protocols will mature and improve.
Now, these protocols have been designed to connect the agents. They’re essentially communication protocols at a very high level. The expectation some people have is that these protocols would also help transform or check the data being transferred from one agent to another, as if the protocol were also responsible for the integrity and correctness of the information being relayed between agents.
That’s actually not true.
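To make that boundary concrete, here is a minimal sketch of the idea (hypothetical code, not drawn from the interview or from any real agent protocol specification): a bare-bones envelope that a protocol layer might carry between two agents. Delivery checks that the message is well formed, but it has no notion of whether the payload is true.

```python
# A minimal, hypothetical sketch of what an agent-to-agent protocol layer does.
# This is illustrative only, not any real specification. Note that delivery
# validates the envelope's structure, never whether the payload is accurate.
from dataclasses import dataclass

@dataclass
class Envelope:
    sender: str     # which agent produced the message
    receiver: str   # which agent should consume it
    payload: str    # the content being relayed, taken at face value

def deliver(env: Envelope) -> str:
    # Structural checks only: the protocol confirms the message is well formed...
    if not env.sender or not env.receiver:
        raise ValueError("malformed envelope")
    # ...but it has no way to know whether the payload itself is wrong.
    return env.payload

# An agent can relay a factual error, and the protocol passes it along intact.
msg = Envelope(sender="research-agent", receiver="summary-agent",
               payload="Amazon stock closed at $9,999 today")  # inaccurate on purpose
print(deliver(msg))  # delivered faithfully, accuracy unchecked
```

In this framing, any fact-checking would have to live in the agents themselves, not in the messenger.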
Wait, the protocol doesn’t need to ensure that the information is accurate? Why not?
It’s not the protocol’s job to do that. The protocol is there just to create the connection. But having said that, when we have multiple agents connected back-to-back, it is often the case that when one agent makes a mistake and that mistake is passed on to the next agent, the errors and inaccuracies pile up. Sometimes they get amplified. That’s not an inherent protocol problem; it’s an inherent problem of connecting multiple agents together.
The fact that sometimes this doesn’t work very well in terms of the accuracy of the results is not the fault of the protocols. It’s just that stacking many agents together can exacerbate the mistakes that the individual agents make. The protocol itself is just the messenger, so to speak, between the agents.
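A toy simulation can illustrate the compounding Efstathopoulos describes. The sketch below is purely illustrative (no real agents or protocols are involved, and the 2% error rate is an arbitrary assumption): each hypothetical "agent" relays a numeric fact with a small random distortion, and the drift from the original value tends to grow as the chain gets longer.

```python
# Illustrative sketch only: hypothetical "agents" that each relay a numeric
# fact with a small relative error. The point is that a chain of agents can
# amplify inaccuracies that the individual links introduce.
import random

def agent(value: float, error_rate: float = 0.02) -> float:
    """A stand-in for one agent: relays a value, possibly distorting it slightly."""
    return value * (1 + random.uniform(-error_rate, error_rate))

truth = 100.0  # e.g., a stock price the first agent looked up
value = truth
for hop in range(1, 6):  # five agents connected back-to-back
    value = agent(value)
    drift = abs(value - truth) / truth * 100
    print(f"after agent {hop}: {value:.2f} ({drift:.2f}% off)")
```

The protocol carrying these values is doing its job perfectly; the inaccuracy comes entirely from the agents it connects.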
So it’s kind of like a game of telephone, except there’s obviously no human involved in the middle. You could put something in, and if it’s slightly off, the result you get on the other end might be even further off. Is that what you mean?
Without these protocols, we cannot connect the agents and stack them together. We would have to rely on a single agent, because there would be no way to connect them. The protocols enable this stacking of the agents. It’s an enabling mechanism; the fact that the agents are now stacked and connected to one another is what leads to this game of telephone.
Imagine we were talking on the phone. I relay some information to you, you know, ‘the price of Amazon stock today is XYZ,’ and I was incorrect in what I said. You wouldn’t expect the phone company to know how to fix my mistake and change my words, right? That’s the equivalent: you can’t expect the protocols themselves to fix the information that’s relayed if that information is not correct. The protocol is there just as a communication medium.
But because the protocol is the enabling factor for stacking these agents, when bad things happen, it gets a little bit of the blame as well.