
2025 became the year of agentic AI

“Organizations are finally starting to realize that we can’t just rely on AI to self-govern or police its own behavior,” a technical director says.


Eoin Higgins is a reporter for IT Brew whose work focuses on the AI sector and IT operations and strategy.

In 2025, AI was halfway there—then agentic was living on a prayer.

You can roughly split the year in half between the pre- and post-agentic AI eras. Agentic AI wasn’t invented this year, but ever since IT Brew attended the RSAC Conference in late April, it has seemed like every organization is promoting its use of the technology.

NCC Group Technical Director David Brauchler told IT Brew that “agentic” was the word of the year as systems grew more complex and capable. The technology is acting as a force multiplier for organizations, but questions remain about risk.

“We’re seeing a change from isolated use cases where we drag and drop AI into some broader application or system into using AI to power functional operations that we couldn’t do with traditional technologies,” Brauchler said. “That being said, you have a lot of security risks and concerns that come along with that.”

Hold on to what we’ve got. The evolution of the technology has been less chaotic than expected, Globant SVP of Digital Innovation Agustín Huerta told IT Brew. In Huerta’s view, the relatively streamlined, almost open-source way agentic AI developed in 2025 shows the benefits of cooperation. That’s clearest in the Model Context Protocol (MCP), an open standard that connects LLMs to outside data sources and tools and is giving companies common ground.

“It’s like they are not competing with each other anymore,” Huerta said. “In that sense, in terms of creating protocols, they are embracing the one that has the idea first and the approach for that, and they understand that the true future for the evolution of agents is that those protocols keep evolving and keep being embraced by as many players as possible.”
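To make the shared-protocol idea concrete, here’s a rough sketch of what an MCP exchange looks like on the wire. MCP rides on JSON-RPC 2.0, and the method names below follow the published spec; the “lookup_ticket” tool, its schema, and the ticket ID are hypothetical, invented purely for illustration.

```python
import json

# Minimal sketch of MCP's JSON-RPC 2.0 message shapes (illustrative only).

# The client asks the server which tools it exposes.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server might answer with a catalog like this: each entry names a tool,
# describes it, and declares its expected arguments as a JSON Schema.
list_tools_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "lookup_ticket",  # hypothetical example tool
                "description": "Fetch a support ticket by ID",
                "inputSchema": {
                    "type": "object",
                    "properties": {"ticket_id": {"type": "string"}},
                    "required": ["ticket_id"],
                },
            }
        ]
    },
}

# When the model decides to use the tool, the client sends a call like this.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lookup_ticket",
        "arguments": {"ticket_id": "TCK-1042"},  # made-up ID for the sketch
    },
}

print(json.dumps(call_tool_request, indent=2))
```

Because every vendor speaks the same request shape, an agent built against one MCP server can, in principle, call tools from another without bespoke glue code, which is the common ground Huerta is describing.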

Yet this approach may have negative side effects. There’s no doubt that agentic AI expands the attack surface, and IT Brew has reported on how the impulse to integrate the technology sometimes comes at the expense of careful risk assessment. Securin CEO Srinivas Mukkamala described the growth as a “geometric” explosion: by extending identities, along with their governance and ownership, to agents, users may be multiplying their attack surface.

“When you look at human identities, we keep floating the numbers from a billion to two billion [users],” Mukkamala said. “Now, each one of us is extending that identity to 20 agents of ours to do our jobs.”

Got each other. Melissa Ruzzi, AppOmni’s director of artificial intelligence, told IT Brew that she worries the technology is being pushed to production too quickly; the “easy and quick solution” of AI doesn’t mean a human shouldn’t remain in the loop.

“We should not just simply use agents for everything, and agents should not just be left alone making their own decisions for all kinds of different things,” Ruzzi said.

The defensive capacity built for pre-agentic AI infrastructure isn’t enough. Brauchler noted that internal guardrails and safety filters meant to let the technology police itself are no longer sufficient, and 2025 proved it.

“Organizations are finally starting to realize that we can’t just rely on AI to self-govern or police its own behavior,” Brauchler said.
