Cybersecurity

AI increasingly a weak link in security, research finds

“The key problem here is the agent is having the same privilege level as a user,” SquareX exec says.

[Image: a human hand touching a robot's hand through glass. Credit: Just_super/Getty Images]

You are the weakest link, AI.

As concerns grow about how the technology is being applied across industries and how it could be exploited, recent research from both Netcraft and SquareX has revealed the extent to which some LLMs and agents are vulnerable to attack—and it’s worse than you might think.

Audrey Adeline, a member of the SquareX Founder's Office, told IT Brew that AI isn't capable of making snap security judgments. Because the technology is trained to complete tasks, it will keep completing them without any awareness of security risks, a fundamentally different way of processing information from human intelligence.

“We have these browser AI agents that are acting on the behalf of users, and at least for employees, even though they’re the ‘weakest link,’ they do receive some sort of security training,” Adeline said.

Undetected. SquareX found that when AI agents, which a PwC AI Agents Survey found are used by 79% of organizations, are granted permissions as if they were human beings, the potential for exploitation grows. As software robots, they can't judge each situation on its merits, so errors can go uncorrected until human oversight catches them. The core problem is that an agent may reveal information or expose data without telling its operator, leaving a gap in awareness that can become a security flaw.

“The key problem here is the agent is having the same privilege level as a user,” Adeline said. “For example, if I told an AI agent to do something, they literally have access to every single enterprise app, every single password that I as a user have access to.”

Bad directions. The problem isn't isolated to AI agents. According to Netcraft research, roughly one-third of the time, LLMs respond to queries for sign-in URLs with fake or compromised sites rather than the correct logins. By the numbers: across six LLMs tested, 66% of suggested domains belonged to the correct site, while 29% "were unregistered, parked, or had no active content," and a further 5% belonged to unrelated brands. That potentially creates problems for companies as attackers look to take advantage of the opportunity, research author Bilaal Rashid told IT Brew.

Top insights for IT pros

From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.

“The big multinational brands, the LLMs are a lot more reliable—our belief is that they included a lot more of the training data, so the LLMs were less likely to hallucinate the login sites and the domains that were owned by those brands,” Rashid said. “But for those smaller ones, those kind of more national ones which are still big in their own right, that’s really where the LLMs are starting to jump and fall.”

For IT teams, the best move is to focus on raising awareness of the danger, according to Rashid, who noted that any real solution will have to prioritize guardrails that sanitize LLM outputs. Mitigating malicious phishing at the URL level is a good starting point, but the onus is also on affected companies to clean up their own house.
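The kind of URL-level guardrail Rashid describes can be sketched as a simple allowlist check before an LLM-suggested login link ever reaches a user. This is an illustrative sketch, not any vendor's implementation; the brand names and domains below are hypothetical, and a real deployment would draw on a verified, maintained registry of official domains:

```python
from urllib.parse import urlparse

# Hypothetical allowlist mapping brands to their official login domains.
# In practice this would come from a verified, continuously updated registry.
OFFICIAL_LOGIN_DOMAINS = {
    "examplebank": {"login.examplebank.com", "examplebank.com"},
}

def is_trusted_login_url(brand: str, suggested_url: str) -> bool:
    """Return True only if an LLM-suggested login URL's host exactly
    matches, or is a subdomain of, an allowlisted domain for the brand."""
    host = urlparse(suggested_url).hostname or ""
    allowed = OFFICIAL_LOGIN_DOMAINS.get(brand.lower(), set())
    return any(host == d or host.endswith("." + d) for d in allowed)

# A look-alike domain (the kind of parked or unrelated site Netcraft
# found LLMs returning) fails the check; the real domain passes.
print(is_trusted_login_url("ExampleBank", "https://examplebank-login.com/signin"))  # False
print(is_trusted_login_url("ExampleBank", "https://login.examplebank.com/"))        # True
```

The key design point is that the check is deny-by-default: any brand or host not explicitly on the list is rejected, so a hallucinated domain can't slip through simply because it looks plausible.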

“There’s a responsibility from the brands as well to try and make sure that they are pushing their content and that their benign content is there,” Rashid said, adding that companies also need to combat phishing sites as they pop up.

Looking ahead. When it comes to managing the agentic security threat, Adeline recommended browser-native detection in the immediate term, which can stop an agent from carrying out malicious activity. Creating sub-identities for agents can restrict their permissions and actions. And in the long term, she hopes technology will be able to differentiate between human users and agents in real time.

“Now this is quite a technological challenge, so it might take a while, but we’re hoping that research like ours will incentivize them to do something, because I think this is really the future of internet browsing,” Adeline said.
