AI shoppers open the door to a world of uncertainty
“The technology is too immature to actually use its scale successfully and securely right now,” tech expert says.
• 3 min read
Eoin Higgins is a reporter for IT Brew whose work focuses on the AI sector and IT operations and strategy.
Holiday shopping is here and everyone’s looking for help—and with some people turning to AI, there’s a new security concern under the tree.
AI shoppers are growing in usage and importance as consumers try to automate the boring and relentless work of finding just the right product. AI personal assistant technology isn’t expected to reach mass market level until 2026, according to a new analysis from IEEE, but it’s already becoming an important element of the online retail experience.
The way it works. IEEE Senior Member Kayne McGladrey told IT Brew that, in theory, an AI shopping agent would be able to handle purchasing if given enough information. But that hints at the underlying security concern: If an agent has your payment information, personal details, and access to email, it opens the door to greater threats. And attackers are taking notice.
“I’ve seen working concepts where the AI will get tricked into not only finding the wrong object, but getting the credit card information from you and sending that credit card information off to whoever’s hosting the fake scam object, and taking your bank account and collecting those credentials too, because it’s got access to all of that,” McGladrey said.
Threat actors and defenders alike are increasingly reliant on AI for managing the cybersecurity landscape. For those on the side of the angels, that means using the technology to streamline incident reporting and detection. But AI isn’t always reliable; the technology can open the door to exploitation solely due to its level of access.
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
Purchase power. Whether attacks on AI shoppers count as a cybersecurity threat or just old-fashioned fraud, the vulnerability is real, and while there’s potential for AI agents to be eventually deployed to automate any number of everyday tasks, there are very real roadblocks to that future.
“The technology is too immature to actually use its scale successfully and securely right now, and I think until we have some unfortunate outcomes, there’s no real economic incentive for the people who are making the AIs to make them more resilient,” McGladrey told IT Brew.
There’s also the concern that agents could be used to infiltrate sites and attack internal systems. With an influx of shopper agents, it’s hard to sort the real from the fraudulent, meaning that defenders need to be vigilant. IT teams have to stay on top of the danger and limit the damage.
“We can take the same controls that we use for malicious, hostile traffic and apply it to those bots because otherwise they will possibly either overload websites that don’t have enough capacity to create mini DDoS, distributed denial service-style attacks, which is not good for anyone,” McGladrey said. “Having your website go down because AI has decided that it’s going to take over is not a good outcome.”
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.