Careful with that AI, Eugene.
That’s the message IT teams are trying to get across to their companies as problems mount with the technology, which offers a wealth of convenience but makes it harder to integrate the tool into workflows while protecting sensitive information. AI is being deployed in sectors across the economy, making its correct application and security an increasingly important concern.
Threat actors are noticing, and that has opened the door for unique attacks. It’s adding to what some in the cybersecurity industry are referring to as the attack atmosphere, a spin on the term attack surface that acknowledges and prioritizes how the threat has evolved.
Learning to fly. Amid budget cuts and a push for efficiency, AI integration is an increasingly common focus for IT teams. But rushing the process could mean “building a house of cards,” as Brian Fox, Sonatype co-founder and CTO, told IT Brew.
“How do I limit the damage, limit the scope of what these things can do?” Fox said. “Maybe that means keeping some humans in the loop on some of these things and not just giving it free rein to take automatic actions.”
Another brick. Researchers are trying to get ahead of the dangers of expanded AI integration by testing tactics adversaries might use, and they are finding that AI can pose a threat to the very organizations that deploy it.
One group, drawn from the University of Illinois at Urbana-Champaign, Boise State University, and Intel, found that it was possible to jailbreak AI models by flooding them with information through a tactic they dubbed “information overload.” Researchers implanted commands into complex, jargon-heavy queries that appeared to overwhelm the AI models and open the door to exploitation.
“This attack reveals a fundamental vulnerability in AI alignment systems, where the model’s decision-making under highly complex information load leads to misclassification of harmful intent,” the researchers wrote.
Us and them. Audrey Adeline, a member of SquareX’s founder’s office, told IT Brew that jailbreak attacks like the ones deployed in the research paper are worrying—but that the biggest danger for organizations using AI comes from within.
“That has to come from a malicious insider threat, where you’re jailbreaking the approved AI agent within the organization,” Adeline said. “I think the larger threat is employees that are well-intended are trying to use AI agents for legitimate use cases…but then, unintentionally, because these AI agents are so unpredictable they go ahead and perform something malicious.”