Aitor Diago/Getty Images
AI may be complex, but its attackers don’t have to be. Hackers can mess with machine-learning models using a simple malicious indirect prompt here, or a tiny dataset modification there.
As organizations begin deploying artificial intelligence and machine-learning systems, a panel at April’s RSA Conference in San Francisco stressed the importance of making them resilient against attacks that have been fairly basic…so far.
“The malicious actors in this space have a lot of room even to evolve. But they don’t actually need to yet to take advantage of these vulnerabilities of our systems, which is why we’re seeing so many low-level-of-sophistication attacks be successful,” said Christina Liaghati, AI strategy execution and operations manager for MITRE’s AI and Autonomy Innovation Center.
Many attackers are “poking” at AI models, Liaghati told the RSA audience.
Liaghati spoke at the presentation titled “Hardening AI/ML Systems—The Next Frontier of Cybersecurity,” along with Bob Lawton, chief of mission capabilities at the Office of the Director of National Intelligence, and Neil Serebryany, CEO at security vendor CalypsoAI.
Some early pokes at AI:
- Feb. 2022: A New Jersey man (with a curly wig!) exploited facial biometric recognition systems, which used machine-learning techniques to authenticate identities, to initiate fraudulent unemployment insurance claims.
- March 2021: Tax scammers in China were caught hacking a government-run facial recognition system to produce fake tax invoices, according to the South China Morning Post.
- Early 2023: Language models like ChatGPT have lowered the barrier to entry, assisting malware makers and email compromisers alike. “The quality and the number of spearphishing attacks has just gone up wildly,” said Serebryany during the panel.
Attacks on machine-learning systems generally fall into three categories: “data poisoning” (compromising the model’s training data), “evasion” (getting around the model’s constraints), and denial of service (overwhelming the system).
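A toy sketch (entirely hypothetical, not from the panel) shows how little tampering data poisoning can take: flipping a couple of training labels shifts a simple nearest-centroid classifier enough that a borderline malicious sample slips through.

```python
# Hypothetical illustration of data poisoning on a toy nearest-centroid
# classifier; none of these names or numbers come from the panel.

def centroid(values):
    """Mean of a list of 1-D feature values."""
    return sum(values) / len(values)

def train(samples):
    """samples: list of (feature, label) pairs -> per-class centroids."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {label: centroid(xs) for label, xs in by_label.items()}

def predict(model, x):
    """Classify x by whichever class centroid is nearest."""
    return min(model, key=lambda label: abs(model[label] - x))

# Clean training data: benign traffic clusters near 1.0, malicious near 5.0.
clean = [(0.9, "benign"), (1.1, "benign"), (1.0, "benign"),
         (4.9, "malicious"), (5.1, "malicious"), (5.0, "malicious")]
print(predict(train(clean), 3.5))  # "malicious": 3.5 is closer to 5.0

# Poisoned: the attacker relabels two malicious samples as benign, dragging
# the benign centroid from 1.0 up to 2.62.
poisoned = clean[:4] + [(5.1, "benign"), (5.0, "benign")]
print(predict(train(poisoned), 3.5))  # now "benign": the same input evades
```

Two flipped labels out of six samples change the verdict on the same input, which is exactly the kind of low-sophistication tampering the panelists warned about.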
Read more here.—BH
Do you work in IT or have information about your IT department you want to share? Email [email protected].
TOGETHER WITH ROCKET SOFTWARE
Modernizing your mission-critical systems comes with constant pressure to cut costs and improve the customer experience, all while maintaining data integrity and security. You need to consider everything from hybrid cloud and automation to DevOps and performance.
But no matter where you are in your IT modernization journey, Rocket Software has the expertise and solutions to move your business forward—without growing pains. And their 97% customer satisfaction rating means you can do it all with confidence.
They take the systems that are currently working for you and optimize them for added data mobility, process efficiency, and business security. Pain-free growth? That’s the power of modernization without disruption.
Learn more.
Petesphotography/Getty Images
Strife in the cloud? Time for better security.
On April 18, Palo Alto Networks’ threat intelligence research arm, Unit 42 (not to be confused with the hacking organization APT43), released its seventh Cloud Threat Report.
In the study, researchers note that threat actors are moving quickly to take advantage of outdated cloud security tactics—and IT teams aren’t moving fast enough to deal with the fallout.
By the numbers. The data in the Unit 42 report shows how far behind companies and organizations are when it comes to cloud cybersecurity—and what they’re leaving open to threat actors:
- Codebases. The report found that a slim majority—51%—of source code used in the cloud depends on over 100 open-source packages, only 23% of which are directly imported by developers. Sixty-three percent of codebases have unpatched vulnerabilities rated high or critical.
- MFA. Adoption of multi-factor authentication lags in most organizations and companies, and cloud logins are not immune. A staggering 76% of organizations don’t enforce MFA for cloud console users, and 58% don’t enforce it for the more privileged root and admin users.
- Insecure. PII, financial records, and intellectual property—among other sensitive data—are found in 66% of storage buckets and 63% of publicly exposed buckets. When cloud security issues do arise, security teams take an average of 145 hours (about six days) to resolve them; 60% of organizations report resolution taking over four days.
Can you see what I see? Bob West, a CSO with Palo Alto Networks, told IT Brew that one of the main problems he sees is that organizations lack good visibility into their cloud environments, which slows both the detection of and the response to threats.
Read more here.—EH
Do you work in IT or have information about your IT department you want to share? Email [email protected].
Francis Scialabba
Generative AI tools like OpenAI’s ChatGPT are everywhere—and so are users unknowingly leaking sensitive data while querying them for answers.
In March, according to Economist Korea, Samsung experienced several incidents at its Korea-based semiconductor business in which employees pasted proprietary code into ChatGPT, just weeks after the company rescinded a division-wide ban on the AI chatbot. Bloomberg reported that on May 1, Samsung warned staff it was banning the use of ChatGPT and similar tools.
Despite warnings not to upload sensitive internal information as ChatGPT prompts, two staffers reportedly uploaded segments of proprietary code for bug-fixing purposes, while a third uploaded a recording of a meeting via a personal assistant app. Immediately after discovering the incidents in March, Economist Korea reported, Samsung throttled all future uploads to ChatGPT to just 1,024 bytes. According to Bloomberg, an internal Samsung survey then found that 65% of respondents believed generative AI was a security risk. In the May 1 memo, the company warned that consequences could include termination of employment.
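A size cap like the reported 1,024-byte limit amounts to a simple gatekeeping check before a prompt ever leaves the network. A minimal sketch, assuming a proxy-side filter (the constant mirrors the reported cap; the function and setup are hypothetical, not Samsung’s actual implementation):

```python
# Illustrative proxy-side prompt filter; the 1,024-byte constant mirrors the
# cap Economist Korea reported, but everything else here is hypothetical.

MAX_PROMPT_BYTES = 1024

def allow_prompt(prompt: str) -> bool:
    """Permit a prompt only if its UTF-8 encoding fits under the cap."""
    return len(prompt.encode("utf-8")) <= MAX_PROMPT_BYTES

print(allow_prompt("Why does this build fail?"))  # True: well under 1 KB
print(allow_prompt("x" * 2000))                   # False: 2,000 bytes
```

A byte cap limits how much can leak per request, though it wouldn’t stop a determined user from splitting sensitive data across many small prompts.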
“Because it’s such a powerful productivity tool, we’re seeing all kinds of activity that can be deemed very risky and dangerous, like CEOs uploading sensitive emails to their board of directors to get it rewritten,” Howard Ting, CEO of data protection firm Cyberhaven, told IT Brew.
It’s not clear whether the Samsung employees in question were using the paid API version of ChatGPT—which OpenAI says does not contribute submitted data to the AI’s training set—or the free version, which OpenAI says in its terms of service is used to further train ChatGPT. An internal Samsung memo obtained by Economist Korea didn’t draw a distinction, noting that violations of policy occurred as soon as proprietary data left Samsung’s control.
Keep reading here.—TM
Do you work in IT or have information about your IT department you want to share? Email [email protected]. Want to go encrypted? Ask Tom for his Signal.
Plan for change. As the world finds its new normal, IT departments are doing the same. Let Microsoft’s Windows 365 E-Book: The Only Constant is Change guide help your workplace empower and enable your teams with cloud-based tech designed specifically for flexible work. Get your copy.
Francis Scialabba
Today’s top IT reads.
Stat: 94%. That’s the share of polled IT leaders who say their cloud costs are rising as the technology becomes ever more important to tech teams and companies. (ITPro Today)
Quote: “Not only can I generate this stuff, I can carpet-bomb the internet with it.”—Hany Farid, a digital-forensics expert at UC Berkeley, on AI deep fakes’ potential for disruption (the Wall Street Journal)
Read: The WGA strike, streaming, AI, and how the creative industry is responding to technological change. (Wired)
How they did it: Secure data is the backbone of a successful business. Attend Rubrik Forward Virtual to learn cybersecurity tips from leading brands—and catch closing remarks from Ryan Reynolds. See you May 17.*
*This is sponsored advertising content.
- Elon Musk’s pay-for-verification plan has backfired, making the blue check a polarizing symbol rather than a sign of authentication, per the WSJ.
- Bluesky, meanwhile, has a lot of buzz—it’s one of several Twitter competitors looking to capitalize on Musk’s mismanagement.
- The NSA’s research chief endorsed the use of private AI by US intelligence agencies.
- US authorities took down a Z-Library hub, but the site’s users are still active in an ongoing game of intellectual property Whac-a-Mole.
Check out the IT Brew stories you may have missed.