With shared AI chats, malware masquerades as help
A Huntress blog entry revealed how attackers can hide malicious instructions in ChatGPT conversations.
Billy Hurley has been a reporter with IT Brew since 2022. He writes stories about cybersecurity threats, AI developments, and IT strategies.
According to a recent report from cybersecurity company Huntress, threat actors drove fraudulent and malicious ChatGPT- and Grok-based troubleshooting conversations to appear prominently in search results.
The entries seem like legitimate help for a task, such as how to “clear disk space on macOS.” Instead of containing helpful troubleshooting advice, however, the manipulated entries offer copy-and-paste steps for installing infostealers.
The chats appear near the top of Google results and avoid traditional malware downloads in favor of four everyday, often harmless actions: search, click, copy, paste. And IT pros should be concerned, according to Jonathan Semon, principal SOC analyst and co-writer of the December 9 report summary on the Huntress site, given people’s willingness to trust chatbots’ answers.
“It’s stealthy, it’s quiet, it’s quick, it’s cheap, it’s scalable, and it’s most importantly, in my opinion, psychologically effective,” Semon told IT Brew. “All it takes is one admin to have a password leaked or to have a backdoor created on their machine, and that’s how ransomware gets in.”
How it works. Anyone trying to figure out tech—including your typical IT pro!—has to search stuff now and then, and the adversaries crafting and sharing seemingly legitimate AI chats are banking on that habit. According to Semon, a Huntress customer googling for technical help found a sponsored search result leading to a manipulated ChatGPT conversation that was nonetheless hosted on the legitimate ChatGPT.com.
This conversation, the company’s SOC analysts discovered, was crafted by a hacker who wanted to steal data and damage machines. The entry’s step-by-step “troubleshooting plan” told the user to paste a malicious command—one that downloads and runs a data stealer—in their terminal.
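Commands like this typically take a fetch-and-pipe shape (something on the order of curl piped straight into a shell), so the payload never shows up as a visible download for the victim to second-guess. As a purely illustrative sketch, the Python below flags that shape in a shell-history file; the history path and regex patterns are hypothetical examples, not indicators from the Huntress report:

```python
import re
from pathlib import Path

# Hypothetical download-and-execute patterns, for illustration only;
# these are not indicators published in the Huntress report.
SUSPICIOUS = [
    r"curl\s[^|;]*\|\s*(ba|z)?sh",               # curl ... | bash / sh / zsh
    r"wget\s[^|;]*\|\s*(ba|z)?sh",               # wget ... | bash
    r"base64\s+(-d|--decode).*\|\s*(ba|z)?sh",   # decode-and-run
]

def flag_history(history_file: str = "~/.zsh_history") -> list[str]:
    """Return shell-history lines that match a fetch-and-pipe pattern."""
    path = Path(history_file).expanduser()
    if not path.exists():
        return []
    lines = path.read_text(errors="ignore").splitlines()
    return [line for line in lines
            if any(re.search(p, line) for p in SUSPICIOUS)]

if __name__ == "__main__":
    for hit in flag_history():
        print("suspicious:", hit)
```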
Semon mentioned one plausible tactic: Attackers could leverage AI platforms’ content-rendering features to retrieve an externally hosted HTML file or similar content and present it as text in a shared conversation; the instructions then appear to have been generated directly by the AI rather than planted by an outside, nefarious source.
Where have I seen this before? The tactic is a spin on SEO poisoning—a persistent ploy that relies on tools like bots or keyword stuffing to boost harmful pages to the top of search results.
The tactic is also another round of scam-yourself social engineering, which lures a target into running a malicious command on their own, often bypassing browser-security protections since the browser isn’t doing the downloading.
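That last point matters on macOS in particular: files a browser downloads are tagged with the com.apple.quarantine extended attribute, which prompts Gatekeeper to inspect them on first open, while a file fetched by curl in Terminal usually carries no such tag. A rough sketch, using macOS’s built-in /usr/bin/xattr tool, of checking whether a file carries the attribute (the script is illustrative, not from the report):

```python
import subprocess
import sys

def is_quarantined(path: str) -> bool:
    """True if the file carries macOS's com.apple.quarantine attribute.

    Browsers tag downloads with this attribute, which makes Gatekeeper
    inspect the file on first open; curl and wget in a terminal do not,
    which is part of why paste-into-terminal lures skip those checks.
    """
    result = subprocess.run(
        ["/usr/bin/xattr", path],  # lists the file's extended attribute names
        capture_output=True,
        text=True,
    )
    return "com.apple.quarantine" in result.stdout

if __name__ == "__main__":
    for p in sys.argv[1:]:
        print(p, "->", "quarantined" if is_quarantined(p) else "not quarantined")
```

Running it against a file saved by a browser versus one pulled down with curl shows the gap directly.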
The Atomic macOS Stealer (AMOS) payload found by Huntress, according to the company’s report, exfiltrates data, harvests credentials, and escalates privileges.
Earlier this year, a report from IBM showed the number of infostealer credentials available for sale on the dark web increased 12% year over year in 2024.
What to do. Semon said he shared findings and the backlinks with Google, OpenAI, and xAI. “Distributing malware is an egregious violation of our ads policies, and we’ve suspended the accounts linked to these campaigns. We continue to monitor for abuse to keep this content off our platforms,” Google spokesperson Nate Funkhouser wrote in an email to IT Brew. (OpenAI did not respond to IT Brew’s requests for comment. xAI did not answer IT Brew’s request directly, responding: “Legacy Media Lies.”)
Semon suggests vendors add a notification that alerts users when they’re viewing a shared conversation from an AI platform and advises potential victims not to run commands or download anything. Basically: Don’t trust this.
Similarly, he said, employees should be advised not to run commands from an untrusted source, and to deploy multifactor authentication and password managers that defend against infostealing malware.
Does an attack focused on IT-related questions mean attackers are purposely targeting IT professionals, especially those with valuable credentials?
“Information-stealing is the No. 1 thing that’s on the market right now. And if all it takes is abusing somebody’s trust in something like ChatGPT to get them to execute a command, and now you have all of their passwords. You have all of their cryptocurrency. You have all of their machine data,” Semon said. “I think it can target anybody really.”