Infostealers nab 300,000 ChatGPT credentials: IBM
Why chatbot creds might be a new kind of business email compromise.
Billy Hurley has been a reporter with IT Brew since 2022. He writes stories about cybersecurity threats, AI developments, and IT strategies.
An adversary who steals your ChatGPT conversations might see a lot more than your recent query asking “how do you dance at a party”—they might also figure out your organization’s intellectual property and strategy.
IBM, in its annual X-Force Threat Intelligence Index, reported that infostealers snatched 300,000 ChatGPT credentials last year. The finding suggests that AI is providing a new spin on a common cybersecurity threat, business email compromise, as adversaries target chatbots rich with potential IP, search histories, and strategy docs.
“Instead of a business email compromise, you have a threat actor hiding in the AI prompting. So, they can really just sit, and watch, and discover what you’re trying to develop,” Ryan Anschutz, North America leader for IBM’s X-Force incident response team, told IT Brew.
Chat’s crazy. The integration of AI chatbots into business operations has “created a new attack vector for cybercriminals utilizing infostealer malware,” according to IBM’s Feb. 25 report. (ChatGPT has reportedly reached 800 million weekly users, and other AI chatbots are also seeing widespread adoption.)
Here are other notable conclusions from the X-Force Threat Intelligence Index:
- The theft of AI chatbot credentials could lead to infiltration of other systems via token-based access.
- The researchers said they reviewed ChatGPT (OpenAI), Microsoft Copilot, Google Gemini, Perplexity, and Claude AI (Anthropic) for credential theft on the dark web, but many chatbots support sign-in through third-party identity providers (like Apple, Google, and Microsoft) and therefore “do not have platform-specific credentials stored and cannot be identified in credential data.”
- IBM’s researchers noted that a threat actor posted examples of stolen credentials on forum sites in February 2025. The attacker claimed to have stolen more than 20 million accounts.
- In a follow-up email, Anschutz shared that the volumes of exposed ChatGPT credentials “are consistent with the prior year.”
Infostealers—which often end up on machines through deception and phishing-led downloads—pull as much browser data as possible, including logins, credit cards, chatbot passwords, and more. While many operators preconfigure their malware to hunt for popular apps, Anschutz said there’s no indication that AI desktop applications are being singled out—at least for now.
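To make the browser-harvesting angle concrete from the defender’s side: a minimal audit sketch, assuming a Chrome-style saved-login store (a SQLite file with a `logins` table containing `origin_url` and `username_value` columns), that flags which AI chatbot sites have credentials saved in the browser. The domain list, function name, and file path handling here are illustrative assumptions, not part of IBM’s report.

```python
# Sketch: audit a Chrome-style "Login Data" SQLite store for saved logins
# on AI chatbot domains. The domain list below is an illustrative assumption.
import sqlite3

CHATBOT_DOMAINS = (
    "chatgpt.com", "chat.openai.com", "gemini.google.com",
    "claude.ai", "perplexity.ai", "copilot.microsoft.com",
)

def chatbot_logins(db_path):
    """Return (origin_url, username) rows whose origin matches a chatbot domain."""
    conn = sqlite3.connect(db_path)
    try:
        # Chrome's real store keeps password_value encrypted; we only read
        # origins and usernames, which is enough for an exposure audit.
        rows = conn.execute(
            "SELECT origin_url, username_value FROM logins"
        ).fetchall()
    finally:
        conn.close()
    return [(url, user) for url, user in rows
            if any(d in url for d in CHATBOT_DOMAINS)]
```

An infostealer runs the same query at scale and also grabs the encrypted password blobs; this read-only variant just tells a security team which employees have chatbot credentials sitting in the browser at all.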
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
To address the chatbot-credential exposure, IBM’s researchers recommended companies examine their AI chatbot policies and prioritize credential protection, like multi-factor authentication and passkeys.
Anschutz also advised IT pros to consider having controlled, internally sanctioned AI that could be placed behind security controls like a VPN.
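One way to picture an internally sanctioned AI behind security controls: a minimal gatekeeper sketch, assuming a hypothetical bearer-token check in front of the model backend. The header name, token store, and function names are illustrative assumptions; in practice this role belongs to your identity provider, SSO, or VPN rather than a hand-rolled check.

```python
# Sketch: gate requests to an internal AI endpoint so only callers holding
# a company-issued token get through. ISSUED_TOKENS is a stand-in for a
# real identity provider / SSO session store.
import hmac

ISSUED_TOKENS = {"alice": "tok-a1", "bob": "tok-b2"}  # hypothetical store

def is_authorized(headers: dict) -> bool:
    """Allow the request only if it presents a known bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth[len("Bearer "):]
    # Constant-time comparison against each issued token to avoid
    # leaking token contents through timing differences.
    return any(hmac.compare_digest(presented, t)
               for t in ISSUED_TOKENS.values())
```

The point of the design, per Anschutz’s advice, is that a stolen chatbot password alone is useless from outside the perimeter: the attacker would also need a valid internal token and network access.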
“Chatbots and AI and other tools have become embedded across all of those business functions. And I think that the threat actors have identified that,” Anschutz said.
How much is in the account? Aside from free chatbot usage, attackers could leverage an AI platform like ChatGPT for valuable data, especially if an employee uses an unsanctioned tool in the shadows.
An adversary can potentially see everything you’ve queried, including sensitive corporate and personal information, according to Nick Hyatt, senior threat intelligence analyst at GuidePoint Security. He noted, however, that infostealers harvest at scale, and he hasn’t seen infostealers specifically targeting large language models (LLMs).
“More and more people are relying on LLMs to make their life simpler, whether that be in their personal life or their corporate life. And so the value of that credential goes up as people rely on those tools more,” Hyatt said. “I think it is a matter of time until we see specific attacks against [LLMs].”
Threat actors with valid tokens can also “impersonate legitimate users, disrupt business workflows, and inject malicious instructions directly into AI-assisted processes to manipulate outputs,” James Shank, director of threat operations at Expel, wrote to us—a concern, potentially, for teams drafting security alerts, summarizing intelligence, or making operational decisions.
“The truth is the world hasn’t yet seen all the ways these credentials will be abused by attackers, just as the world hasn’t seen the full evolution of what AI will mean in the future,” Shank shared in an email to IT Brew.