Cyberattackers taking inventory of exposed LLMs

And how (and why) you might want to stay off this list.

Billy Hurley has been a reporter with IT Brew since 2022. He writes stories about cybersecurity threats, AI developments, and IT strategies.

For an easy question—how many states are there in the United States?—mysterious groups on the internet seem intent on asking it 27,000 times.

Possible attackers are pairing scan-the-internet scripts with innocuous prompts like the aforementioned USA query to find large language models (LLMs) that are accessible without authentication, according to recent reports from cybersecurity training group SANS Institute and cybersecurity company GreyNoise.

Researchers said these LLM-ventories could support a range of adversarial tasks, including compute theft and extraction of restricted information.

After investigating their honeypot infrastructure, GreyNoise researchers detected:

  • Two IPs launching “a methodical probe of 73+ LLM model endpoints,” the access points that use APIs to interact with the system.
  • More than 80,000 sessions generated in 11 days, beginning on December 28, 2025. According to the post, the scanners aimed to find “misconfigured proxy servers” leaking access.

SANS volunteer incident handler Didier Stevens recently reported seeing many calls to LLMs with the same “how many states” query.

“This is recon to find open LLMs. Not necessarily to exploit them, but to use them,” Stevens wrote.

Bob Rudis, VP of data science and security research at GreyNoise, said early LLMs especially had problems with the question. The large-scale query discovered by GreyNoise can not only determine the presence of an LLM, but also differentiate one model from another based on how it handles the question.
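
That recon is simple enough to reproduce as a self-check against your own infrastructure. Below is a minimal sketch in Python, assuming an Ollama-style /api/generate endpoint; the host, port, and model name are placeholders, not details taken from the GreyNoise or SANS reports. It sends the benchmark question with no credentials: an answer means the model is exposed, and the wording of the answer gives a rough fingerprint of which model replied.

    import requests

    PROMPT = "How many states are there in the United States?"
    HOST = "http://llm.example.internal:11434"  # placeholder: the endpoint you want to test
    MODEL = "llama3"                            # placeholder: a model you expect to be loaded

    # Send the benchmark prompt with no credentials at all.
    resp = requests.post(
        f"{HOST}/api/generate",
        json={"model": MODEL, "prompt": PROMPT, "stream": False},
        timeout=30,
    )

    if resp.ok:
        answer = resp.json().get("response", "")
        # An unauthenticated answer means anyone who finds the endpoint can use it;
        # the phrasing of the answer roughly fingerprints the model behind it.
        print(f"EXPOSED: answered without auth -> {answer[:80]!r}")
    else:
        print(f"Rejected or failed (HTTP {resp.status_code}), which is the outcome you want.")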

Why the list? Rudis sees an index of available LLMs as a valuable tool for brokers looking to sell access.

“It felt to me, as I was going through the data, that this was an attacker that was just trying to get a really good inventory of what was out there, so that they could sell that to somebody else who really wanted to do the actual targeted abuse of those systems,” Rudis told IT Brew.

Another danger, he added, is that an adversary using an exposed LLM could drain a company’s IT resources, such as GPUs.

In addition to gaining free use of a large language model, Johannes Ullrich, dean of research at SANS Technology Institute, thinks attackers could leverage this technique to retrieve sensitive information from company databases. “If they know the company they're attacking, they could ask some specific questions about the company. Give me the drawings for this particular airplane,” he said, as a theoretical example.

Another risk Ullrich mentioned: attackers finding vulnerabilities in an LLM, or in its supporting web infrastructure, that grant greater access to a server, much like the threats posed by insecure, internet-connected smart devices (think casino hacks that start with a fish tank thermometer).

Open LLMs have been a longtime security concern. A Dataiku and Databricks market research report from ye olden days of September 2024 found that 56% of 800 surveyed “global data leaders” were experimenting with self-hosted, open-source LLMs. GreyNoise’s research suggests attackers see open LLMs as a target-rich environment.

Configuration recommendations. A private LLM installed on an organization’s server offers advantages like dedicated resources—as long as it’s configured correctly.

Ullrich recommends companies:

  • Limit access to an internal network or VPN. If external access is required from someone LLM’ing from home, implement authentication (a minimal sketch of such a gate follows this list).
  • Deploy HTTPS so traffic to the model is encrypted.
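
To illustrate the authentication point, here is a minimal sketch of a token-checking proxy placed in front of a locally hosted model, again assuming an Ollama-style server bound to 127.0.0.1:11434. The FastAPI route, the LLM_PROXY_TOKEN environment variable, and the upstream address are illustrative choices, not a prescribed setup, and the proxy itself still belongs behind HTTPS per the second recommendation.

    import os

    import httpx
    from fastapi import FastAPI, HTTPException, Request

    app = FastAPI()
    API_TOKEN = os.environ["LLM_PROXY_TOKEN"]  # set this before starting the proxy
    UPSTREAM = "http://127.0.0.1:11434"        # the model server listens on localhost only

    @app.post("/api/{path:path}")
    async def proxy(path: str, request: Request):
        # Reject any request that does not carry the expected bearer token.
        if request.headers.get("authorization") != f"Bearer {API_TOKEN}":
            raise HTTPException(status_code=401, detail="authentication required")

        # Forward the authenticated request body to the local model server.
        body = await request.body()
        async with httpx.AsyncClient(timeout=120) as client:
            upstream = await client.post(
                f"{UPSTREAM}/api/{path}",
                content=body,
                headers={"content-type": "application/json"},
            )
        return upstream.json()

Run it with an ASGI server such as uvicorn, behind a TLS-terminating reverse proxy, so that both recommendations are covered.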

Ullrich considers the LLM exposures a result of “haphazard deployment.”

“People are so caught up in the AI madness and the AI craze, and they feel left behind if they’re not doing something with AI,” Rudis said, driving them to leave an LLM open so that employees can easily access it. “They probably don’t know that attackers can find it and use it.”
