Threat actors tailor prompt-injection tricks not for humans, but for AI
Phishers want to overwhelm AI-based systems with lots of boring text.
A long-winded email isn’t just a cure for insomnia...it’s becoming a cybersecurity threat, with attackers using excessive dummy text to lull AI defenses to sleep.
The tactic, spotted recently by email security company Sublime Security, is less a flashy, model-breaking command like “ignore all previous instructions” and more a quiet “nothing to see here, drowsy LLM.”
“They’re sort of poking the bear a little bit, and seeing what they can get away with,” Luke Wescott, Sublime’s threat detection engineer, told IT Brew.
Hidden message. In a May 5 post, Wescott and machine learning researcher Anna Bertiger broke down how the tactic works.
Within the phishing email’s HTML, threat actors hide a heap of harmless-looking text: an Adidas newsletter in one example, a whole romance novel in another. The phishers bury the text using a font too tiny for humans to see, but AI tools can still read it.
What do sportswear and a love story have in common? Threat actors hope the “hidden” text is innocuous enough to fool an AI-based system into judging the malicious email harmless and letting it through the spam filter.
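The hiding trick itself is simple to sketch. The snippet below is a minimal, illustrative detector, not Sublime’s actual method: it parses an email’s HTML with Python’s standard library and collects any text whose inline style would make it effectively invisible to a human reader (zero or near-zero font size, `display:none`, or `visibility:hidden`). The 2px threshold and the style patterns it checks are assumptions for demonstration.

```python
import re
from html.parser import HTMLParser

TINY_PX = 2.0  # illustrative threshold: text smaller than this is "invisible"

def style_hides_text(style: str) -> bool:
    """Return True if an inline style would hide text from a human reader."""
    compact = style.lower().replace(" ", "")
    if "display:none" in compact or "visibility:hidden" in compact:
        return True
    m = re.search(r"font-size:([\d.]+)(px|pt|em)?", compact)
    if m:
        size = float(m.group(1))
        if (m.group(2) or "px") == "em":
            size *= 16  # rough px conversion
        return size < TINY_PX
    return False

class HiddenTextScanner(HTMLParser):
    """Collects text nested inside elements styled to be invisible."""
    def __init__(self):
        super().__init__()
        self.stack = []        # (tag, hides_text) for each open element
        self.hidden_text = []  # text a human reader would never see

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        self.stack.append((tag, style_hides_text(style)))

    def handle_endtag(self, tag):
        # pop back to the matching open tag (lenient, like browsers)
        while self.stack:
            t, _ = self.stack.pop()
            if t == tag:
                break

    def handle_data(self, data):
        if data.strip() and any(hidden for _, hidden in self.stack):
            self.hidden_text.append(data.strip())

def find_hidden_text(html: str) -> list[str]:
    scanner = HiddenTextScanner()
    scanner.feed(html)
    return scanner.hidden_text
```

Run against a toy phishing body, it surfaces only the buried padding: `find_hidden_text('<p>Act now!</p><div style="font-size:0.1px">Adidas newsletter…</div>')` returns the newsletter text while ignoring the visible lure. A production filter would also need to handle CSS classes, `<style>` blocks, and color-on-color tricks.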
And the tactic could plausibly work, the report said: “If the hidden content is rich enough (and other malicious signals weak enough), it could sway a final verdict of an AI from suspicious to benign.” In the case the researchers examined, Sublime still flagged the message with the stealth text as malicious based on additional signals like brand impersonation, urgent language, suspicious links, and a first-time sender.
“We did not see it work, but we can see a situation where it would work,” Wescott told us.
Wescott said he first saw the tactic about three months ago, and observed “a few dozen” unique cases in the week of our interview alone.
Be direct. Researchers have demonstrated showier methods of prompt injection, like “jailbreaking” a model to go against its programming. IT Brew reported in July 2025 on a command that used zero-size font to abuse the “summarize this email” feature and send a false security alert. Check Point Research, in a June 2025 post, revealed threat actors creating malware instructing AI-based security tools to respond “NO MALWARE DETECTED.”
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
By subscribing, you accept our Terms & Privacy Policy.
What to do. Etay Maor, VP of threat intelligence at Cato Networks, an enterprise networking and security platform company, said IT pros need to understand how AI-powered systems reason, as attackers will increasingly try to mess with that logic. An AI tool can’t be a black box anymore, he told us.
“It’s important that we get deeper into: How does my model work? What is it trained on?” Maor said. “When new input comes in, how can it affect, potentially, the knowledge base or the guardrails, or the decision-making, the reasoning of the model?”
Wescott advised IT pros to treat LLM-based detection as one security layer, not the security layer.
Additional checks should include end-user reporting, risk scoring that surfaces highly suspicious emails for human review, and security event monitoring or automated notification of suspicious behavior, like an anomalous link or attachment detonation, according to Tom Gould, director of cybersecurity and IT resilience at consulting firm West Monroe.
“AI shouldn’t be the only thing deciding whether something is safe,” Gould said.
A 2026 report from cybersecurity company Mimecast found that 55% of surveyed organizations around the world currently “use AI for threat detection and real-time monitoring,” up from 46% the previous year.
“As the landscape starts to shift into more LLM usage,” Wescott said, “the game is going to sort of change from ‘fool you and me’ to ‘fool the robots.’”
About the author
Billy Hurley
Billy Hurley has been a reporter with IT Brew since 2022. He writes stories about cybersecurity threats, AI developments, and IT strategies.