It’s not your imagination: phishing attempts are getting better. While the tactics bad actors employ haven’t changed dramatically, their methods and scale surely have.
Abu Qureshi, the lead threat intelligence and mitigation researcher at BforeAI (the team that discovered that the Department of Education was subject to a phishing attempt last month), said attackers are able to use AI to personalize the social engineering that goes into phishing attempts and also quickly deploy phishing kits.
“The attackers are more adaptive, meaning that us, as defenders, we’re always trying to take the site down or deter them from creating that infrastructure or targeting that customer,” Qureshi said. “But now, with using AI, they’ll have fall-back infrastructure ready to go because they’re using hosting platforms which natively support multiple deployments at once.”
Naama Ilany-Tzur, an assistant professor at Carnegie Mellon University, found in a study that individuals respond differently to phishing attempts based on which device they’re using. She said that AI is negatively impacting the threat landscape by making it harder for individuals to know what is real and what is fake.
In January, All About Cookies released survey findings showing that 77% of US adults had been deceived by AI-generated content online.
Sure, I’ll bite. For a phishing attempt to succeed, a user must cooperate with the perpetrator in some way. And even though people are aware of phishing attempts, that awareness by itself isn’t enough to effectively defend against them.
In her study, Ilany-Tzur set out to understand how the evolution of devices over the years has also changed users’ decision-making. As time goes on, for instance, people have become more aware of the risks involved in using phones, computers, tablets, and other electronics.
Nonetheless, statistics show that cybercrime is on the rise.
“You would think if it’s just a matter of awareness, so the increased awareness would reduce the percentage of crimes or victims, but that’s not the case,” Ilany-Tzur said. “So, that’s the other motivation: to explain how, although awareness is rising and we progress with technology, the percentage of crimes and victims is also increasing.”
Qureshi said it is easy for humans to fall victim to the social engineering element of phishing, even if we’ve seen a certain kind of attempt multiple times and avoided it.
“On that day, it didn’t register in your mind at the time, and you clicked the link, or you downloaded the attachment, or you entered the credentials,” Qureshi said. “You can have as many safeguards as possible, but as humans, if we continue to have that lapse, phishing will never go away.”
The safeguards in place for many users and organizations might not be enough, but Qureshi pointed to the need for digital protections like antivirus programs, and he suggested that companies require multi-factor authentication for employees.
For corporations, he said that the predictive side of defense is “no longer a luxury.” Now that AI is in the picture, professionals are “way behind again.”
“We need to study the behavior, study the attacks, study what their motives are, but also how they are performing the attacks,” Qureshi said. “When we find those breadcrumbs and those repetitive behaviors, we can start to make predictions on which infrastructure is going to be used next…We can disrupt that infrastructure prematurely, and we keep making the victim pool smaller.”
Fool me once. Some AI tools let defenders go on the offensive against phishing attempts, but experts agree the tooling isn’t always where it needs to be.
After Ilany-Tzur published a paper, she started receiving personalized emails asking her to speak at events. Some of those emails were phishing attempts, and she said it was difficult to separate the fake ones from the real ones.
“I’m quite concerned about AI because I know that the negative impact is already there, but the positive impact…I’m not aware of a system using AI that is doing a good job and helping people to sift between the legit and the attack,” Ilany-Tzur said.
One anti-phishing AI system in use at Bluefin Payment Systems, according to the company’s CISO Brent Johnson, can test the links that phishers embed in legitimate platforms like Docusign to hide malicious URLs.
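Bluefin’s actual system isn’t public, but one basic check such a scanner might perform is comparing a link’s visible text against its true destination, since phishing emails often display a trusted domain while the underlying href points elsewhere. A naive, purely illustrative sketch of that single heuristic:

```python
# Illustrative only: flag anchor tags whose visible text names one domain
# while the underlying href points at a different one. Real anti-phishing
# systems layer many more signals (reputation, sandboxed link-following, etc.).
import re
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags in an email body."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []  # list of (href, visible_text)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def suspicious_links(html: str) -> list[str]:
    """Return hrefs whose anchor text claims a domain the href doesn't match."""
    auditor = LinkAuditor()
    auditor.feed(html)
    flagged = []
    for href, text in auditor.links:
        href_host = urlparse(href).hostname or ""
        # Does the visible link text itself look like a domain?
        m = re.search(r"([a-z0-9-]+(?:\.[a-z0-9-]+)+)", text.lower())
        if m and not href_host.endswith(m.group(1)):
            flagged.append(href)
    return flagged
```

A mismatch between what a link says and where it goes is one of the oldest phishing tells; the point of AI-driven tools is to catch the cases, like redirects through legitimate document platforms, where this simple string comparison isn’t enough.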
“The AI tools on the defensive side are getting better and better for companies that have purchased those,” Johnson said. “I’m sure your general ones from Google or Microsoft will get it there eventually, but they’re not there yet.”