Life is full of trade-offs. A late-night Uber may get you home faster, but for a higher price than walking. A pair of cute heels may win you compliments, but can hurt your feet…and AI coding may speed up development, but can open your software to large security risks.
Research from agentic application security platform Apiiro suggests AI coding assistants might speed up developers’ workflow at a hefty security-related cost.
The findings. Apiiro researchers examined code from thousands of repositories maintained by developers at Fortune 50 companies. They concluded that AI-assisted developers produced 3-4 times more commits than their unassisted peers, but opened fewer pull requests, a crucial part of the coding process that lets other team members review code.
“If the pull request is very large [and] it contains a lot of code, it’s really hard to do a proper review because it overloads the security review that you need to do,” Apiiro Product Manager Itay Nussbaum told IT Brew.
Apiiro researchers also claim that while AI-written code had significantly fewer syntax errors and logic bugs, it often opened the door to much larger issues. Privilege escalation paths increased 322% in AI-assisted code compared to code written without AI assistance. Meanwhile, architectural design flaws jumped 153%.
“In other words, AI is fixing the typos but creating the timebombs,” the report said. “That makes reviews and automated scans less effective, and raises the stakes for context-aware analysis at design and code time.”
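To picture what that kind of flaw looks like in practice, here’s a hypothetical Python web endpoint (invented for illustration, not drawn from Apiiro’s data): the AI-style version runs cleanly and has no syntax errors, but quietly lets any caller promote any account to admin, the sort of privilege escalation path the report flags.

```python
# Hypothetical illustration of a privilege escalation path. The routes, helper,
# and in-memory user store are invented for this demo; nothing below comes from
# Apiiro's report.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Stand-in for a real user database.
USERS = {1: {"role": "admin"}, 2: {"role": "member"}}


def caller_is_admin(req) -> bool:
    """Hypothetical auth helper: trusts an X-User-Id header purely for the demo."""
    caller = USERS.get(int(req.headers.get("X-User-Id", 0)), {})
    return caller.get("role") == "admin"


# The AI-assisted-style version: syntactically clean and logically "working,"
# but it never checks who is calling, so any user can promote any account.
@app.route("/users/<int:user_id>/role", methods=["POST"])
def set_role(user_id: int):
    USERS[user_id]["role"] = request.get_json()["role"]
    return jsonify(USERS[user_id])


# What a design-aware review would insist on: verify the caller's privileges first.
@app.route("/v2/users/<int:user_id>/role", methods=["POST"])
def set_role_checked(user_id: int):
    if not caller_is_admin(request):
        abort(403)
    USERS[user_id]["role"] = request.get_json()["role"]
    return jsonify(USERS[user_id])
```

The difference is a single authorization check, exactly the kind of context-aware, design-level issue a syntax check or basic lint pass won’t catch.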
As of June, Apiiro said AI code was producing 10,000 new security findings per month, a tenfold increase since December 2024.
Tale as old as time! Apiiro’s research echoes several concerns around AI-assisted coding as it continues to make waves across the industry. IT Brew has previously reported on the dangers of slopsquatting in vibe coding and the risk of package hallucinations when using LLMs to generate code.
Quentin Rhoads-Herrera, VP of security services at Stratascale, told IT Brew that he leverages AI-assisted coding in his personal life and has seen firsthand the risks that come with it.
“I’ve seen pretty much everything I’ve coded as a hobby has vulnerabilities in it because it’s just giving me working code to my exact request, not looking at what the best practices are,” Rhoads-Herrera said.
Guardrails. Nussbaum said AI-assisted developers should make sure they have protective mechanisms in place, such as automated security reviews, to catch vulnerabilities.
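What might that automation look like? One minimal sketch, assuming a Python codebase and the open-source Bandit scanner (the article doesn’t name a specific tool), is a CI gate that scans every change and blocks the merge when high-severity findings turn up:

```python
# Minimal sketch of an automated security-review gate for Python code.
# Assumes the open-source Bandit scanner is installed (pip install bandit);
# a real pipeline would plug in whatever SAST tooling the team already uses.
import json
import subprocess
import sys


def run_security_gate(path: str = ".") -> int:
    """Run Bandit over the given path and fail on any high-severity findings."""
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json", "-q"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout or "{}")
    high = [
        issue for issue in report.get("results", [])
        if issue.get("issue_severity") == "HIGH"
    ]
    for issue in high:
        print(f"{issue['filename']}:{issue['line_number']}: {issue['issue_text']}")
    # A nonzero exit code is what blocks the merge in CI.
    return 1 if high else 0


if __name__ == "__main__":
    sys.exit(run_security_gate(sys.argv[1] if len(sys.argv) > 1 else "."))
```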
Rhoads-Herrera also advised AI-assisted teams to continue to keep the human in the loop: “The human review of all the code is still very critical.”