While AI is becoming better at generating functional code, it is also enabling attackers to identify and exploit vulnerabilities in that code more quickly and effectively. This lowers the barrier for less-skilled attackers and increases the speed and sophistication of attacks, creating a situation in which code vulnerabilities are increasing even as they become easier to exploit, according to new research from application risk management software provider Veracode.

AI-generated code introduced security vulnerabilities in 45% of 80 curated coding tasks across more than 100 LLMs, according to the 2025 GenAI Code Security Report. The research also found that GenAI models chose an insecure method to write code over a secure one 45% of the time. So, even though AI can create code that is functional and syntactically correct, the report reveals that security performance has not kept pace.

“The rise of vibe coding, where developers rely on AI to generate code, typically without explicitly defining security requirements, represents a fundamental shift in how software is built,” Jens Wessling, chief technology officer at Veracode, said in a statement announcing the report.
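By way of illustration (this sketch is not drawn from the report itself), the choice between an “insecure method” and a “secure method” often comes down to patterns like the one below, where code that looks up a user can either concatenate input directly into a SQL statement or pass it as a bound parameter:

```python
# Illustrative sketch only: a common "insecure vs. secure method" choice of the
# kind the report describes (SQL injection, CWE-89). Not a Veracode benchmark task.
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Insecure: user input is concatenated into the SQL text, so an input
    # like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Secure: a parameterized query keeps user input as data, not as SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)", [("alice",), ("bob",)])

    malicious = "alice' OR '1'='1"
    print("insecure:", find_user_insecure(conn, malicious))  # returns every row
    print("secure:  ", find_user_secure(conn, malicious))    # returns no rows
```

Both versions are functional and syntactically correct, which is the report’s point: functional correctness alone says nothing about whether the generated code is safe.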
Full report: AI-Generated Code Poses Major Security Risks in Nearly Half of All Development Tasks, Veracode Research Reveals.
For more, see the OODA Company Profile on Veracode.