{"689636":{"#nid":"689636","#data":{"type":"news","title":"Bad Vibes: AI-Generated Code is Vulnerable, Researchers Warn","body":[{"value":"\u003Cp\u003EVibe coding programmers are releasing batches of vulnerable code, according to researchers at the School of Cybersecurity and Privacy (SCP) at Georgia Tech, who have scanned over 43,000 security advisories across the web.\u003C\/p\u003E\u003Cp\u003EThe programming style relies on generative artificial intelligence (AI) to create software code using tools like Claude, Gemini, and GitHub Copilot. According to graduate research assistant \u003Cstrong\u003EHanqing Zhao\u003C\/strong\u003E of the \u003Ca href=\u0022https:\/\/gts3.org\/\u0022\u003ESystems Software \u0026amp; Security Lab\u003C\/a\u003E (SSLab), no one had been tracking these common vulnerabilities and exposures before the launch of their \u003Ca href=\u0022https:\/\/vibe-radar-ten.vercel.app\/\u0022\u003EVibe Security Radar\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003E\u201cThe vulnerabilities we found lead to breaches,\u201d he said. \u201cEveryone is using these tools now. We need a feedback loop to identify which tools, which patterns, and which workflows create the most risk.\u201d\u003C\/p\u003E\u003Cp\u003EThe radar extensively scans public vulnerability databases, finds the code error behind each vulnerability, and then examines the code\u2019s history to determine who introduced the bug. If it discovers an AI tool\u0027s signature, the radar flags the case.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOf the 74 confirmed cases uncovered so far by the tool, 14 are critical risks and 25 are high. These vulnerabilities include command injection, authentication bypass, and server-side request forgery. 
Zhao explained that since AI models tend to repeat the same mistakes, an attacker would need to find these bugs just once.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cMillions of developers using the same models means the same bugs showing up across different projects,\u201d he said. \u201cFind one pattern in one AI codebase, you can scan for it across thousands of repositories.\u201d\u003C\/p\u003E\u003Cp\u003EDespite its success, the team has only scratched the surface of the problem. The radar can trace metadata like co-author tags, bot emails, and other known tool signatures, but it can\u0027t identify an issue if these markers have been removed.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe next step is behavioral detection. AI-written code has patterns in how it names variables, structures functions, and handles errors.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWe\u0027re building models that can identify AI code from the code itself, no metadata needed,\u201d said Zhao. \u201cThat opens up a lot of cases we currently can\u0027t touch.\u201d\u003C\/p\u003E\u003Cp\u003EThe team is also improving its verification pipeline and expanding its sources to include more vulnerability databases. The goal is to get a more complete picture of AI-introduced vulnerabilities across open source, not just the ones that happen to leave signatures behind.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAs more programmers rely on vibe coding, Zhao warns that the resulting code still needs to be reviewed as thoroughly as any other project.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThe whole point of vibe coding is not reading it afterward, I know,\u201d he said. \u201cBut if you\u0027re shipping AI output to production, review it the way you\u0027d review a junior developer\u0027s pull request. Especially anything around input handling and authentication.\u201d\u003C\/p\u003E\u003Cp\u003EWhen prompting AI, SSLab also recommends providing more detailed instructions to bring the output closer to production-ready. 
There are also tools to check the code for vulnerabilities after it has been generated. Not double-checking could lead to a catastrophe.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThe attack surface keeps growing,\u201d said Zhao. \u201cMore people running AI agents locally means the attacker doesn\u0027t need to break into the company infrastructure. They just need one vulnerability in a model context protocol server that someone installed and never reviewed.\u201d\u003C\/p\u003E\u003Cp\u003EOne reason the attack surface is expanding rapidly is AI\u2019s evolution. Over seven months in 2025, the Vibe Security Radar found about 18 cases. Then, in the first three months of 2026, it identified 56. March 2026 alone had 35, more than all of 2025 combined.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EMany tools, like Claude, are now more autonomous, capable of writing entire features, creating files, and even making architecture decisions.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWhen an agent builds something without authentication, that\u0027s not a typo,\u201d said Zhao. \u201cIt\u0027s a design flaw baked in from the start. 
Claude Code and Copilot together account for most of what we detect, but that\u0027s partly because they leave the clearest signatures.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EResearchers at the Georgia Tech School of Cybersecurity and Privacy are uncovering a growing risk in modern software development: vulnerabilities introduced by AI-generated code.\u003C\/p\u003E\u003Cp\u003EUsing the Vibe Security Radar, the team analyzed more than 43,000 security advisories and identified dozens of confirmed vulnerabilities tied to tools like GitHub Copilot, Claude, and Gemini\u2014including critical flaws such as authentication bypass and command injection.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Researchers at the Georgia Tech School of Cybersecurity and Privacy are uncovering a growing risk in modern software development: vulnerabilities introduced by AI-generated code."}],"uid":"36253","created_gmt":"2026-04-13 14:32:02","changed_gmt":"2026-04-13 14:44:00","author":"John Popham","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-04-13T00:00:00-04:00","iso_date":"2026-04-13T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679920":{"id":"679920","type":"image","title":"Vibe-Coding.jpg","body":null,"created":"1776090752","gmt_created":"2026-04-13 14:32:32","changed":"1776090752","gmt_changed":"2026-04-13 14:32:32","alt":"A man typing on a computer. 
There is a screen hovering over his hands that says \u0022Vibe Coding\u0022","file":{"fid":"264142","name":"Vibe-Coding.jpg","image_path":"\/sites\/default\/files\/2026\/04\/13\/Vibe-Coding.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/04\/13\/Vibe-Coding.jpg","mime":"image\/jpeg","size":1783427,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/04\/13\/Vibe-Coding.jpg?itok=jhk18PZE"}}},"media_ids":["679920"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"660367","name":"School of Cybersecurity and Privacy"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"2835","name":"ai"},{"id":"192863","name":"go-ai"},{"id":"187915","name":"go-researchnews"},{"id":"186861","name":"go-cyber"},{"id":"194393","name":"AI and Cybersecurity"},{"id":"1404","name":"Cybersecurity"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"145171","name":"Cybersecurity"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJohn Popham\u003C\/p\u003E\u003Cp\u003ECommunications Officer II at the School of Cybersecurity and Privacy\u003C\/p\u003E","format":"limited_html"}],"email":["jpopham3@gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}