{"687708":{"#nid":"687708","#data":{"type":"news","title":" Researchers Warn AI \u2018Blind Spot\u2019 Could Allow Attackers to Hijack Self-Driving Vehicles","body":[{"value":"\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EA newly discovered vulnerability could allow cybercriminals to silently hijack the artificial intelligence (AI) systems in self-driving cars, raising concerns about the security of autonomous systems increasingly used on public roads.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;Georgia Tech cybersecurity researchers discovered the vulnerability, dubbed VillainNet, and found it can remain dormant in a self-driving vehicle\u2019s AI system until triggered by specific conditions.\u003C\/p\u003E\u003Cp\u003EOnce triggered, VillainNet is almost certain to succeed, giving attackers control of the targeted vehicle.\u003C\/p\u003E\u003Cp\u003EThe research finds that attackers could program almost any action within a self-driving vehicle\u2019s AI super network to trigger VillainNet. In one possible scenario, it could be triggered when a self-driving taxi\u2019s AI responds to rainfall and changing road conditions.\u003C\/p\u003E\u003Cp\u003EOnce in control, hackers could hold the passengers hostage and threaten to crash the taxi.\u003C\/p\u003E\u003Cp\u003EThe researchers discovered this new backdoor attack threat in the AI super networks that power autonomous driving systems.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cSuper networks are designed to be the Swiss Army knife of AI, swapping out tools, or in this case sub networks, as needed for the task at hand,\u0022 said \u003Ca href=\u0022https:\/\/davidoygenblik.github.io\/\u0022\u003E\u003Cstrong\u003EDavid Oygenblik\u003C\/strong\u003E\u003C\/a\u003E, Ph.D. student at Georgia Tech and the lead researcher on the project.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0022However, we found that an adversary can exploit this by attacking just one of those tiny tools. The attack remains completely dormant until that specific subnetwork is used, effectively hiding across billions of other benign configurations.\u0022\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThis backdoor attack is nearly guaranteed to work, according to Oygenblik. This blind spot is nearly undetectable with current tools and can impact any autonomous vehicle that runs on AI. It can also be hidden at any stage of development and include billions of scenarios.\u003C\/p\u003E\u003Cp\u003E\u201cWith VillainNet, the attacker forces defenders to find a single needle in a haystack that can be as large as 10 quintillion straws,\u0022 said Oygenblik.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0022Our work is a call to action for the security community. As AI systems become more complex and adaptive, we must develop new defenses capable of addressing these novel, hyper-targeted threats.\u0022\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe hypothetical fix to the problem was to add security measures to the super networks. These networks contain billions of specialized subnetworks that can be activated on the fly, but Oygenblik wanted to see what would happen if he attacked a single subnetwork tool.\u003C\/p\u003E\u003Cp\u003EIn experiments, the VillainNet attack proved highly effective. It achieved a 99% success rate when activated while remaining invisible throughout the AI system.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe research also shows that detecting a VillainNet backdoor would require 66x more computing power and time to verify the AI system is safe. This challenge dramatically expands the search space for attack detection and is not feasible, according to the researchers.\u003C\/p\u003E\u003Cp\u003EThe project was \u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=H1fyPD8vWDo\u0022\u003Epresented\u003C\/a\u003E at the ACM Conference on Computer and Communications Security (CCS) in October 2025. The paper, \u003Ca href=\u0022https:\/\/davidoygenblik.github.io\/pdfs\/VNET.pdf\u0022\u003E\u003Cem\u003EVillainNet: Targeted Poisoning Attacks Against SuperNets Along the Accuracy-Latency Pareto Frontier\u003C\/em\u003E\u003C\/a\u003E, was co-authored by Oygenblik, master\u0027s students \u003Cstrong\u003EAbhinav Vemulapalli \u003C\/strong\u003Eand \u003Cstrong\u003EAnimesh Agrawal\u003C\/strong\u003E, Ph.D. student \u003Cstrong\u003EDebopam Sanyal\u003C\/strong\u003E, Associate Professor \u003Cstrong\u003EAlexey Tumanov\u003C\/strong\u003E, and Associate Professor \u003Cstrong\u003EBrendan Saltaformaggio\u003C\/strong\u003E.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA newly discovered vulnerability could allow cybercriminals to silently hijack the artificial intelligence (AI) systems in self-driving cars, raising concerns about the security of autonomous systems increasingly used on public roads.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;Georgia Tech cybersecurity researchers discovered the vulnerability, dubbed VillainNet, and found it can remain dormant in a self-driving vehicle\u2019s AI system until triggered by specific conditions.\u003C\/p\u003E\u003Cp\u003EOnce triggered, VillainNet is almost certain to succeed, giving attackers control of the targeted vehicle.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A newly discovered vulnerability could allow cybercriminals to silently hijack the artificial intelligence (AI) systems in self-driving cars, raising concerns about the security of autonomous systems increasingly used on public roads."}],"uid":"36253","created_gmt":"2026-01-27 14:51:58","changed_gmt":"2026-02-19 17:34:58","author":"John Popham","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-01-27T00:00:00-05:00","iso_date":"2026-01-27T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679102":{"id":"679102","type":"image","title":"Car-Blind-Spot.jpeg","body":null,"created":"1769525530","gmt_created":"2026-01-27 14:52:10","changed":"1769525530","gmt_changed":"2026-01-27 14:52:10","alt":"A car\u0027s side view mirror with a alert in the center of the mirror. ","file":{"fid":"263221","name":"Car-Blind-Spot.jpeg","image_path":"\/sites\/default\/files\/2026\/01\/27\/Car-Blind-Spot.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/01\/27\/Car-Blind-Spot.jpeg","mime":"image\/jpeg","size":467609,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/01\/27\/Car-Blind-Spot.jpeg?itok=6bYsIEkx"}}},"media_ids":["679102"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"660367","name":"School of Cybersecurity and Privacy"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"145","name":"Engineering"},{"id":"135","name":"Research"}],"keywords":[{"id":"182941","name":"cc-research; ic-cybersecurity; ic-hcc"},{"id":"175307","name":"Brendan Saltaformaggio"},{"id":"365","name":"Research"},{"id":"192863","name":"go-ai"},{"id":"188667","name":"go-"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"145171","name":"Cybersecurity"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpopham3@gatech.edu\u0022\u003EJohn Popham\u003C\/a\u003E\u003Cbr\u003ECommunications Officer II\u0026nbsp;\u003Cbr\u003ESchool of Cybersecurity and Privacy\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}