{"688516":{"#nid":"688516","#data":{"type":"news","title":" Is This Your AI? Researchers Crack AI Blackbox","body":[{"value":"\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EArtificial intelligence (AI) systems power everything from chatbots to security cameras, yet many of the most advanced models operate as \u201cblack boxes.\u201d Companies can use them, but outsiders can\u2019t see how they were built, where they came from, or whether they contain hidden flaws.\u003C\/p\u003E\u003Cp\u003EThis lack of transparency creates real risks. A model could contain security vulnerabilities or hidden backdoors. It could also be a lightly modified version of an open-source system \u2014 repackaged in violation of its license \u2014 with no easy way to prove it.\u003C\/p\u003E\u003Cp\u003EResearchers at the Georgia Institute of Technology have developed a new framework, ZEN, to help solve this problem. The tool can recover a model\u2019s unique \u201cfingerprint\u201d directly from its memory, allowing experts to trace its origins and reconstruct how it was assembled.\u003C\/p\u003E\u003Cp\u003E\u201cAnalyzing a proprietary AI model without identifying where it came from and how it is constructed is like trying to fix a car engine with the hood welded shut,\u201d said \u003Ca href=\u0022https:\/\/davidoygenblik.github.io\/\u0022\u003E\u003Cstrong\u003EDavid Oygenblik\u003C\/strong\u003E\u003C\/a\u003E, a Ph.D. student at Georgia Tech and the study\u2019s lead author.\u003C\/p\u003E\u003Cp\u003E\u201cZEN not only X-rays the engine but also provides the complete wiring diagram.\u201d\u003C\/p\u003E\u003Cp\u003EZEN works by taking a snapshot of a running AI system and extracting information about both its mathematical structure and the code that defines it. 
It compares that fingerprint against a database of known open-source models to determine the system\u2019s origin.\u003C\/p\u003E\u003Cp\u003EIf it finds a match, ZEN identifies the exact changes and generates software patches that allow investigators to recreate a working replica of the proprietary model for testing.\u003C\/p\u003E\u003Cp\u003EThat capability has major implications for both security and intellectual property protection.\u003C\/p\u003E\u003Cp\u003E\u201cWith ZEN, a security analyst can finally test a black-box model for hidden backdoors, and a company can gather concrete evidence to prove its software license was infringed,\u201d Oygenblik said.\u003C\/p\u003E\u003Cp\u003ETo evaluate the system, the research team tested ZEN on 21 state-of-the-art AI models, including Llama 3, YOLOv10, and other well-known systems.\u003C\/p\u003E\u003Cp\u003EZEN correctly traced every customized model back to its original open-source foundation \u2014 achieving 100% attribution accuracy. Even when models had been heavily modified \u2014 differing by more than 83% from their original versions \u2014 ZEN successfully identified the changes and enabled full reconstruction for security testing.\u003C\/p\u003E\u003Cp\u003EThe researchers will present their findings at the 2026 \u003Ca href=\u0022https:\/\/www.ndss-symposium.org\/\u0022\u003ENetwork and Distributed System Security (NDSS) Symposium\u003C\/a\u003E. The paper, \u003Ca href=\u0022https:\/\/www.ndss-symposium.org\/ndss-paper\/achieving-zen-combining-mathematical-and-programmatic-deep-learning-model-representations-for-attribution-and-reuse\/\u0022\u003E\u003Cem\u003EAchieving Zen: Combining Mathematical and Programmatic Deep Learning Model Representations for Attribution and Reuse\u003C\/em\u003E\u003C\/a\u003E, was authored by Oygenblik, master\u2019s student \u003Cstrong\u003EDinko Dermendzhiev\u003C\/strong\u003E, Ph.D. 
students \u003Cstrong\u003EFilippos Sofias\u003C\/strong\u003E, \u003Cstrong\u003EMingxuan Yao\u003C\/strong\u003E, \u003Cstrong\u003EHaichuan Xu\u003C\/strong\u003E, and \u003Cstrong\u003ERunze Zhang\u003C\/strong\u003E, postdoctoral scholars \u003Cstrong\u003EJeman Park\u003C\/strong\u003E and \u003Cstrong\u003EAmit Kumar Sikder\u003C\/strong\u003E, as well as Associate Professor \u003Cstrong\u003EBrendan Saltaformaggio\u003C\/strong\u003E.\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EResearchers have developed a technique to identify the origins of proprietary \u201cblack-box\u201d AI models, even when their internal structure and training data are hidden. Because many commercial AI systems cannot be externally inspected, it is difficult to detect security vulnerabilities, intellectual property theft, and licensing violations, or to trace a model\u2019s lineage. The new approach enables researchers to attribute models, determine whether one was derived from another, and identify potential misuse of protected data. 
By improving transparency and enabling verification of model provenance, the work strengthens accountability and trust in AI systems.\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Researchers have developed a technique to identify the origins of proprietary \u201cblack-box\u201d AI models, even when their internal structure and training data are hidden."}],"uid":"36253","created_gmt":"2026-02-25 17:33:20","changed_gmt":"2026-03-20 12:52:42","author":"John Popham","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-02-25T00:00:00-05:00","iso_date":"2026-02-25T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679429":{"id":"679429","type":"image","title":"Is-this-your-AI.jpg","body":null,"created":"1772040810","gmt_created":"2026-02-25 17:33:30","changed":"1772040810","gmt_changed":"2026-02-25 17:33:30","alt":"A graphic showing an AI model in an outstretched hand. 
","file":{"fid":"263592","name":"Is-this-your-AI.jpg","image_path":"\/sites\/default\/files\/2026\/02\/25\/Is-this-your-AI.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/02\/25\/Is-this-your-AI.jpg","mime":"image\/jpeg","size":1346270,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/02\/25\/Is-this-your-AI.jpg?itok=ehbGALRW"}}},"media_ids":["679429"],"related_links":[{"url":"https:\/\/www.ndss-symposium.org\/wp-content\/uploads\/2026-s1628-paper.pdf","title":"Read the Paper"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"660367","name":"School of Cybersecurity and Privacy"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"2835","name":"ai"},{"id":"193860","name":"Artifical Intelligence"},{"id":"192863","name":"go-ai"},{"id":"365","name":"Research"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"145171","name":"Cybersecurity"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJohn Popham\u003C\/p\u003E\u003Cp\u003ECommunications Officer II\u0026nbsp;School of Cybersecurity and Privacy\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"email":["jpopham3@gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}