<node id="688516">
  <nid>688516</nid>
  <type>news</type>
  <uid>
    <user id="36253"><![CDATA[36253]]></user>
  </uid>
  <created>1772040800</created>
  <changed>1774011162</changed>
  <title><![CDATA[Is This Your AI? Researchers Crack AI Blackbox]]></title>
  <body><![CDATA[<div><div><p>Artificial intelligence (AI) systems power everything from chatbots to security cameras, yet many of the most advanced models operate as “black boxes.” Companies can use them, but outsiders can’t see how they were built, where they came from, or whether they contain hidden flaws.</p><p>This lack of transparency creates real risks. A model could contain security vulnerabilities or hidden backdoors. It could also be a lightly modified version of an open-source system — repackaged in violation of its license — with no easy way to prove it.</p><p>Researchers at the Georgia Institute of Technology have developed a new framework, ZEN, to help solve this problem. The tool can recover a model’s unique “fingerprint” directly from its memory, allowing experts to trace its origins and reconstruct how it was assembled.</p><p>“Analyzing a proprietary AI model without identifying where it came from and how it is constructed is like trying to fix a car engine with the hood welded shut,” said <a href="https://davidoygenblik.github.io/"><strong>David Oygenblik</strong></a>, a Ph.D. student at Georgia Tech and the study’s lead author.</p><p>“ZEN not only X-rays the engine but also provides the complete wiring diagram.”</p><p>ZEN works by taking a snapshot of a running AI system and extracting information about both its mathematical structure and the code that defines it. 
It compares that fingerprint against a database of known open-source models to determine the system’s origin.</p><p>If it finds a match, ZEN identifies the exact changes and generates software patches that allow investigators to recreate a working replica of the proprietary model for testing.</p><p>That capability has major implications for both security and intellectual property protection.</p><p>“With ZEN, a security analyst can finally test a black-box model for hidden backdoors, and a company can gather concrete evidence to prove its software license was infringed,” Oygenblik said.</p><p>To evaluate the system, the research team tested ZEN on 21 state-of-the-art AI models, including Llama 3, YOLOv10, and other well-known systems.</p><p>ZEN correctly traced every customized model back to its original open-source foundation — achieving 100% attribution accuracy. Even when models had been heavily modified — differing by more than 83% from their original versions — ZEN successfully identified the changes and enabled full reconstruction for security testing.</p><p>The researchers will present their findings at the 2026 <a href="https://www.ndss-symposium.org/">Network and Distributed System Security (NDSS) Symposium</a>. The paper, <a href="https://www.ndss-symposium.org/ndss-paper/achieving-zen-combining-mathematical-and-programmatic-deep-learning-model-representations-for-attribution-and-reuse/"><em>Achieving Zen: Combining Mathematical and Programmatic Deep Learning Model Representations for Attribution and Reuse</em></a>, was authored by Oygenblik, master’s student <strong>Dinko Dermendzhiev</strong>, Ph.D. students <strong>Filippos Sofias</strong>, <strong>Mingxuan Yao</strong>, <strong>Haichuan Xu</strong>, and <strong>Runze Zhang</strong>, postdoctoral scholars <strong>Jeman Park</strong> and <strong>Amit Kumar Sikder</strong>, as well as Associate Professor <strong>Brendan Saltaformaggio</strong>.</p></div></div>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2026-02-25T00:00:00-05:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Researchers have developed a technique to identify the origins of proprietary “black-box” AI models, even when their internal structure and training data are hidden.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<div><div><div><div><div><div><p>Researchers have developed a technique to identify the origins of proprietary “black-box” AI models, even when their internal structure and training data are hidden. Because many commercial AI systems cannot be externally inspected, it is difficult to detect security vulnerabilities, intellectual property theft, licensing violations, or trace a model’s lineage. The new approach enables researchers to attribute models, determine whether one was derived from another, and identify potential misuse of protected data. By improving transparency and enabling verification of model provenance, the work strengthens accountability and trust in AI systems.</p></div></div></div></div></div></div>]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="679429">
            <nid>679429</nid>
            <type>image</type>
            <title><![CDATA[Is-this-your-AI.jpg]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>263592</fid>
                  <filename><![CDATA[Is-this-your-AI.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/2026/02/25/Is-this-your-AI.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2026/02/25/Is-this-your-AI.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[A graphic showing an AI model in an outstretched hand. ]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[jpopham3@gatech.edu]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>John Popham</p><p>Communications Officer II</p><p>School of Cybersecurity and Privacy</p>]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <og_groups>
          <item>47223</item>
          <item>1188</item>
          <item>660367</item>
      </og_groups>
  <field_categories>
          <item>
        <tid>153</tid>
        <value><![CDATA[Computer Science/Information Technology and Security]]></value>
      </item>
          <item>
        <tid>135</tid>
        <value><![CDATA[Research]]></value>
      </item>
      </field_categories>
  <core_research_areas>
          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>
          <term tid="145171"><![CDATA[Cybersecurity]]></term>
      </core_research_areas>
  <field_news_room_topics>
          <item>
        <tid>71881</tid>
        <value><![CDATA[Science and Technology]]></value>
      </item>
      </field_news_room_topics>
  <links_related>
          <link>
      <url>https://www.ndss-symposium.org/wp-content/uploads/2026-s1628-paper.pdf</url>
      <title></title>
      </link>
      </links_related>
  <files>
      </files>
  <og_groups_both>
          <item><![CDATA[College of Computing]]></item>
          <item><![CDATA[Research Horizons]]></item>
          <item><![CDATA[School of Cybersecurity and Privacy]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>2835</tid>
        <value><![CDATA[ai]]></value>
      </item>
          <item>
        <tid>193860</tid>
        <value><![CDATA[Artificial Intelligence]]></value>
      </item>
          <item>
        <tid>192863</tid>
        <value><![CDATA[go-ai]]></value>
      </item>
          <item>
        <tid>365</tid>
        <value><![CDATA[Research]]></value>
      </item>
          <item>
        <tid>187915</tid>
        <value><![CDATA[go-researchnews]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
