<node id="686422">
  <nid>686422</nid>
  <type>news</type>
  <uid>
    <user id="36530"><![CDATA[36530]]></user>
  </uid>
  <created>1763068438</created>
  <changed>1763068498</changed>
  <title><![CDATA[Ph.D. Student’s Framework Used to Bolster Nvidia’s Cosmos Predict-2 Model]]></title>
  <body><![CDATA[<p>A new deep learning architectural framework could boost the development and deployment efficiency of autonomous vehicles and humanoid robots. The framework will lower training costs and reduce the amount of real-world data needed for training.</p><p>World foundation models (WFMs) enable physical AI systems to learn and operate within&nbsp;synthetic worlds created by generative artificial intelligence (genAI). For example, these models use predictive capabilities to generate up to 30 seconds of video that accurately reflects the real world.</p><p>The new framework, developed by a Georgia Tech researcher, enhances the processing speed of the neural networks that simulate these real-world environments from text, images, or video inputs.</p><p>The neural networks that make up the architectures of large language models like ChatGPT and visual models like Sora process contextual information using the “attention mechanism.”</p><p>Attention refers to a model’s ability to focus on the most relevant parts of input.</p><p>The Neighborhood Attention Extension (NATTEN) allows models that require GPUs or high-performance computing systems to process information and generate outputs more efficiently.</p><p>Processing speeds can increase by up to 2.6 times, said <a href="https://alihassanijr.com/"><strong>Ali Hassani</strong></a>, a Ph.D. student in the School of Interactive Computing and the creator of NATTEN. Hassani is advised by Associate Professor <a href="https://www.humphreyshi.com/"><strong>Humphrey Shi</strong></a>.</p><p>Hassani is also a research scientist at Nvidia, where he introduced NATTEN to <a href="https://www.nvidia.com/en-us/ai/cosmos/"><strong>Cosmos</strong></a> — a family of WFMs the company uses to train robots, autonomous vehicles, and other physical AI applications.</p><p>“You can map just about anything from a prompt or an image or any combination of frames from an existing video to predict future videos,” Hassani said. 
“Instead of generating words with an LLM, you’re generating a world.</p><p>“Unlike LLMs that generate a single token at a time, these models are compute-heavy. They generate many images — often hundreds of frames at a time — so the models put a lot of work on the GPU. NATTEN lets us decrease some of that work and proportionately accelerate the model.”</p>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2025-11-03T00:00:00-05:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[A new deep learning architectural framework, Neighborhood Attention Extension (NATTEN), is being used by Nvidia to increase the processing speed of its Cosmos Predict-2 model for training autonomous vehicles and humanoid robots.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Georgia Tech Ph.D. student Ali Hassani developed the Neighborhood Attention Extension (NATTEN), a deep learning architectural framework that is being integrated into Nvidia's Cosmos Predict-2 world foundation model. NATTEN enhances the processing speed of neural networks that simulate real-world environments for physical AI systems, which are used to train autonomous vehicles and humanoid robots.</p>]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="678621">
            <nid>678621</nid>
            <type>image</type>
            <title><![CDATA[2X6A3487.jpg]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>262676</fid>
                  <filename><![CDATA[2X6A3487.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/2025/11/13/2X6A3487.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/11/13/2X6A3487.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[Humphrey Shi and Ali Hassani]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>47223</item>
          <item>1188</item>
          <item>50876</item>
      </og_groups>
  <og_groups_both>
          <item>
        <![CDATA[Computer Science/Information Technology and Security]]>
      </item>
          <item>
        <![CDATA[Industry]]>
      </item>
          <item>
        <![CDATA[Robotics]]>
      </item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>153</tid>
        <value><![CDATA[Computer Science/Information Technology and Security]]></value>
      </item>
          <item>
        <tid>194609</tid>
        <value><![CDATA[Industry]]></value>
      </item>
          <item>
        <tid>152</tid>
        <value><![CDATA[Robotics]]></value>
      </item>
      </field_categories>
  <core_research_areas>
          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <og_groups_both>
          <item><![CDATA[College of Computing]]></item>
          <item><![CDATA[Research Horizons]]></item>
          <item><![CDATA[School of Interactive Computing]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>192863</tid>
        <value><![CDATA[go-ai]]></value>
      </item>
          <item>
        <tid>193860</tid>
        <value><![CDATA[Artificial Intelligence]]></value>
      </item>
          <item>
        <tid>194701</tid>
        <value><![CDATA[go-resarchnews]]></value>
      </item>
          <item>
        <tid>9153</tid>
        <value><![CDATA[Research Horizons]]></value>
      </item>
          <item>
        <tid>14549</tid>
        <value><![CDATA[nvidia]]></value>
      </item>
          <item>
        <tid>191138</tid>
        <value><![CDATA[artificial neural networks]]></value>
      </item>
          <item>
        <tid>97281</tid>
        <value><![CDATA[autonomous vehicles]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
