<node id="685732">
  <nid>685732</nid>
  <type>news</type>
  <uid>
    <user id="36756"><![CDATA[36756]]></user>
  </uid>
  <created>1760566894</created>
  <changed>1760566921</changed>
  <title><![CDATA[Teaching at the Speed of AI: Professor Uses Digital Twin to Teach About GenAI]]></title>
  <body><![CDATA[<p>Instructors creating online courses have long faced a tradeoff: use text-based materials that are easy to update, or invest in engaging but time-consuming video formats. As a result, learners often get either flexibility or immersion, but rarely both.</p><p>“In a field that moves as fast as artificial intelligence, it’s important to be able to update material frequently,” says David Joyner, executive director of online education in the College of Computing. “That’s usually a problem because re-recording means going back into the studio and trying to make the new content fit in with the old.”</p><p>Joyner’s latest massive open online course (MOOC), <em>Foundations of Generative AI</em>, uses artificial intelligence to solve that challenge. Images for the course are created with Sora and DALL·E 3, while early drafts of quizzes were generated by GPT-5. The course also uses Grady, an AI autograder that provides feedback on open-ended essays.</p><p>The most striking innovation is DAI-vid (pronounced day-eye-vid), a video avatar of Joyner that leads the instruction. To create it, Joyner uploaded a five-minute clip of himself to the generative AI platform HeyGen, along with course scripts and other inputs. The result is a lifelike digital instructor that lets Joyner update his lessons far more easily than studio re-recording would allow.</p><p>“With AI, we can just modify the text and have the updated video pop right out,” Joyner says. “It takes minutes at my desk instead of an hour in the studio.”</p><p>This approach allows Joyner to keep course materials current and produce new videos entirely on his own. “It’s strange, but in a lot of ways this course feels more like it’s mine than the ones where I’m on camera,” he says. “Because AI lets me handle every part of production myself, the finished product feels like my complete work.”</p><p>Joyner sees this experiment as an example of AI’s potential to enhance human talent rather than replace it. “Give me AI and I can do five times more than I could alone,” he says. “But give it to our professional video producers, and they will still far outpace me, because expertise matters most. AI just amplifies it.”</p><p><em>Foundations of Generative AI</em> is now available on edX, and the same material is also part of the OMSCS course CS7637: Knowledge-Based AI.</p>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2025-10-15T00:00:00-04:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Georgia Tech’s David Joyner is using generative AI to reinvent online teaching, blending human expertise with AI tools to create courses that evolve as fast as the technology they explore.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Georgia Tech’s David Joyner built a digital twin to teach his new <em>Foundations of Generative AI</em> course. The lifelike avatar lets him update lessons in minutes—showing how AI can amplify, not replace, human creativity and expertise.</p>]]></value>
    </item>
  </field_summary>
  <field_media>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>660375</item>
      </og_groups>
  <og_groups_both>
          <item>
        <![CDATA[Artificial Intelligence]]>
      </item>
          <item>
        <![CDATA[Computer Science/Information Technology and Security]]>
      </item>
          <item>
        <![CDATA[Education]]>
      </item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>194606</tid>
        <value><![CDATA[Artificial Intelligence]]></value>
      </item>
          <item>
        <tid>153</tid>
        <value><![CDATA[Computer Science/Information Technology and Security]]></value>
      </item>
          <item>
        <tid>42911</tid>
        <value><![CDATA[Education]]></value>
      </item>
      </field_categories>
  <core_research_areas>
          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <og_groups_both>
          <item><![CDATA[Lifetime Learning]]></item>
      </og_groups_both>
  <field_keywords>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
