<nodes> <node id="681971">  <title><![CDATA[Over the Rainbow and Into 15K: Alumni Help Bring Oz to Life at the Las Vegas Sphere]]></title>  <uid>32045</uid>  <body><![CDATA[<p>For anyone who has only seen the movie on television, <em>The Wizard of Oz</em> is an incredible movie theater experience. Its larger-than-life characters, vivid colors, and memorable soundtrack were made for the big screen.</p><p>Now, a Georgia Tech professor and several alumni are helping bring the 1939 classic Hollywood film to what will likely be its largest screen ever: the Las Vegas Sphere's 160,000-square-foot interior screen.</p><p><a href="https://www.cc.gatech.edu/news/lions-tigers-and-tech-oh-my-alumni-help-dorothy-debut-ultra-hd-sphere">Read more to discover their pivotal role and how generative AI is used to "reconceptualize" the film for the August 28 premiere of <em>The Wizard of Oz at Sphere</em></a>.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1745346336</created>  <gmt_created>2025-04-22 18:25:36</gmt_created>  <changed>1745591941</changed>  <gmt_changed>2025-04-25 14:39:01</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Debuting in August, "The Wizard of Oz at Sphere" has a solid connection to Georgia Tech's AI community.]]></teaser>  <type>news</type>  <sentence><![CDATA[Debuting in August, "The Wizard of Oz at Sphere" has a solid connection to Georgia Tech's AI community.]]></sentence>  <summary><![CDATA[<p>Debuting in August, "The Wizard of Oz at Sphere" has a solid connection to Georgia Tech's AI community. 
A Georgia Tech professor and several alumni are helping bring the 1939 classic Hollywood film to what will likely be its largest screen ever: the Las Vegas Sphere's 160,000-square-foot interior screen.</p>]]></summary>  <dateline>2025-04-22T00:00:00-04:00</dateline>  <iso_dateline>2025-04-22T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-04-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Ben Snedeker</p><p>Communications Manager</p><p>Georgia Tech College of Computing</p><p>albert.snedeker@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>676907</item>      </media>  <hg_media>          <item>          <nid>676907</nid>          <type>image</type>          <title><![CDATA[The Wizard of Oz at Sphere courtesy of Google & Sphere]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Cloud_WoZ_SS.width-1300.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/04/22/Cloud_WoZ_SS.width-1300.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/04/22/Cloud_WoZ_SS.width-1300.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/04/22/Cloud_WoZ_SS.width-1300.jpg?itok=t2VSxbkT]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA['The Wizard of Oz at Sphere,' image courtesy of Google & Sphere]]></image_alt>                    <created>1745346361</created>          <gmt_created>2025-04-22 18:26:01</gmt_created>          <changed>1745346361</changed>          <gmt_changed>2025-04-22 18:26:01</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group 
id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category tid="130"><![CDATA[Alumni]]></category>      </categories>  <news_terms>          <term tid="130"><![CDATA[Alumni]]></term>      </news_terms>  <keywords>          <keyword tid="506"><![CDATA[alumni]]></keyword>          <keyword tid="596"><![CDATA[Alumni Association]]></keyword>          <keyword tid="10199"><![CDATA[Daily Digest]]></keyword>          <keyword tid="181991"><![CDATA[Georgia Tech News Center]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="192390"><![CDATA[generative AI]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71901"><![CDATA[Society and Culture]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="682001">  <title><![CDATA[Professor's CNBC Course Highlights College’s Leadership in Expanding AI Literacy]]></title>  <uid>32045</uid>  <body><![CDATA[<p>If you’re worried about artificial intelligence (AI) taking your job, Georgia Tech’s <strong>Mark</strong> <strong>Riedl</strong> says that probably won’t happen. 
However, losing your job to someone who knows how to leverage AI tools in the workplace is something to be concerned about.</p><p>To help people beyond campus understand what AI tools are available and how to use them effectively, Riedl recently co-taught an online course by CNBC Make It titled <em>How to Use AI to Be More Successful at Work</em>.</p><p>“The running joke right now is that AI will not replace people, but people who use AI will replace people who do not use AI,” said Riedl, professor in the <a href="https://ic.gatech.edu/"><strong>School of Interactive Computing</strong></a>.&nbsp;</p><p>The 90-minute course offers tips and hacks for:</p><ul><li>Workers new to AI tools who want to grow professionally</li><li>Small business owners overwhelmed by administrative tasks, marketing, industry research, and data analysis</li><li>Job seekers looking to stand out from the crowd</li><li>People seeking to improve their work-life balance</li></ul><p>Riedl, whose research focuses on human-centered and explainable AI, taught sections of the course on the foundations of AI. 
One of the biggest sections of the course covers large language models (LLMs).&nbsp;</p><p>“When large language models were put forward as chatbots, this was the first time that any person out in the world could naturally interact with an AI system without having to learn to program or write code,” Riedl said.</p><p>For less than $100, the on-demand course includes a detailed workbook that helps users consider each aspect of their jobs and daily lives and how AI can improve them.</p><p><strong>The Big Picture</strong></p><p>CNBC’s use of Riedl’s expertise is one of many examples of how College of Computing faculty are leading the way in teaching AI literacy.</p><p><strong>David</strong> <strong>Joyner</strong>, executive director of online education, said Georgia Tech’s <a href="https://omscs.gatech.edu/"><strong>Online Master of Science in Computer Science (OMSCS)</strong></a> program continues to innovate with AI literacy in mind.</p><p><a href="https://www.cc.gatech.edu/news/experts-say-life-long-learning-must-keep-pace-generative-ai"><strong>[RELATED: Experts Say Life-long Learning is a Must to Keep Pace with Generative AI]</strong></a></p><p>He said companies and employees alike are learning to navigate AI. Companies are considering AI from a general perspective, focusing on how it can make their businesses more efficient, while employees are using it to become more versatile and valuable workers.</p><p>“It’s an interesting dichotomy,” Joyner said. “If companies are trying to figure out how to operate more efficiently, and you have people using these tools to be more productive, at what point does the company need to prioritize using these tools instead of letting their use be organic? 
We’re still in this experimental phase.”</p><p>In a conversation with former College of Computing interim dean <strong>Alex</strong> <strong>Orso</strong>, Joyner discusses how OMSCS is staying at the forefront in equipping students with the latest technology skills they need to be successful in a fluctuating industry.</p><p>“We must figure out what generative AI can do well and properly leverage it so we’re not cutting out the foundation of a building and replacing it with sticks,” Joyner said.</p><p>The <a href="https://youtu.be/pVG8d1JkQj4?feature=shared"><strong>complete conversation between Joyner and Orso is available on the College's YouTube</strong></a> channel.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1745500373</created>  <gmt_created>2025-04-24 13:12:53</gmt_created>  <changed>1745591934</changed>  <gmt_changed>2025-04-25 14:38:54</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech Professor Mark Riedl is helping people learn new skills to stay competitive in the workplace.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech Professor Mark Riedl is helping people learn new skills to stay competitive in the workplace.]]></sentence>  <summary><![CDATA[<p>Georgia Tech Professor Mark Riedl is helping people learn new workplace skills to stay competitive.</p>]]></summary>  <dateline>2025-04-24T00:00:00-04:00</dateline>  <iso_dateline>2025-04-24T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-04-24 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Nathan Deen, Communications Officer</p><p>Georgia Tech School of Interactive Computing</p><p>nathan.deen@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>676921</item>      </media>  <hg_media>          <item>          
<nid>676921</nid>          <type>image</type>          <title><![CDATA[Interactive Computing Professor Mark Riedl co-organized the 2024 Summit on Responsible Computing, AI, and Society, where AI literacy was a key topic. Photo by Terence Rushon/College of Computing]]></title>          <body><![CDATA[<p>Interactive Computing Professor Mark Riedl co-organized the 2024 Summit on Responsible Computing, AI, and Society, where AI literacy was a key topic. Photo by Terence Rushon/College of Computing</p>]]></body>                      <image_name><![CDATA[Summit-on-Responsible-Computing--AI--and-Society_86A9631-Enhanced-NR.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/04/24/Summit-on-Responsible-Computing--AI--and-Society_86A9631-Enhanced-NR.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/04/24/Summit-on-Responsible-Computing--AI--and-Society_86A9631-Enhanced-NR.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/04/24/Summit-on-Responsible-Computing--AI--and-Society_86A9631-Enhanced-NR.jpg?itok=lOm6XCZ3]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Interactive Computing Professor Mark Riedl co-organized the 2024 Summit on Responsible Computing, AI, and Society, where AI literacy was a key topic. 
Photo by Terence Rushon/College of Computing]]></image_alt>                    <created>1745500775</created>          <gmt_created>2025-04-24 13:19:35</gmt_created>          <changed>1745500775</changed>          <gmt_changed>2025-04-24 13:19:35</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category tid="132"><![CDATA[Institute Leadership]]></category>      </categories>  <news_terms>          <term tid="132"><![CDATA[Institute Leadership]]></term>      </news_terms>  <keywords>          <keyword tid="10199"><![CDATA[Daily Digest]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>          <keyword tid="181991"><![CDATA[Georgia Tech News Center]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="677243">  <title><![CDATA[SKYSCENES Leverages New Algorithms to Improve Safety for Autonomous Flying Vehicles]]></title>  <uid>32045</uid>  <body><![CDATA[<p>An artificial intelligence (AI) training dataset developed at Georgia Tech is <a href="https://www.cc.gatech.edu/news/skyscenes-dataset-could-lead-safe-reliable-autonomous-flying-vehicles">setting a new standard for the safety and reliability of autonomous drones and flying vehicles</a>.</p><p>SKYSCENES compiles more than 33,000 annotated 
computer-generated aerial images. With applications in urban planning, disaster response, and autonomous navigation, the dataset trains computer vision models to better detect and identify objects in aerial images, which can be challenging for existing AI models.</p><p><a href="https://www.cc.gatech.edu/news/skyscenes-dataset-could-lead-safe-reliable-autonomous-flying-vehicles">Read the full story</a> to learn how School of Interactive Computing Ph.D. student <strong>Sahil</strong> <strong>Khose</strong> and Assistant Professor <strong>Judy</strong> <strong>Hoffman</strong> developed this groundbreaking dataset to pave the way for the future of autonomous aviation.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1727881504</created>  <gmt_created>2024-10-02 15:05:04</gmt_created>  <changed>1729101968</changed>  <gmt_changed>2024-10-16 18:06:08</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[New research from Georgia Tech’s School of Interactive Computing is paving the way for the future of autonomous aviation.]]></teaser>  <type>news</type>  <sentence><![CDATA[New research from Georgia Tech’s School of Interactive Computing is paving the way for the future of autonomous aviation.]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers have created a new benchmark dataset of computer-generated aerial images. 
Judy Hoffman, an assistant professor at Georgia Tech’s School of Interactive Computing, worked with students to create SKYSCENES, a dataset containing over 33,000 computer-generated aerial images of cities.</p>]]></summary>  <dateline>2024-10-02T00:00:00-04:00</dateline>  <iso_dateline>2024-10-02T00:00:00-04:00</iso_dateline>  <gmt_dateline>2024-10-02 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Nathan Deen, Communications Officer</p><p>Georgia Tech School of Interactive Computing</p><p><a href="mailto:nathan.deen@cc.gatech.edu">nathan.deen@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>675195</item>      </media>  <hg_media>          <item>          <nid>675195</nid>          <type>image</type>          <title><![CDATA[Georgia Tech School of Interactive Computing Ph.D. student Sahil Khose]]></title>          <body><![CDATA[<p>Ph.D. student Sahil Khose worked with Assistant Professor Judy Hoffman to curate SKYSCENES, a new benchmark dataset that provides well-annotated aerial images of cities that computer vision algorithms can use to operate autonomous flying vehicles. Photos by Kevin Beasley/College of Computing.</p>]]></body>                      <image_name><![CDATA[2X6A9656 (1).jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2024/10/02/2X6A9656%20%281%29.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2024/10/02/2X6A9656%20%281%29.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2024/10/02/2X6A9656%2520%25281%2529.jpg?itok=cSsB9SI0]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech School of Interactive Computing Ph.D. 
student Sahil Khose]]></image_alt>                    <created>1727881514</created>          <gmt_created>2024-10-02 15:05:14</gmt_created>          <changed>1727881514</changed>          <gmt_changed>2024-10-02 15:05:14</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.cc.gatech.edu/news/skyscenes-dataset-could-lead-safe-reliable-autonomous-flying-vehicles]]></url>        <title><![CDATA[SKYSCENES Dataset Could Lead to Safe, Reliable Autonomous Flying Vehicles]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="135"><![CDATA[Research]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="135"><![CDATA[Research]]></term>      </news_terms>  <keywords>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="10199"><![CDATA[Daily Digest]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="668663">  <title><![CDATA[Students Earn Prestigious Fellowships Underscoring Institute’s Leadership in AI]]></title>  <uid>32045</uid>  
<body><![CDATA[<p>Artificial intelligence (AI) research by two Georgia Institute of Technology students has caught the attention of one of the world's leading financial services companies.&nbsp;</p><p>Gaurav Verma and Yuxi Wu are recipients of 2023&nbsp;<a href="https://www.jpmorgan.com/technology/artificial-intelligence/research-awards">J.P. Morgan AI Research Ph.D. Fellowship Awards</a>. They are among 13 scholars being honored this year by J.P. Morgan Chase &amp; Co. for AI research projects taking on real-world challenges.</p><p>"Our goal is to recognize and enable the next generation of leading AI researchers. We want to create an environment where researchers can inspire change and make a lasting impact in our communities and across our industry," said Manuela Veloso, Ph.D., head of AI Research, J.P. Morgan Chase &amp; Co.</p><p><a href="https://www.jpmorgan.com/technology/artificial-intelligence/research-awards/phd-fellowship-2023/gaurav-verma">Verma</a>&nbsp;is pursuing his Ph.D. in the&nbsp;<a href="https://cse.gatech.edu/">School of Computational Science and Engineering</a>. Working with his advisor, Assistant Professor Srijan Kumar, Verma is developing multimodal learning and natural language processing approaches that aim to make human-AI interactions safer, more equitable, and more supportive of well-being.</p><p><a href="https://www.jpmorgan.com/technology/artificial-intelligence/research-awards/phd-fellowship-2023/yuxi-wu">Wu</a>&nbsp;is a Ph.D. candidate in the&nbsp;<a href="https://ic.gatech.edu/">School of Interactive Computing</a>. Empowering people to address their privacy concerns is at the core of her research. Wu examines how cross-sector, collective action systems could better support end-user privacy. 
Professor Keith Edwards and Adjunct Assistant Professor Sauvik Das advise Wu.</p><p>"It's inspiring to see our students and their work being honored with these prestigious fellowships," said Irfan Essa, computer science professor and director of the&nbsp;<a href="https://ml.gatech.edu/">Machine Learning Center at Georgia Tech</a>.</p><p>"Georgia Tech continues to lead in AI education and research. These fellowships for Gaurav and Yuxi are evidence that we're continuing to move in the right direction."</p><p>Verma and Wu are part of a spectrum of AI research spanning Georgia Tech. To unite this broad community and ensure it continues moving in the right direction, the Institute recently established&nbsp;<a href="https://news.gatech.edu/news/2023/06/06/ai-hub-georgia-tech-unite-campus-artificial-intelligence-rd-and-commercialization">AI Hub at Georgia Tech</a>.</p><p>"AI has a deep history at Georgia Tech, and we continue to serve as leaders in many areas of AI research and education," said Essa, interim co-director of AI Hub at Georgia Tech.</p><p>"Bringing all areas of AI under one umbrella, AI Hub at Georgia Tech will provide structure and governance as the Institute continues to lead and innovate in the burgeoning discipline of AI."</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1690916359</created>  <gmt_created>2023-08-01 18:59:19</gmt_created>  <changed>1715611733</changed>  <gmt_changed>2024-05-13 14:48:53</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Two Georgia Tech Ph.D. students are being recognized for their innovative research taking on real-world problems.]]></teaser>  <type>news</type>  <sentence><![CDATA[Two Georgia Tech Ph.D. students are being recognized for their innovative research taking on real-world problems.]]></sentence>  <summary><![CDATA[<p>Two Georgia Tech Ph.D. students are being recognized for their innovative research with J.P. 
Morgan AI Research Fellowships.</p>]]></summary>  <dateline>2023-08-01T00:00:00-04:00</dateline>  <iso_dateline>2023-08-01T00:00:00-04:00</iso_dateline>  <gmt_dateline>2023-08-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Ben Snedeker, Communications Manager II</p><p>Georgia Tech College of Computing</p><p><a href="mailto:albert.snedeker@cc.gatech.edu">albert.snedeker@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>671294</item>      </media>  <hg_media>          <item>          <nid>671294</nid>          <type>image</type>          <title><![CDATA[Georgia Tech Ph.D. students Gaurav Verma and Yuxi Wu ]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2023-08-01 at 10.29.55 AM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/2023/08/01/Screen%20Shot%202023-08-01%20at%2010.29.55%20AM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2023/08/01/Screen%20Shot%202023-08-01%20at%2010.29.55%20AM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2023/08/01/Screen%2520Shot%25202023-08-01%2520at%252010.29.55%2520AM.png?itok=k5wJ-wBn]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Georgia Tech Ph.D. 
students Gaurav Verma and Yuxi Wu ]]></image_alt>                    <created>1690916372</created>          <gmt_created>2023-08-01 18:59:32</gmt_created>          <changed>1690916372</changed>          <gmt_changed>2023-08-01 18:59:32</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="132"><![CDATA[Institute Leadership]]></category>          <category tid="8862"><![CDATA[Student Research]]></category>      </categories>  <news_terms>          <term tid="132"><![CDATA[Institute Leadership]]></term>          <term tid="8862"><![CDATA[Student Research]]></term>      </news_terms>  <keywords>          <keyword tid="10199"><![CDATA[Daily Digest]]></keyword>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71871"><![CDATA[Campus and Community]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="667967">  <title><![CDATA[Breakthrough Scaling Approach Cuts Cost, Improves Accuracy of Training DNN Models]]></title>  <uid>32045</uid>  <body><![CDATA[<p>A new machine-learning (ML) framework for clients with varied computing resources is the first of its kind to successfully scale deep neural network (DNN) models like those used to detect and recognize objects in still and video images.</p><p>The ability to uniformly scale the width (number of neurons) and depth (number of neural layers) of a DNN model means that 
remote clients can equitably participate in distributed, real-time training regardless of their computing resources. Resulting benefits include improved accuracy, increased efficiency, and reduced computational costs.</p><p>Developed by Georgia Tech researchers, the ScaleFL framework advances federated learning, which is an ML approach inspired by the personal data scandals of the past decade.</p><p>Federated learning (FL), a term coined by Google in 2016, enables a DNN model to be trained across decentralized devices or servers. Because data aren’t centralized with this approach, threats to data privacy and security are minimized.</p><p>The FL process begins with sending the initial parameters of a global DNN model to smartphones, IoT devices, edge servers, or other participating devices. These edge clients train their local version of the model using their unique data. All local results are aggregated and used to update the global model.</p><p>The process is repeated until the new model is fully trained and meets its design specifications.</p><p>Federated learning works best when remote clients involved in training a new DNN model have comparable computational power and bandwidth. But training can bog down if some participating remote-client devices have limited or fluctuating computing resources.</p><p>“In most real-life applications computational resources tend to differ significantly across clients. This heterogeneity prevents clients with insufficient resources from participating in certain FL tasks that require large models,” said School of Computer Science (CS) Ph.D. 
student Fatih Ilhan.</p><p>“Federated learning should promote equitable AI practice by supporting a resource-adaptive learning framework that can scale to heterogeneous clients with limited capacity,” said Ilhan, who is advised by Professor Ling Liu.</p><p>Ilhan is the lead author of&nbsp;<a href="https://openaccess.thecvf.com/content/CVPR2023/papers/Ilhan_ScaleFL_Resource-Adaptive_Federated_Learning_With_Heterogeneous_Clients_CVPR_2023_paper.pdf"><em>ScaleFL: Resource-Adaptive Federated Learning with Heterogeneous Clients</em></a>, which he is presenting at the <a href="https://cvpr2023.thecvf.com/">2023 Conference on Computer Vision and Pattern Recognition</a>. CVPR 23 is set for June 18-22 in Vancouver, Canada.</p><p>Creating a framework that can adaptively scale the global DNN model based on a remote client’s computing resources is no easy feat. Ilhan says the balance between a model’s basic and complex feature extraction capabilities can be easily thrown out of whack when manipulating the number of neurons or the number of neuron layers of a DNN model.</p><p>“Since a deeper model is more capable of extracting higher order, complex features while a wider model has access to a finer resolution of lower-order, basic features, performing model size reduction across one dimension causes unbalance in terms of the learning capabilities of the resulting model,” said Ilhan.</p><p>The team overcomes these challenges in part by incorporating early exit classifiers into ScaleFL.</p><p>These ML-based tools are designed to optimize accuracy and efficiency by introducing intermediate decision points in the classification process. This capability enables a model to complete an inference task as soon as it is confident in its prediction, without having to process the whole model.</p><p>“ScaleFL injects these classifiers to the global model at certain layers based on the model architecture and computational constraints at each complexity level. 
This enables forming low-cost local models by keeping the layers up to the corresponding exit,” said Ilhan.</p><p>“Two-dimensional scaling by splitting the model along depth and width dimensions yields uniformly scaled, efficient local models for resource-constrained clients. As a result, not only does the global model achieve better performance compared to baseline FL approaches and existing algorithms, but local models at different complexity levels also perform significantly better for clients that are resource-constrained at inference time.”</p><p>The exit classifiers that help balance a model’s basic and complex features also play into the second part of ScaleFL’s secret sauce, self-distillation.</p><p>Self-distillation is a form of knowledge distillation, which has been used to transfer knowledge from a ‘teacher’ model to a smaller ‘student’ model. ScaleFL applies this process within the same network by comparing early predictions made by the exit classifiers (students) and the final predictions of the last exit (teacher) of local models during optimization. This technique prevents isolation and improves the knowledge transfer among subnetworks of different levels in ScaleFL.</p><p>Ilhan and his collaborators extensively tested ScaleFL on three image classification datasets and two natural language processing datasets.</p><p>“Our experiments show that ScaleFL outperforms existing representative heterogeneous federated learning approaches. 
In local model evaluations, we were able to reduce latency twofold and model size fourfold, all while keeping the performance loss below 2%,” said Ilhan.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1685671538</created>  <gmt_created>2023-06-02 02:05:38</gmt_created>  <changed>1689185480</changed>  <gmt_changed>2023-07-12 18:11:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new machine learning framework promotes equitable AI practice while advancing a popular distributed model training approach.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new machine learning framework promotes equitable AI practice while advancing a popular distributed model training approach.]]></sentence>  <summary><![CDATA[<p>School of Computer Science researchers have developed a new framework that advances federated learning, a distributed, real-time approach for training deep neural network models. The new framework enables remote clients to equitably participate in training regardless of their computing resources.</p>]]></summary>  <dateline>2023-06-02T00:00:00-04:00</dateline>  <iso_dateline>2023-06-02T00:00:00-04:00</iso_dateline>  <gmt_dateline>2023-06-02 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[albert.snedeker@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Ben Snedeker, Communications Manager II<br />Georgia Tech<br />College of Computing</p><p>albert.snedeker@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>670912</item>      </media>  <hg_media>          <item>          <nid>670912</nid>          <type>image</type>          <title><![CDATA[Georgia Tech CS Ph.D. 
student Fatih Ilhan]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2023-06-01 at 2.48.19 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/2023/06/01/Screen%20Shot%202023-06-01%20at%202.48.19%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2023/06/01/Screen%20Shot%202023-06-01%20at%202.48.19%20PM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2023/06/01/Screen%2520Shot%25202023-06-01%2520at%25202.48.19%2520PM.png?itok=oPnyXX0d]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[An outdoor photo portrait of Georgia Tech CS Ph.D. student Fatih Ilhan]]></image_alt>                    <created>1685672138</created>          <gmt_created>2023-06-02 02:15:38</gmt_created>          <changed>1685672138</changed>          <gmt_changed>2023-06-02 02:15:38</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="8862"><![CDATA[Student Research]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="8862"><![CDATA[Student Research]]></term>      </news_terms>  <keywords>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term 
tid="39431"><![CDATA[Data Engineering and Science]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="667599">  <title><![CDATA[Like Humans and Animals, AI Agents Find Their Way Through Memory]]></title>  <uid>32045</uid>  <body><![CDATA[<p>Memory may be just as important to artificial intelligence (AI) agents in creating ‘mental maps’ as it is to humans and animals.</p><p>A recent paper authored by Georgia Tech researchers makes a surprising discovery — blind AI agents use memory to create maps and navigate through their surrounding environment.</p><p>Erik Wijmans, the lead author of the paper, said the idea for his research began by asking if AI agents might mimic human and animal behavior in how they navigate and adjust to their environments.</p><p>“Humans and animals navigate with some type of spatial representation — what is commonly referred to as a cognitive map,” Wijmans said. “So, we were wondering how AI agents navigate and if it’s similar to that.</p><p>“The first question we asked was, ‘Is memory important to these agents?' It is. They tend to remember at least the past thousand interactions with their environment.”</p><p>Wijmans completed his Ph.D. in computer science in 2022 and is currently a research scientist at Apple.</p><p>Wijmans created blind AI agents and trained them by dropping them into the floorplans of more than 500 houses with the goal of navigating from one area of the house to another. The only sense an agent had to work with was egomotion — the ability to know how far it had moved.</p><p>The agent bumped its way around from room to room, backtracking as needed, before finding its destination. Wijmans then created a second probe agent that was injected with the memories of the first agent. 
The probe agent used the memory of the original agent to take shortcuts to quickly reach its objective.</p><p>“It’s surprising that they can do this without vision because they’re in an unknown environment that they’ve never seen before, so they have to figure out how to navigate in that environment and also figure out the structure of it,” Wijmans said.</p><p>“This is a result that shows that our hypothesis is true, or at the very least pointing in the right direction. We took an agent and put it in a complex environment and trained it for a task that requires it to interact with that environment, and the result was mapping.”</p><p>Wijmans’ paper,&nbsp;<em>Emergence of Maps in the Memories of Blind Navigation Agents</em>, is one of four outstanding paper award winners at the 2023 International Conference on Learning Representations, which is being held May 1-5 in Kigali, Rwanda. His research was also recognized by the Georgia Tech chapter of Sigma Xi (The Scientific Research Society) and received a 2023 GT Sigma Xi Best Ph.D. Thesis Award.</p><p>Wijmans is advised by School of Interactive Computing Distinguished Professor Irfan Essa and Associate Professor Dhruv Batra.</p><p>“Erik makes fundamental contributions to multiple sub-areas of AI, including reinforcement learning, robotics, and embodied perception,” Batra said. “His hypothesis is a bold one — that intelligence emerges via large-scale learning by an embodied agent accomplishing goals in a rich 3D environment.”</p><p>In his paper, Wijmans describes mapping as an emergent phenomenon. Neural network models for navigation have performed well despite not containing any explicit mapping modules.</p><p>Wijmans’ AI agents showed a 95% success rate when they used memory to navigate, whereas memoryless agents failed entirely. 
This seems to suggest that agents create mental maps as a natural part of learning to navigate.</p><p>“The results were initially so surprising that my first gut instinct was that we had done something wrong in our experimental design,” he said.</p><p>“This is a work with a very complex body of experiments that tie together into a single narrative,” he said. “This is a challenging thing to do. When you’re trying to test whether something involves memory, you must come up with ideas of what to test for and how to test for that. You must make each experiment as precise as possible to not get false positives, and that involves considerable experimental design and effort.”</p><p>Wijmans said he made it as difficult as possible for the agent to reach its goal, removing vision, audio, olfactory, haptic, and magnetic sensing and giving it no bias toward mapping. It had no supervision or any kind of outside help.</p><p>“Surprisingly, even under these deliberately harsh conditions, we find the emergence of map-like spatial representations in the agent’s non-spatial unstructured memory. It not only successfully navigates to the goal but also exhibits intelligent behavior like taking shortcuts, following walls, and detecting collisions.”</p><p>The discovery also suggests that AI, humans, and animals all share a natural characteristic of problem solving and navigation.</p><p>“The one link that we can make is the idea of convergent evolution, which is where you see the same mechanism evolve multiple times in species that have no common ancestor that shares that mechanism,” Wijmans said. “Mammals build maps, insects build maps, and now AI agents build maps. 
So perhaps mapping is the natural solution to navigation.”</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1683033806</created>  <gmt_created>2023-05-02 13:23:26</gmt_created>  <changed>1683034087</changed>  <gmt_changed>2023-05-02 13:28:07</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A recent paper authored by Georgia Tech researchers makes a surprising discovery — blind AI agents use memory to create maps and navigate through their surrounding environment.]]></teaser>  <type>news</type>  <sentence><![CDATA[A recent paper authored by Georgia Tech researchers makes a surprising discovery — blind AI agents use memory to create maps and navigate through their surrounding environment.]]></sentence>  <summary><![CDATA[<p>A recent paper authored by Georgia Tech researchers makes a surprising discovery — blind AI agents use memory to create maps and navigate through their surrounding environment.</p>]]></summary>  <dateline>2023-05-02T00:00:00-04:00</dateline>  <iso_dateline>2023-05-02T00:00:00-04:00</iso_dateline>  <gmt_dateline>2023-05-02 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[nathan.deen@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Nathan Deen<br />Communications Officer I<br />School of Interactive Computing<br />nathan.deen@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>670706</item>      </media>  <hg_media>          <item>          <nid>670706</nid>          <type>image</type>          <title><![CDATA[Erik Wijmans, Irfan Essa]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Erik Wijmans, Irfan Essa_86A9563.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/2023/05/02/Erik%20Wijmans%2C%20Irfan%20Essa_86A9563.jpeg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2023/05/02/Erik%20Wijmans%2C%20Irfan%20Essa_86A9563.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2023/05/02/Erik%2520Wijmans%252C%2520Irfan%2520Essa_86A9563.jpeg?itok=L1Sg5kfx]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech Ph.D. student Erik Wijmans and Distinguished Professor Irfan Essa]]></image_alt>                    <created>1683033816</created>          <gmt_created>2023-05-02 13:23:36</gmt_created>          <changed>1683033816</changed>          <gmt_changed>2023-05-02 13:23:36</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="8862"><![CDATA[Student Research]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="152"><![CDATA[Robotics]]></category>          <category tid="135"><![CDATA[Research]]></category>      </categories>  <news_terms>          <term tid="8862"><![CDATA[Student Research]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="152"><![CDATA[Robotics]]></term>          <term tid="135"><![CDATA[Research]]></term>      </news_terms>  <keywords>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  
<userdata><![CDATA[]]></userdata></node><node id="667347">  <title><![CDATA[Examining the Boundaries of Using AI 'Sensing' to Understand Office Workers’ Performance and Wellbeing]]></title>  <uid>32045</uid>  <body><![CDATA[<p><em>New research findings show that social acceptability and select sharing of AI results in the workplace are key to future implementation</em></p><p>Commercial monitoring tools are being introduced in offices alongside newer modes of work – screen meetings, remote collaboration, digital-first workflows – as a way for employers to better understand performance of their workforces.</p><p>Researchers at Georgia Tech and Northeastern University conducted a study with information workers to learn about their perspectives on being monitored and their information being collected with passive-sensing enabled artificial intelligence (PSAI), where computing devices can unobtrusively detect and collect user behaviors. That information could then be used to train machine learning models that infer performance and wellbeing of workers.</p><p>“We wanted to take a closer look at how workers perceive passive-sensing AI in order to make this technology work for the workers, as opposed to making them work for the technology,” said&nbsp;<strong>Vedant Das Swain</strong>, lead researcher and a Ph.D. candidate in computer science at Georgia Tech.</p><p>He says there is an organizational need – for both employer and employee alike – to get better insights.</p><p>“One of the underlying subtexts of the research is that there are these asymmetries at work because the employee doesn’t have as much power as the employer. 
And if these technologies keep progressing as they are, this gap is going to widen because the employer will just keep getting more and more worker information.”</p><p>Researchers found that some technologies – fitness trackers and web cams, for example – used for personal activities may not translate well to work life if they are implemented without considering new norms of work. Technologies can now “breach physical boundaries,” as Das Swain puts it, and using a web cam for work while at home might involve extra setup to close doors and blur backgrounds on the screen. Workers also want careful consideration of the context in which devices can gain information.</p><p>Work devices monitoring worker activity is appropriate in many cases, but work-related apps on personal devices might be a tougher sell.</p><p>The research results fall into two primary categories:</p><ul><li><strong>Appropriateness</strong>&nbsp;– Understanding socially acceptable data to collect with passive-sensing AI and acceptable circumstances to infer worker performance and wellbeing.</li><li><strong>Distribution</strong>&nbsp;– Determining&nbsp;what to share about worker data – and when&nbsp;–&nbsp;with other stakeholders and the methods used.</li></ul><p>Regarding the appropriateness aspect, Das Swain says that people in general don’t want to feel dehumanized by algorithms. His team’s work takes that idea further by learning about the mental models different workers use to determine what’s appropriate for using PSAI.&nbsp;</p><p>“Different workers have different ideas of what’s insightful,” he said. “For example, if I don’t talk to my supervisor about my personal life, why should this machine be sensing that type of information? The alternative viewpoint is that I already know what I’m doing at work, so give me more data. 
I could use sleep and commute data to infer how those activities might affect my work.”</p><p>Das Swain says there is no one-size-fits-all solution.</p><p>“And it’s not just about privacy, it’s about utility,” he said. “People find utility in different things. Some want more precise information in a work context, and some might want the holistic view of the data, in both cases to find insights for themselves.”</p><p>The second category of results – distribution – is no less tricky. Worker information is ostensibly personal in nature, but collaborative and performance measures at work necessitate the sharing of this information.</p><p>The researchers found that participants strongly felt that if a machine predicted something related to performance or wellbeing, then they should have enough time to make changes and provide context, such as if a worker is on paternity leave and must alter project deadlines.</p><p>“Only at a later point, if at all, can the data be escalated to someone else to help as the situation requires,” said Das Swain. “That was very clear in the study.”</p><p>One red flag, so to speak, for Das Swain as a researcher, is that these technologies don’t give users any control over, or understanding of, the newer types of personal data being collected and stored at work.</p><p>With algorithmic uncertainty now at the forefront of many conversations, Das Swain views these results from the Georgia Tech and Northeastern group as tangible guideposts for regulators and companies making decisions around public and commercial deployment of AI sensing tech for information workers.</p><p>The published results will be presented at the ACM CHI Conference on Human Factors in Computing Systems, taking place April 23-28, in Hamburg, Germany. 
The academic paper,&nbsp;<a href="https://programs.sigchi.org/chi/2023/program/content/95708"><em>Algorithmic Power or Punishment: Information Worker Perspectives on Passive Sensing Enabled AI Phenotyping of Performance and Wellbeing</em></a>, is co-authored by Das Swain,&nbsp;<strong>Lan Gao</strong>,&nbsp;<strong>William Wood</strong>,&nbsp;<strong>Srikruthi C. Matli</strong>,&nbsp;<strong>Gregory Abowd</strong>, and&nbsp;<strong>Munmun De Choudhury</strong>. The work is funded in part by Cisco.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1681482202</created>  <gmt_created>2023-04-14 14:23:22</gmt_created>  <changed>1681482381</changed>  <gmt_changed>2023-04-14 14:26:21</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[New research findings show that social acceptability and select sharing of AI results in the workplace are key to future implementation.]]></teaser>  <type>news</type>  <sentence><![CDATA[New research findings show that social acceptability and select sharing of AI results in the workplace are key to future implementation.]]></sentence>  <summary><![CDATA[<p>Researchers at Georgia Tech and Northeastern University conducted a study with information workers to learn about their perspectives on being monitored and their information being collected with passive-sensing enabled artificial intelligence (PSAI), where computing devices can unobtrusively detect and collect user behaviors.</p>]]></summary>  <dateline>2023-04-14T00:00:00-04:00</dateline>  <iso_dateline>2023-04-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2023-04-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Josh Preston<br />Research Communications Manager<br /><a href="jpreston@cc.gatech.edu">jpreston@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  
<boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>670546</item>      </media>  <hg_media>          <item>          <nid>670546</nid>          <type>image</type>          <title><![CDATA[pic_web_cc_vedant das swain2.png]]></title>          <body><![CDATA[<p>School of Interactive Computing Ph.D. candidate Vedant Das Swain, lead researcher of a study dubbed "Algorithmic Power or Punishment" that identifies current boundaries of using AI "sensing" tools in office spaces. <em>(Photos by Kevin Beasley/College of Computing)</em></p>]]></body>                      <image_name><![CDATA[pic_web_cc_vedant das swain2.png]]></image_name>            <image_path><![CDATA[/sites/default/files/2023/04/14/pic_web_cc_vedant%20das%20swain2.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2023/04/14/pic_web_cc_vedant%20das%20swain2.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2023/04/14/pic_web_cc_vedant%2520das%2520swain2.png?itok=pG-tV1xE]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Vedant Das Swain, Ph.D. 
candidate in computer science at Georgia Tech.]]></image_alt>                    <created>1681482219</created>          <gmt_created>2023-04-14 14:23:39</gmt_created>          <changed>1681482219</changed>          <gmt_changed>2023-04-14 14:23:39</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="667346">  <title><![CDATA[Misinformation Detection Models are Vulnerable to ChatGPT and Other LLMs]]></title>  <uid>32045</uid>  <body><![CDATA[<p>Existing machine learning (ML) models used to detect online misinformation are less effective when matched against content created by ChatGPT or other large language models (LLMs), according to new research from Georgia Tech.</p><p>Current ML models designed for and trained on human-written content have significant performance discrepancies in detecting paired human-generated misinformation and misinformation generated by artificial intelligence (AI) systems, said Jiawei Zhou, a Ph.D. student in Georgia Tech’s School of Interactive Computing.</p><p>Zhou’s paper detailing the findings is set to receive a best paper honorable mention award at the 2023&nbsp;ACM CHI Conference on Human Factors in Computing Systems. Advised by Associate Professor Munmun De Choudhury, Zhou’s research demonstrates that LLMs can manipulate tone and linguistics to allow AI-generated misinformation to slip through the cracks.</p><p>“We found the AI-generated misinformation carried more emotions and cognitive processing expressions than its human-created counterparts,” Zhou said. 
“It also tended to enhance details, communicate uncertainties, draw conclusions, and simulate personal tones.</p><p>“We’re one of the very first to look at this risk. As more people started to use ChatGPT, they’ve noticed this problem, but we were one of the first to provide evidence of this risk. And more efforts are needed to raise public awareness about this potential and to call for more research efforts to combat this risk.”</p><p>Zhou started exploring GPT-3 in 2022 because she wanted to know how one of the early predecessors to ChatGPT would handle prompts that included misinformation about the Covid-19 pandemic. She asked GPT-3 to explain how the Covid-19 vaccines could cause cancer.</p><p>“The results were very concerning because it is so persuasive,” Zhou said. “I had been studying informatics and misinformation for some time, and it was still persuasive, even to me. The output would say, ‘It can cause cancer because there is this researcher at this institute, and their research is based on medical records and diverse demographics. The research supports this possibility.’ The writing of it is so scientific.”</p><p>Zhou and her collaborators accumulated a dataset of human-created misinformation, including more than 6,700 news reports and 5,600 social media posts. From that set, Zhou and her team extracted the most representative topics and documents of human-generated misinformation. They used those to create narrative prompts, which they fed to GPT-3 and recorded the output.</p><p>Both the GPT-generated output and the original human-created dataset were used to test an existing misinformation detection model called COVID-Twitter-BERT (CT-BERT).</p><p>Zhou said that while the human- and AI-generated datasets were intentionally paired, a statistical test showed significant differences in detection model performance.</p><p>CT-BERT experienced a decline in performance in detecting AI-generated misinformation. 
Out of 500 prompts based on AI-generated misinformation, it failed to recognize 27 as false or misleading, compared to missing only two from the human-generated prompts.</p><p>“The core reason is they are linguistically different,” Zhou said. “Our error analysis reveals that AI misinformation tends to be more complex, and it tends to mix factual statements. It uses one fact to explain another, though the two things might not be related. The tone and sentiment are also different. And there are fewer keywords that detection tools normally look for.”</p><p>Zhou’s experiments showed that GPT could use information to create a news story using objective, straightforward language and use that same information to create a sympathetic social media post. That points to its capability of changing tone and tailoring messages.</p><p>“If someone wants to promote propaganda, they can use it to customize a narrative toward a specific community,” Zhou said. “That makes the risk even greater. It shows that it has some flexibility to alter its tone for different purposes. For news, it can sound logical and reliable. For social media, it conveys information quickly and clearly.”</p><p>As LLMs continue to rapidly grow and expand, so do the risks of misinformation.</p><p>ChatGPT operates on OpenAI’s GPT-3.5 and GPT-4 models, the latter of which was released on March 14. Since ChatGPT was released, Zhou has given it the same prompts she gave to GPT-3. The results have improved with some corrections, though the newer models have the advantage of more available information about Covid-19, she said.</p><p>Zhou said steps should be taken immediately to evaluate how misinformation detection tools can adapt to ever-improving LLMs. She described the situation as an “AI arms race” in which the tools currently used to combat misinformation are well behind.</p><p>“They are improving the generative capabilities of LLMs,” she said. 
“They’re more human-like, more fluent, and less and less distinguishable from human creations. We need to think about ways we can distinguish them and how we can improve our misinformation detection abilities to catch up.”</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1681481706</created>  <gmt_created>2023-04-14 14:15:06</gmt_created>  <changed>1681482036</changed>  <gmt_changed>2023-04-14 14:20:36</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Because falsehoods generated by ChatGPT are so convincing, even trained researchers struggle to identify misinformation.]]></teaser>  <type>news</type>  <sentence><![CDATA[Because falsehoods generated by ChatGPT are so convincing, even trained researchers struggle to identify misinformation.]]></sentence>  <summary><![CDATA[<p>New research indicates that current machine learning models trained on human-produced content can struggle to detect falsehoods generated by AI-powered chatbots.</p>]]></summary>  <dateline>2023-04-14T00:00:00-04:00</dateline>  <iso_dateline>2023-04-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2023-04-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Nathan Deen<br />School of Interactive Computing<br />Communications Officer<br /><a href="nathan.deen@cc.gatech.edu">nathan.deen@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>670544</item>          <item>670545</item>      </media>  <hg_media>          <item>          <nid>670544</nid>          <type>image</type>          <title><![CDATA[Jiawei Zhou, a Ph.D. student in Georgia Tech’s School of Interactive Computing.jpeg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Jiawei Zhou, a Ph.D. 
student in Georgia Tech’s School of Interactive Computing.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/2023/04/14/Jiawei%20Zhou%2C%20a%20Ph.D.%20student%20in%20Georgia%20Tech%E2%80%99s%20School%20of%20Interactive%20Computing.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2023/04/14/Jiawei%20Zhou%2C%20a%20Ph.D.%20student%20in%20Georgia%20Tech%E2%80%99s%20School%20of%20Interactive%20Computing.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2023/04/14/Jiawei%2520Zhou%252C%2520a%2520Ph.D.%2520student%2520in%2520Georgia%2520Tech%25E2%2580%2599s%2520School%2520of%2520Interactive%2520Computing.jpeg?itok=Ab3T05KE]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Jiawei Zhou, a Ph.D. student in Georgia Tech’s School of Interactive Computing.]]></image_alt>                    <created>1681481714</created>          <gmt_created>2023-04-14 14:15:14</gmt_created>          <changed>1681481714</changed>          <gmt_changed>2023-04-14 14:15:14</gmt_changed>      </item>          <item>          <nid>670545</nid>          <type>image</type>          <title><![CDATA[Jiawei Zhou-munmun.jpeg]]></title>          <body><![CDATA[<p>School of Interactive Computing Ph.D. student Jiawei Zhou, left, and associate professor Munmun De Choudhury, demonstrate in their latest paper that misinformation detection models are vulnerable to content generated by large language models. 
(Photos by Kevin Beasley/College of Computing)</p>]]></body>                      <image_name><![CDATA[Jiawei Zhou-munmun.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/2023/04/14/Jiawei%20Zhou-munmun.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2023/04/14/Jiawei%20Zhou-munmun.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2023/04/14/Jiawei%2520Zhou-munmun.jpeg?itok=0ICoh4JM]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[School of Interactive Computing Ph.D. student Jiawei Zhou, left, and associate professor Munmun De Choudhury, demonstrate in their latest paper that misinformation detection models are vulnerable to content generated by large language models.]]></image_alt>                    <created>1681481807</created>          <gmt_created>2023-04-14 14:16:47</gmt_created>          <changed>1681481807</changed>          <gmt_changed>2023-04-14 14:16:47</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="135"><![CDATA[Research]]></category>      </categories>  <news_terms>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="135"><![CDATA[Research]]></term>      </news_terms>  <keywords>          <keyword tid="192524"><![CDATA[ChatGPT]]></keyword>          <keyword 
tid="190591"><![CDATA[misinformation]]></keyword>          <keyword tid="89321"><![CDATA[Munmun De Choudhury]]></keyword>          <keyword tid="1027"><![CDATA[chi]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="666796">  <title><![CDATA[New Research Explores Using Generative AI Technology for Materials Discovery]]></title>  <uid>32045</uid>  <body><![CDATA[<p>With the explosive rise of popular artificial intelligence applications like ChatGPT and DALL-E, consumers are becoming more and more familiar with the world of generative models. While these fun, novel tools are helpful in our everyday lives, Georgia Tech researchers are using the same technology to make new scientific discoveries and solve complex engineering challenges.</p><p>One example of this is <strong>Victor Fung</strong>, an assistant professor with Georgia Tech’s School of Computational Science and Engineering (CSE). Fung recently led a research team that <a href="https://iopscience.iop.org/article/10.1088/2632-2153/aca1f7">developed a new, first-of-its-kind algorithm</a> that can reconstruct atomic structure in generative models.</p><p>Fung is directing this research toward a significant application in materials science and engineering, where the algorithm could be key to developing new AI tools and new materials that benefit individual researchers and entire communities alike.</p><p>“Structural representations are a well-known concept people have used in other machine learning applications for chemistry and materials, like training models to predict energies and forces,” Fung said. “But this is really the first time that anyone has used this in generative models.”</p><p>Structure is a key property in material design. For example, structure plays a role in determining superconductivity in electronics, biological viability in drugs, and the catalysis of certain chemical reactions.</p><p>Fung explained that using generative models to study atomic structure, and to design new materials, could be vital in climate remediation. This may include developing greener catalysts for use in fuel cells, designing better materials for carbon capture, and discovering new light-absorbent molecules for application in solar panels.</p><p>The algorithm can help engineers create new materials with targeted properties by building models atom by atom, a concept called inverse design. It is a step toward computer models that can create new materials tailor-made to the functions and characteristics designers have in mind.</p><p>Specifically, the algorithm allows materials scientists to know the exact structure of materials that exhibit a desired property, potentially making proposed material designs a reality.</p><p>“If we know the structure of material, we can be sure of what properties it has, and we will have a clear goal to try to synthesize it and develop applications,” Fung said. “We basically have the key to defining the material in the chemical space.”</p><p>Fung’s paper is the first in a forthcoming series of studies to develop new generative models for atomic structure. He and his co-researchers think the series could result in new algorithms and models that yield commercial benefits, as well as solve large scientific problems.</p><p>As part of this campaign to share his research, Fung is set to discuss the findings March 31 at the <a href="https://research.gatech.edu/materials/imatsymposium">2023 Symposium on Materials Innovations</a>, hosted by Georgia Tech’s Institute for Materials (IMat).</p><p>School of CSE Ph.D. student <strong>Shuyi Jia</strong> worked with Fung to develop the algorithm and is a co-author on the paper. The pair partnered with Oak Ridge National Laboratory scientists <strong>Jiaxin Zhang</strong>, <strong>Junqi Yin</strong>, and <strong>Panchapakesan Ganesh</strong> throughout the study.</p><p>Along with AI tools like ChatGPT and DALL-E, generative models are popularly used today for images, text, audio, and other types of information. They are less common in scientific work because of their data-intensive nature, an obstacle that Fung’s algorithm helps overcome.</p><p>In technical terms, the algorithm makes it possible for generative models to work with non-invertible structural representations, such as atom-centered symmetry functions.</p><p>Now that the group has learned how to use models to generate structure, they want to extend this to broader problems in materials design and discovery. This includes being able to generate structures with different chemical compositions as well.</p><p>Here, their algorithm becomes a tested, verified method of using generative models to understand and overcome complex engineering problems.</p><p>“People who are interested in solving these kinds of problems in materials discovery, whether for specific applications, specific types of materials, or specific properties, can potentially use this approach, or at least take inspiration from it,” Fung said.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1679664749</created>  <gmt_created>2023-03-24 13:32:29</gmt_created>  
<changed>1680793408</changed>  <gmt_changed>2023-04-06 15:03:28</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Applications of a novel algorithm developed by School of CSE researchers may lead to the design of new climate-remediation materials.]]></teaser>  <type>news</type>  <sentence><![CDATA[Applications of a novel algorithm developed by School of CSE researchers may lead to the design of new climate-remediation materials.]]></sentence>  <summary><![CDATA[<p>School of Computational Science and Engineering Assistant Professor Victor Fung is presenting details about a first-of-its-kind algorithm for generative AI models at the&nbsp;<a href="https://research.gatech.edu/materials/imatsymposium">2023 Symposium on Materials Innovations</a>, hosted by Georgia Tech’s Institute for Materials on March 31.</p>]]></summary>  <dateline>2023-03-24T00:00:00-04:00</dateline>  <iso_dateline>2023-03-24T00:00:00-04:00</iso_dateline>  <gmt_dateline>2023-03-24 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Bryant Wine, Communications Officer I<br /><a href="mailto:bryant.wine@cc.gatech.edu">bryant.wine@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>670465</item>      </media>  <hg_media>          <item>          <nid>670465</nid>          <type>image</type>          <title><![CDATA[Victor Fung CRNCH.jpeg]]></title>          <body><![CDATA[<p><strong>Victor Fung</strong>, an assistant professor with Georgia Tech’s School of Computational Science and Engineering, speaks during a panel discussion at a workshop on campus.</p>]]></body>                      <image_name><![CDATA[Victor Fung CRNCH.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/2023/04/06/Victor%20Fung%20CRNCH.jpeg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2023/04/06/Victor%20Fung%20CRNCH.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2023/04/06/Victor%2520Fung%2520CRNCH.jpeg?itok=AKrRhbv7]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Victor Fung, an assistant professor with Georgia Tech’s School of Computational Science and Engineering]]></image_alt>                    <created>1680793152</created>          <gmt_created>2023-04-06 14:59:12</gmt_created>          <changed>1680793152</changed>          <gmt_changed>2023-04-06 14:59:12</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>          <item>        <filename><![CDATA[School of CSE&#039;s Victor Fung]]></filename>        <filepath><![CDATA[/sites/default/files/2023/03/24/Victor%20Fung%20CRNCH.jpeg]]></filepath>        <filefullpath><![CDATA[http://hg.gatech.edu//sites/default/files/2023/03/24/Victor%20Fung%20CRNCH.jpeg]]></filefullpath>        <filemime><![CDATA[image/jpeg]]></filemime>        <filesize><![CDATA[35631]]></filesize>        <description><![CDATA[]]></description>      </item>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="154"><![CDATA[Environment]]></category>      </categories>  <news_terms>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="154"><![CDATA[Environment]]></term>      </news_terms>  <keywords>          <keyword 
tid="192390"><![CDATA[generative AI]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>          <keyword tid="84281"><![CDATA[advanced materials]]></keyword>      </keywords>  <core_research_areas>          <term tid="39471"><![CDATA[Materials]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="664618">  <title><![CDATA[Manufacturing, Finance Among Industries to Benefit from What's Next in AI for 2023]]></title>  <uid>32045</uid>  <body><![CDATA[<p>Artificial intelligence is already making headlines in the new year with the box office success of the movie&nbsp;<em>M3GAN</em>. Along with a TikTok dance craze and lots of laughs, the over-the-top horror movie/dark comedy about an AI-powered robot that runs amok is also inspiring discussion about the growing presence and impact of artificial intelligence in everyday life.</p><p>From the movie&nbsp;house to the warehouse&nbsp;to your house, AI seems like it&#39;s everywhere. That&#39;s because with a steady stream of new research and innovative applications reaching into nearly every industry and business sector, it&nbsp;is everywhere.&nbsp;Nevertheless, AI still holds enormous potential as the field continues to evolve.</p><p>To get a sense of what this evolution could look like in 2023, we turned to a small group of <a href="https://www.cc.gatech.edu/people/phd">Ph.D. 
students from the College of Computing</a> community who are currently pushing foundational and applied AI research forward in a broad spectrum of disciplines and fields.</p><p>The students shared their opinions on where AI might be headed in the new year, what some of the big tech stories could be, and why ethics in AI are so critically important.</p><h5>Where will artificial intelligence and machine learning have the most impact in 2023?</h5><p>&quot;Artificial intelligence and machine learning&nbsp;will continue to have a huge impact on manufacturing and warehouses, with labor shortages and worker turnover continuing to be a concern as more manufacturing and operations jobs are brought back to the United States from overseas. Additionally, AI/ML will continue to help ensure that manufacturing and warehouse facilities are operating as efficiently as possible, from energy and material savings to worker safety and parts quality.&quot; &ndash;&nbsp;<em><a href="https://www.researchgate.net/profile/Zoe-Klesmith">Zoe Klesmith Alexander</a>, computational science and engineering Ph.D. student</em></p><p>&quot;Right now, deep learning is on a trajectory to transform&nbsp;the creation space. Artwork and images, videos, data representation and storytelling, co-authoring, and summarizing documents... It&#39;s gotten really good.&quot; &ndash;&nbsp;<em><a href="https://www.linkedin.com/in/benhoov/">Ben Hoover</a>, machine learning Ph.D. student</em></p><p>&quot;I think machine learning and AI will keep playing a huge role&nbsp;in how the world and society will be shaped over the next decade in many ways. It will make many other fields more efficient through ML and AI tools we are developing. In 2023, I think ML and AI will have the most impact on social media platforms, helping reduce hate speech, rumor spread, etc.&quot; &ndash;&nbsp;<em><a href="https://www.linkedin.com/in/agam-shah/">Agam A. Shah</a>, machine learning Ph.D. 
student</em></p><p>&quot;One of the big impacts this year&nbsp;may be driverless cars&nbsp;being in your neighborhood. Otherwise, it will be a slow, steady drip of GPT3 and other OpenAI models suffusing all applications, making programmers much faster, making journalists faster, making academic articles and lit reviews much faster. We&#39;re at a 4th grader level, and I hope by the end of this year we&#39;ll be at the 6th grader level. Also, indoor turn-by-turn navigation will be everywhere in 2023 as well.&quot; &ndash;&nbsp;<em><a href="https://www.linkedin.com/in/brandonkeithbiggs/">Brandon Biggs</a>, human-centered computing Ph.D. student</em></p><h5>What will be some of the big tech stories in 2023?</h5><p>&quot;ChatGPT and the GitHub Copilot lawsuit&nbsp;will keep making it into the news and cause more controversies. In general, AI ethics will become more important and get more focus as the technology keeps advancing.&quot; &ndash; <em><a href="https://fab1ano.github.io/">Fabian Fleischer</a>, cybersecurity and privacy Ph.D. student</em></p><p>&quot;Driverless car fleets will be coming&nbsp;to a city near you.&nbsp;A new battery technology will allow phones to keep their charge for a week. Meta realizes virtual reality (VR) head-mounted displays are for a limited market and uses headphones and phones to provide VR experiences.&quot; &ndash; Brandon Biggs</p><h5>What&rsquo;s an issue or industry that you think could benefit from a computing solution?</h5><p>&quot;Our reinterpretation of modern deep learning&nbsp;as energy-based associative memories&nbsp;has the potential to transform any industry that relies on foundation models &ndash; giant architectures that require models that are &#39;self-supervised&#39; (learn on their own from data).&quot; &ndash; Ben Hoover</p><p>&quot;Inclusion in everything.&nbsp;Over 90 percent of websites on the internet have elements that are inaccessible to 25 percent of the world&#39;s population who have disabilities. 
Inclusive design will be the most important area where technology can be redesigned and created to have multiple sensory modalities and be properly programmed.&quot; &ndash; Brandon Biggs</p><p>&quot;Currently, financial markets are far from efficient&nbsp;because they do not fully incorporate information available in large unstructured text data. With the latest development in natural language processing techniques, we can better understand the economy and therefore price financial markets better.&quot; &ndash; Agam A. Shah</p><h5>There&rsquo;s been increasing recognition of the vital role ethics should play in artificial intelligence. How do you see this issue evolving in the next year?</h5><p>&quot;Specifically in my research, I think explainable AI (XAI) is very important, especially if non-experts in ML will be using black-box ML solutions in a factory. It will be important for humans to trust and to understand the models, especially if the models are being used to monitor quality on a safety-critical part.</p><p>&quot;Additionally, using XAI for human interaction with robots that utilize deep learning to make decisions will be increasingly important as technologies like collaborative robots (cobots) are integrated into factories. I think in my area of research that it is always important to use automation to aid humans in jobs that are safe for humans to do and not to replace them.&quot; &ndash; Zoe Klesmith Alexander</p><p>&quot;Big data is pretty much at its peak. 
Deep data, where your Alexa knows everything about you, or your phone knows everything about you, and rather than saying &#39;other people who watched this show liked this show,&#39; it&#39;s going to say, &#39;I know you liked these shows, I think you&#39;ll like this show because of these reasons, one of which is because other people who liked all these other shows liked this show.&#39; The ethical element will be how much of this data should these models use, and are people going to build a personal dataset that they can share with other apps, or is each app going to need to build its own dataset? The ethical question is who owns this data.&quot; &ndash; Brandon Biggs</p><p>&quot;I think ethics will become more and more important going forward. We are making huge breakthroughs in machine learning and artificial intelligence, but the systems we are creating are producing racist, sexist, and stereotypical results. For example, a recent system, Galactica, developed by Facebook (Meta), is powerful. It can produce research articles simply by providing it with the title. It comes with some serious ethical concerns; in some cases, it produces racist, sexist text. So, as we keep developing better models and making progress in parallel, we need to always keep in mind the ethical implications of these models.&quot; &ndash; Agam A. Shah</p><h5>What research are you working on that you think people should know about or will have impact in 2023?</h5><p>&quot;Part of my research focuses on data-driven modeling of additive manufacturing processes&nbsp;to better control dimensional quality of the final part. Another part of my research focuses on detecting anomalies in real time using computer vision and machine learning for both warehouses and manufacturing processes.&quot; &ndash; Zoe Klesmith Alexander</p><p>&quot;Right now, deep learning is built on feed-forward mathematical operations&nbsp;that have little resemblance to the brain. 
I am working on a physics-inspired approach to deep learning built around recurrent networks and energy functions. These architectures have the same mathematical foundation as the famous, biologically plausible Hopfield Network.&quot; &ndash; Ben Hoover</p><p>&quot;I am currently working on two projects which, in my opinion, will have an impact in 2023. In one project, we are measuring the exposure of public firms to ongoing inflation. We are also understanding how inflation affects different firms differently based on the pricing power of the firm. As inflation is the highest in the last 40 years, our study is highly relevant now and in the coming years until we bring inflation back under control.</p><p>&quot;The second work is related to the first work in some ways. As inflation rises, the Federal Reserve is tightening its monetary policy to control it. In our second work, we are measuring the stance of monetary policy of the Fed (hawkish vs. dovish) using state-of-the-art NLP models to see its impact in various financial markets (Treasury market, stock market, crypto market, etc.).&quot; &ndash; Agam A. Shah</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1673380575</created>  <gmt_created>2023-01-10 19:56:15</gmt_created>  <changed>1673443197</changed>  <gmt_changed>2023-01-11 13:19:57</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A group of Ph.D. students from the GT Computing community share their opinions on what's next for artificial intelligence in the new year.]]></teaser>  <type>news</type>  <sentence><![CDATA[A group of Ph.D. 
students from the GT Computing community share their opinions on what's next for artificial intelligence in the new year.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2023-01-10T00:00:00-05:00</dateline>  <iso_dateline>2023-01-10T00:00:00-05:00</iso_dateline>  <gmt_dateline>2023-01-10 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[albert.snedeker@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Ben Snedeker, Comms. Mgr. II<br /><a href="mailto:albert.snedeker@cc.gatech.edu?subject=What's%20Next%20in%20AI%20for%202023">albert.snedeker@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>664620</item>      </media>  <hg_media>          <item>          <nid>664620</nid>          <type>image</type>          <title><![CDATA[ATL Skyline Reflected in Binary Bridge]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ATL Skyline Reflection-Binary Bridge.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ATL%20Skyline%20Reflection-Binary%20Bridge.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ATL%20Skyline%20Reflection-Binary%20Bridge.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ATL%2520Skyline%2520Reflection-Binary%2520Bridge.jpeg?itok=mRwU9DvN]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[ATL skyline reflected in Binary Bridge]]></image_alt>                    <created>1673381152</created>          <gmt_created>2023-01-10 20:05:52</gmt_created>          <changed>1673381152</changed>          <gmt_changed>2023-01-10 20:05:52</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      
</files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="191885"><![CDATA[M3GAN]]></keyword>          <keyword tid="2835"><![CDATA[ai]]></keyword>          <keyword tid="46361"><![CDATA[GT computing]]></keyword>          <keyword tid="191886"><![CDATA[What&#039;s Next for 2023]]></keyword>          <keyword tid="122801"><![CDATA[ML]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="9167"><![CDATA[machine learning]]></keyword>          <keyword tid="180344"><![CDATA[nlp]]></keyword>          <keyword tid="23981"><![CDATA[natural language processing]]></keyword>          <keyword tid="109581"><![CDATA[deep learning]]></keyword>          <keyword tid="176999"><![CDATA[neural networks]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term tid="39461"><![CDATA[Manufacturing, Trade, and Logistics]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="662312">  <title><![CDATA[Research Paves Way for Home Robot that Can Tidy a House on Its Own]]></title>  <uid>32045</uid>  <body><![CDATA[<p>Struggling with keeping your home clean and organized? 
You may soon have an extra set of hands to help around the house.</p><p>Imagine a home robot that can keep a house tidy without being given any commands from its owner. Well, the next step in home robotics is here &mdash; at least virtually.</p><p>A group of doctoral and master&rsquo;s students from Georgia Tech&#39;s School of Interactive Computing, in collaboration with researchers from the University of Toronto, believe they have created the benchmark for a home robot that can keep an entire house tidy.</p><p>In their paper,&nbsp;<em>Housekeep: Tidying Virtual Households Using Commonsense Reasoning</em>, Georgia Tech doctoral candidates <strong>Harsh</strong> <strong>Agrawal</strong> and <strong>Andrew</strong> <strong>Szot</strong>, master&rsquo;s students <strong>Arun</strong> <strong>Ramachandran</strong> and <strong>Sriram</strong> <strong>Yenamandra</strong>, and <strong>Yash</strong> <strong>Kant</strong>, a former research visitor at Georgia Tech who is now a doctoral candidate at Toronto, set out to prove an embodied artificial intelligence (AI) could conduct simple housekeeping tasks without explicit instructions.</p><p>Using advanced natural language processing machine learning techniques, the students have successfully simulated the robot exploring a virtual household, identifying misplaced items, and putting them in their correct place.</p><p>Kant said most robots in embodied AI are given specific instructions for different functions, but the students wanted to be sure the robot could achieve task completion without instructions in simulation before moving on to real-world testing.</p><p>&ldquo;In the actual world, things are difficult,&rdquo; Kant said. &ldquo;Training robots in the real world &mdash; they move around slowly; they will bump into things and people. 
So, we do it in simulation because you can run things at a faster speed, and you can have multiple virtual robots running.&rdquo;</p><p><strong>Dhruv</strong> <strong>Batra</strong>, an associate professor in the School of Interactive Computing and a research scientist with Meta AI, and <strong>Igor</strong> <strong>Gilitschenski</strong>, an assistant professor of mathematical and computational sciences at Toronto, served as advisors on the paper, which was accepted to the 2022 European Conference on Computer Vision, Oct. 23-27 in Tel Aviv, Israel.</p><h4><a href="https://sites.gatech.edu/ml-eccv-2022/">[FULL COVERAGE: Georgia Tech at ECCV 2022]</a></h4><p>In the virtual simulation, the robot spawned in a random section of the house and immediately began looking for misplaced objects. It correctly identified a misplaced lunchbox in a kid&rsquo;s bedroom and moved it to the kitchen. It also located some toys left in the bathroom and moved them to the kid&rsquo;s bedroom.</p><p>Agrawal said the goal of the project from the beginning was to have the robot mimic commonsense reasoning that any human would have in tidying a house. Through surveys, the team collected rearrangement preferences for 1,799 objects in 585 placements in 105 rooms.</p><p>&ldquo;We collected human preferences data,&rdquo; Agrawal said. &ldquo;We asked people where they like to keep certain objects, and we wanted robots to have a similar notion of cleanliness in a tidy home.</p><p>&ldquo;You don&rsquo;t provide instructions when you ask the kids to clean up the house. It&rsquo;s commonsense. You know certain things go in certain places. You know Lego blocks don&rsquo;t belong in the bathroom. We thought it&rsquo;d be cool if it could clean up the house without specifying instructions. As humans, we can do a bunch of these tasks without being given specific instructions.&rdquo;</p><p>Creating the simulation had several challenges. 
These included getting the robot to reason about the correct placement of new objects, getting the robot to adapt to new environments, and getting it to work through choices when there are multiple correct locations where a misplaced object could go.</p><p>Szot said what attracted him to the project was the idea of creating a robot that didn&rsquo;t need to be told where to put something, whereas in his previous work, that&rsquo;s exactly what he had to do.</p><p>&ldquo;If you wanted it to do something like clean up the house, you would have to tell it, &lsquo;Hey, robot, move that object to there,&rsquo;&rdquo; Szot said. &ldquo;It&rsquo;s very tedious to specify that. We took the first step of saying let&rsquo;s give the robot some commonsense reasoning. It might not be specific to a person; it might just be capturing more generally what people think, but it captures a lot of important situations. It&rsquo;s able to handle most of those situations in which people agree the object belongs there or the object doesn&rsquo;t belong there.&rdquo;</p><p>Using text from the internet, the team informed the AI that drives the robot by fine-tuning a large language model based on human preferences.</p><p>&ldquo;The way we approached solving this problem is we took this external source of knowledge from text on the internet and these language tasks, and so from natural language processing we took that information and used it to give our robot some idea of this common sense,&rdquo; Szot said. &ldquo;It wasn&rsquo;t purely from the house it learned how to do these things. From articles or texts online, it was able to distill this commonsense reasoning ability and then apply it.&rdquo;</p><p>Kant said using language models allows the AI to distinguish between objects and whether those objects should go together. 
He added that he thinks that the language model used to train the AI can be fine-tuned by extracting content from web articles related to housekeeping.</p><p>&ldquo;Language models have shown very promising results in trying to extract semantics, like whether two things &mdash; say an apple and fruit basket &mdash; go together in a household,&rdquo; Kant said.</p><p>The team is just at the tip of the iceberg, and the virtual simulation serves only as a proof of concept. It&rsquo;s a long-term project that will continue to explore new possibilities, which include creating a robot that can tidy a household according to specific user preferences.</p><p>But the successful use of NLP methods to inform a novel AI could break new barriers in the creation of new systems in which organization is the focus.</p><p>&ldquo;It&rsquo;s a benchmark for the rest of the community to use,&rdquo; Szot said. &ldquo;Hopefully this is something for people to gather behind to focus on this very realistic task setting of cleaning the house. We showed that you can create these embodied agents that can use this external knowledge and learn commonsense and use it in embodied robotic settings.&rdquo;</p><p>&ldquo;I think the data that we collected is pretty significant in the sense that we now have a few hundred annotations for where each object should go in houses and where they&rsquo;re likely to be found in untidy houses, and I think that information can guide a lot of systems,&rdquo; Agrawal added. 
&ldquo;I feel like we are starting to now see people saying all these annotations can be used for building their own systems and benchmarks.&rdquo;</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1666192153</created>  <gmt_created>2022-10-19 15:09:13</gmt_created>  <changed>1666209761</changed>  <gmt_changed>2022-10-19 20:02:41</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A group of doctoral and master’s students from Georgia Tech's School of Interactive Computing believe they have created the benchmark for a home robot that can keep an entire house tidy.]]></teaser>  <type>news</type>  <sentence><![CDATA[A group of doctoral and master’s students from Georgia Tech's School of Interactive Computing believe they have created the benchmark for a home robot that can keep an entire house tidy.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2022-10-19T00:00:00-04:00</dateline>  <iso_dateline>2022-10-19T00:00:00-04:00</iso_dateline>  <gmt_dateline>2022-10-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[ndeen6@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Nathan Deen, Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>662342</item>          <item>662343</item>      </media>  <hg_media>          <item>          <nid>662342</nid>          <type>image</type>          <title><![CDATA[Housekeep]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[housekeeping-algorithm.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/housekeeping-algorithm.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/housekeeping-algorithm.jpeg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/housekeeping-algorithm.jpeg?itok=MR9DSGw9]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Housekeep is a benchmark to evaluate commonsense reasoning in the home for embodied AI. I]]></image_alt>                    <created>1666204859</created>          <gmt_created>2022-10-19 18:40:59</gmt_created>          <changed>1666204859</changed>          <gmt_changed>2022-10-19 18:40:59</gmt_changed>      </item>          <item>          <nid>662343</nid>          <type>image</type>          <title><![CDATA[Housekeep research team collage]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[authors_housekeeping-bot-copy_v2.2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/authors_housekeeping-bot-copy_v2.2.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/authors_housekeeping-bot-copy_v2.2.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/authors_housekeeping-bot-copy_v2.2.jpg?itok=X0x0LChP]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Housekeep research team collage]]></image_alt>                    <created>1666204942</created>          <gmt_created>2022-10-19 18:42:22</gmt_created>          <changed>1666204942</changed>          <gmt_changed>2022-10-19 18:42:22</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://sites.gatech.edu/ml-eccv-2022/]]></url>        <title><![CDATA[Georgia Tech at ECCV 2022]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="66442"><![CDATA[MS HCI]]></group>          <group 
id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="434391"><![CDATA[ECE M.S. Thesis Defenses]]></group>          <group id="434381"><![CDATA[ECE Ph.D. Dissertation Defenses]]></group>          <group id="434371"><![CDATA[ECE Ph.D. Proposal Oral Exams]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="1356"><![CDATA[robot]]></keyword>          <keyword tid="191487"><![CDATA[eccv]]></keyword>          <keyword tid="191488"><![CDATA[tidy]]></keyword>          <keyword tid="2483"><![CDATA[interactive computing]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="659546">  <title><![CDATA[Georgia Tech Researchers Present New Machine Learning Methods and Applications at ICML 2022]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Tech researchers are presenting newly published research this week at the International Conference on Machine Learning (ICML), one of the leading international academic conferences in machine learning, the field of computer science that gives computer systems the ability to learn from data. 
ICML 2022 runs through Saturday in Baltimore, Maryland.</p><p>The research venue is globally renowned for presenting and publishing&nbsp;cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics.</p><p>Georgia Tech researchers are featured throughout the technical program for new contributions to ML methods and applications. There are 15 papers from Tech in the main program and workshops.</p><p>ICML&rsquo;s top research tier &ndash; oral papers &ndash; includes two works from Georgia Tech&rsquo;s H. Milton Stewart School of Industrial and Systems Engineering.</p><p>Tech researchers are presenting in the following sessions:</p><ul><li>Adaptive Experimental Design and Active Learning in the Real World (ReALML)</li><li>Beyond Bayes: Paths Towards Universal Reasoning Systems</li><li>Deep Learning: SSL/GNN</li><li>Deep Learning/Optimization</li><li>Deep Learning: Theory</li><li>New Frontiers in Adversarial Machine Learning</li><li>Optimization: Convex</li><li>PM: Monte Carlo and Sampling Methods</li><li>PM: Variational Inference/Bayesian Models and Methods</li><li>Stable Conformal Prediction Sets</li><li>T: Online Learning and Bandits</li><li>Theory/Social Aspects</li><li>Topology, Algebra, and Geometry in Machine Learning</li></ul><p>Details about the ICML research from Georgia Tech are at the links below. 
To learn more about the Machine Learning Center at Georgia Tech visit&nbsp;<a href="https://ml.gatech.edu/">https://ml.gatech.edu</a>.</p><h2><strong>Georgia Tech at ICML 2022</strong></h2><p><strong>ORALS</strong></p><p>MISC: General Machine Learning Techniques<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=16842">Stable Conformal Prediction Sets</a><br />Eugene Ndiaye</p><p>Theory/Social Aspects<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=16656">Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling</a><br />Sajad Khodadadian, Pranay Sharma, Gauri Joshi, Siva Theja Maguluri</p><p><strong>PAPERS</strong></p><p>Deep Learning/Optimization<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=16096">NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks</a><br />Mustafa Burak Gurbuz, Constantine Dovrolis</p><p><strong>SPOTLIGHTS</strong></p><p>Applications<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=17018">PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance</a><br />Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, Tuo Zhao</p><p>Deep Learning: SSL/GNN<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=16824">Variational Wasserstein gradient flow</a><br />Jiaojiao Fan, Qinsheng Zhang, Amirhossein Taghvaei, Yongxin Chen</p><p>Deep Learning: Theory<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=18120">Benefits of Overparameterized Convolutional Residual Networks: Function Approximation under Smoothness Constraint</a><br />Hao Liu, Minshuo Chen, Siawpeng Er, Wenjing Liao, Tong Zhang, Tuo Zhao</p><p>Optimization: Convex<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=18332">Active Sampling for Min-Max Fairness</a><br />Jacob Abernethy, Pranjal Awasthi, Matth&auml;us Kleindessner, Jamie Morgenstern, Chris Russell, Jie 
Zhang</p><p>PM: Monte Carlo and Sampling Methods<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=16596">Hessian-Free High-Resolution Nesterov Acceleration For Sampling</a><br />Ruilin Li, Hongyuan Zha, Molei Tao</p><p>PM: Variational Inference/Bayesian Models and Methods<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=16430">Variational Sparse Coding with Learned Thresholding</a><br />Kion Fallah, Christopher J. Rozell</p><p>T: Online Learning and Bandits<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=16806">Universal and data-adaptive algorithms for model selection in linear contextual bandits</a><br />Vidya Muthukumar, Akshay Krishnamurthy</p><p>Theory<br /><a href="https://icml.cc/Conferences/2022/Schedule?showEvent=16966">ActiveHedge: Hedge meets Active Learning</a><br />Bhuvesh Kumar, Jacob Abernethy, Venkatesh Saligrama</p><p><strong>WORKSHOPS</strong></p><p>Adaptive Experimental Design and Active Learning in the Real World (ReALML) workshop<br /><a href="https://arxiv.org/pdf/2206.10120.pdf">DECAL: DEployable Clinical Active Learning</a><br />Y. Logan, M. Prabhushankar and G. 
AlRegib</p><p>Beyond Bayes: Paths Towards Universal Reasoning Systems<br /><a href="https://arxiv.org/abs/2202.11838">Explanatory Paradigms in Neural Networks</a><br />Ghassan AlRegib, Mohit Prabhushankar</p><p>New Frontiers in Adversarial Machine Learning<br /><a href="https://arxiv.org/abs/2206.08255">Gradient-Based Adversarial and Out-of-Distribution Detection</a><br />Jinsol Lee, Mohit Prabhushankar, Ghassan AlRegib</p><p>Topology, Algebra, and Geometry in Machine Learning (Workshop)<br /><a href="https://arxiv.org/abs/2206.06563">Zeroth-Order Topological Insights into Iterative Magnitude Pruning</a><br />Aishwarya Balwani, Jakob Krzyston</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1658349862</created>  <gmt_created>2022-07-20 20:44:22</gmt_created>  <changed>1658350621</changed>  <gmt_changed>2022-07-20 20:57:01</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers are presenting newly published research at the International Conference on Machine Learning (ICML), a leading academic conference in machine learning, the field of computer science that gives computer systems the ability to learn from data.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers are presenting newly published research at the International Conference on Machine Learning (ICML), a leading academic conference in machine learning, the field of computer science that gives computer systems the ability to learn from data.]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers are presenting newly published research at the International Conference on Machine Learning (ICML), a leading academic conference in machine learning, the field of computer science that gives computer systems the ability to learn from data.</p>]]></summary>  <dateline>2022-07-18T00:00:00-04:00</dateline>  <iso_dateline>2022-07-18T00:00:00-04:00</iso_dateline>  <gmt_dateline>2022-07-18 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  
<sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Joshua Preston<br /><a href="mailto:jpreston7@gatech.edu?subject=ICML%202022">Research Communications Manager</a><br />College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>659547</item>      </media>  <hg_media>          <item>          <nid>659547</nid>          <type>image</type>          <title><![CDATA[ICML 2022]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ICML22 people collage_final3.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ICML22%20people%20collage_final3.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ICML22%20people%20collage_final3.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ICML22%2520people%2520collage_final3.png?itok=tWUnySyc]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1658349915</created>          <gmt_created>2022-07-20 20:45:15</gmt_created>          <changed>1658349915</changed>          <gmt_changed>2022-07-20 20:45:15</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="659345">  <title><![CDATA[Georgia 
Tech Research In Natural Language Processing Derives Insight from Growing Volume of Digital Text]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Natural language processing (NLP) is a growing cornerstone of artificial intelligence and allows people and machines to take action based on insights in digital text. New NLP research from Georgia Tech is uncovering patterns in this text and broadening the understanding of how to build better computer applications that derive value from written language.</p><p>Georgia Tech researchers are presenting their latest work at the annual conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2022), taking place this week, July 10-15. NAACL provides a regional focus for members of the Association for Computational Linguistics (ACL) in North America as well as in Central and South America and promotes cooperation and information exchange among related scientific and professional societies.</p><p>&ldquo;Recent advances in natural language processing &ndash; especially around big models &ndash; have enabled successful applications,&rdquo; said&nbsp;<strong>Diyi Yang</strong>, assistant professor in the School of Interactive Computing and researcher in NLP. &ldquo;At the same time, we see a growing amount of evidence and concern toward the negative aspects of NLP systems, such as the bias and fragility exhibited by these models, as well as the lack of input from users.&rdquo;</p><p>Yang&rsquo;s work in computational social science and NLP focuses on how to understand human communication in social context&nbsp;and build&nbsp;socially aware&nbsp;language technologies to support human-to-human and human-computer interaction.</p><p>Her SALT Lab has accrued an impressive number of innovations in the field over the past eight months, starting with research at November 2021&rsquo;s EMNLP conference. 
SALTers, as they are called, led Georgia Tech to become the top global contributor in computational social science and cultural analytics at that venue. The 60<sup>th</sup>&nbsp;Meeting of the ACL in Dublin followed in May with multiple SALT studies, including a best paper. Yang&rsquo;s group has six papers at this week&rsquo;s NAACL.</p><p>&ldquo;We hope to build NLP systems that are more user centric, more robust, and more aware of human factors,&rdquo; said Yang. &ldquo;Our NAACL works are in this direction, covering robustness, toxicity detection, and generalization to new settings.&rdquo;</p><p>Yang&rsquo;s aspirations for the field are shared by her GT peers, who together have work in the following tracks at NAACL:</p><ul><li>Ethics, Bias, Fairness</li><li>Information Extraction</li><li>Information Retrieval</li><li>Interpretability and Analysis of Models for NLP</li><li>Machine Learning</li><li>Machine Learning for NLP</li><li>Semantics: Sentence-level Semantics and Textual Inference</li></ul><p>Georgia Tech&rsquo;s research paper acceptances in the main track at NAACL are below. 
To learn more about NLP and machine learning research at Georgia Tech visit&nbsp;<a href="https://ml.gatech.edu/">https://ml.gatech.edu</a>.</p><p><strong>GEORGIA TECH RESEARCH AT NAACL 2022</strong>&nbsp;(main paper track)</p><p><strong>Ethics, Bias, Fairness</strong></p><p>Explaining Toxic Text via Knowledge Enhanced Text Generation<br /><em>Rohit Sridhar, Diyi Yang</em></p><p><strong>Information Extraction</strong></p><p>Self-Training with Differentiable Teacher<br /><em>Simiao Zuo, Yue Yu, Chen Liang, Haoming Jiang, Siawpeng Er, Chao Zhang, Tuo Zhao, Hongyuan Zha</em></p><p><strong>Information Retrieval</strong></p><p>CERES: Pretraining of Graph-Conditioned Transformer for Semi-Structured Session Data<br /><em>Rui Feng, Chen Luo, Qingyu Yin, Bing Yin, Tuo Zhao, Chao Zhang</em></p><p><strong>Interpretability and Analysis of Models for NLP</strong></p><p>Identifying and Mitigating Spurious Correlations for Improving Robustness in NLP Models<br /><em>Tianlu Wang, Rohit Sridhar, Diyi Yang, Xuezhi Wang</em></p><p>Measure and Improve Robustness in NLP Models: A Survey<br /><em>Xuezhi Wang, Haohan Wang, Diyi Yang</em></p><p>Reframing Human-AI Collaboration for Generating Free-Text Explanations<br /><em>Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Mark Riedl, Yejin Choi</em></p><p><strong>Machine Learning</strong></p><p>AcTune: Uncertainty-Aware Active Self-Training for Active Fine-Tuning of Pretrained Language Models<br /><em>Yue Yu, Lingkai Kong, Jieyu Zhang, Rongzhi Zhang, Chao Zhang</em></p><p>MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation<br /><em>Simiao Zuo, Qingru Zhang, Chen Liang, Pengcheng He, Tuo Zhao, Weizhu Chen</em></p><p><strong>Machine Learning for NLP</strong></p><p>TreeMix: Compositional Constituency-based Data Augmentation for Natural Language Understanding<br /><em>Le Zhang, Zichao Yang, Diyi Yang</em></p><p><strong>NLP Applications</strong></p><p>Cryptocoin Bubble Detection: A New Dataset, Task &amp; Hyperbolic 
Models<br /><em>Ramit Sawhney, Shivam Agarwal, Vivek Mittal, Paolo Rosso, Vikram Nanda, Sudheer Chava</em></p><p><strong>Semantics: Sentence-level Semantics and Textual Inference</strong></p><p>SEQZERO: Few-shot Compositional Semantic Parsing with Sequential Prompts and Zero-shot Models<br /><em>Jingfeng Yang, Haoming Jiang, Qingyu Yin, Danqing Zhang, Bing Yin, Diyi Yang</em></p><p>SUBS: Subtree Substitution for Compositional Semantic Parsing<br /><em>Jingfeng Yang, Le Zhang, Diyi Yang</em></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1657563571</created>  <gmt_created>2022-07-11 18:19:31</gmt_created>  <changed>1657565471</changed>  <gmt_changed>2022-07-11 18:51:11</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[New NLP research from Georgia Tech is uncovering patterns in digital text and broadening the understanding of how to build better computer applications that derive value from written language.]]></teaser>  <type>news</type>  <sentence><![CDATA[New NLP research from Georgia Tech is uncovering patterns in digital text and broadening the understanding of how to build better computer applications that derive value from written language.]]></sentence>  <summary><![CDATA[<p>Natural language processing (NLP) is a growing cornerstone of artificial intelligence and allows people and machines to take action based on insights in digital text. 
New NLP research from Georgia Tech is uncovering patterns in this text and broadening the understanding of how to build better computer applications that derive value from written language.</p>]]></summary>  <dateline>2022-07-11T00:00:00-04:00</dateline>  <iso_dateline>2022-07-11T00:00:00-04:00</iso_dateline>  <gmt_dateline>2022-07-11 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston7@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston7@gatech.edu?subject=NAACL%202022">Joshua Preston</a><br />Research Communications Manager<br />College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>659346</item>      </media>  <hg_media>          <item>          <nid>659346</nid>          <type>image</type>          <title><![CDATA[NAACL 2022]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[main graphic_gt_naacl22_horiz copy.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/main%20graphic_gt_naacl22_horiz%20copy.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/main%20graphic_gt_naacl22_horiz%20copy.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/main%2520graphic_gt_naacl22_horiz%2520copy.jpg?itok=EzWA2hGw]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1657563706</created>          <gmt_created>2022-07-11 18:21:46</gmt_created>          <changed>1657563706</changed>          <gmt_changed>2022-07-11 18:21:46</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group 
id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="658913">  <title><![CDATA[Georgia Tech Presents Latest in Machine Learning Research at Computer Vision and Pattern Recognition Conference June 19-24]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Institute of Technology researchers will present new technical findings in artificial intelligence, machine learning, and computer vision research and applications at the Computer Vision and Pattern Recognition (CVPR) conference taking place from June 19-24, 2022, in New Orleans, Louisiana, and virtually.</p><p>The institute is a leading contributor in the technical program and researchers will present 11 papers in the following tracks:</p><ul><li>3D from multi-view and sensors</li><li>Datasets and evaluation</li><li>Navigation and autonomous driving</li><li>Recognition: detection, categorization, retrieval</li><li>Self-&amp; semi-&amp; meta- &amp; unsupervised learning</li><li>Vision + language</li><li>Vision applications and systems</li></ul><p>&ldquo;Researchers in the Machine Learning Center at Georgia Tech aim to research and develop innovative and sustainable technologies using machine learning and artificial intelligence that serve broader communities in socially and ethically responsible ways,&rdquo; said Irfan Essa, director of the center and senior associate dean in the College of Computing. 
&ldquo;The GT research at CVPR reflects this broader goal, and we are actively building pathways to connect our experts to explore the implications of this technology in the world.&rdquo;</p><p>Georgia Tech researchers at CVPR are collaborating in their current work with more than 100 peer authors from dozens of organizations that span industry, government, and academia.</p><p>The conference will draw leading authors, academics, and experts in key areas of artificial intelligence with an expected crowd of more than 7,500 attendees this year. Hosted by the IEEE Computer Society (IEEE CS) and the Computer Vision Foundation (CVF), CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses.</p><p>ML@GT has created an <a href="https://public.tableau.com/views/CVPR2022/Dashboard1?:showVizHome=no">interactive visual analysis</a> of the CVPR 2022 papers program to show current trends in the field. The analysis breaks down the number of papers and authors by research area and allows users to explore areas of interest, including oral and poster papers on a particular topic. 
Research can also be narrowed down to particular institutions.</p><p>To learn more about Georgia Tech work at CVPR, details and paper links are below.</p><p>&nbsp;</p><h2><strong>Georgia Tech Research at CVPR 2022</strong></h2><p>&nbsp;</p><p><strong>3D FROM MULTI-VIEW AND SENSORS</strong></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Hruby_Learning_To_Solve_Hard_Minimal_Problems_CVPR_2022_paper.html"><strong>Learning To Solve Hard Minimal Problems</strong></a><br /><em>Petr Hruby, Timothy Duff, Anton Leykin, Tomas Pajdla</em></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Kundu_Panoptic_Neural_Fields_A_Semantic_Object-Aware_Neural_Scene_Representation_CVPR_2022_paper.html"><strong>Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation</strong></a><br /><em>Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Caroline Pantofaru, Leonidas J. Guibas, Andrea Tagliasacchi, Frank Dellaert, Thomas Funkhouser</em></p><p>&nbsp;</p><p><strong>DATASETS AND EVALUATION</strong></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Grauman_Ego4D_Around_the_World_in_3000_Hours_of_Egocentric_Video_CVPR_2022_paper.html"><strong>Ego4D: Around the World in 3,000 Hours of Egocentric Video</strong></a><br /><em>Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonz&aacute;lez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, J&aacute;chym Kol&aacute;ř, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya 
Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbel&aacute;ez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik</em></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Bryant_Multi-Dimensional_Nuanced_and_Subjective_-_Measuring_the_Perception_of_Facial_CVPR_2022_paper.html"><strong>Multi-Dimensional, Nuanced and Subjective &ndash; Measuring the Perception of Facial Expressions</strong></a><br /><em>De&#39;Aira Bryant, Siqi Deng, Nashlie Sephus, Wei Xia, Pietro Perona</em></p><p>&nbsp;</p><p><strong>NAVIGATION AND AUTONOMOUS DRIVING</strong></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Partsey_Is_Mapping_Necessary_for_Realistic_PointGoal_Navigation_CVPR_2022_paper.html"><strong>Is Mapping Necessary for Realistic PointGoal Navigation?</strong></a><br /><em>Ruslan Partsey, Erik Wijmans, Naoki Yokoyama, Oles Dobosevych, Dhruv Batra, Oleksandr Maksymets</em></p><p>&nbsp;</p><p><strong>RECOGNITION: DETECTION, CATEGORIZATION, RETRIEVAL</strong></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Li_Cross-Domain_Adaptive_Teacher_for_Object_Detection_CVPR_2022_paper.html"><strong>Cross-Domain Adaptive Teacher for Object Detection</strong></a><br /><em>Yu-Jhe Li, Xiaoliang Dai, Chih-Yao Ma, Yen-Cheng Liu, Kan Chen, Bichen Wu, Zijian He, Kris Kitani, Peter Vajda</em></p><p><a 
href="https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_Group_R-CNN_for_Weakly_Semi-Supervised_Object_Detection_With_Points_CVPR_2022_paper.html"><strong>Group R-CNN for Weakly Semi-Supervised Object Detection With Points</strong></a><br /><em>Shilong Zhang, Zhuoran Yu, Liyang Liu, Xinjiang Wang, Aojun Zhou, Kai Chen</em></p><p>&nbsp;</p><p><strong>SELF-&amp; SEMI-&amp; META- &amp; UNSUPERVISED LEARNING</strong></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Unbiased_Teacher_v2_Semi-Supervised_Object_Detection_for_Anchor-Free_and_Anchor-Based_CVPR_2022_paper.html"><strong>Unbiased Teacher v2: Semi-Supervised Object Detection for Anchor-Free and Anchor-Based Detectors</strong></a><br /><em>Yen-Cheng Liu, Chih-Yao Ma, Zsolt Kira</em></p><p>&nbsp;</p><p><strong>VISION + LANGUAGE</strong></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Kuo_Beyond_a_Pre-Trained_Object_Detector_Cross-Modal_Textual_and_Visual_Context_CVPR_2022_paper.html"><strong>Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for Image Captioning</strong></a><br /><em>Chia-Wen Kuo, Zsolt Kira</em></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Ramrakhya_Habitat-Web_Learning_Embodied_Object-Search_Strategies_From_Human_Demonstrations_at_Scale_CVPR_2022_paper.html"><strong>Habitat-Web: Learning Embodied Object-Search Strategies From Human Demonstrations at Scale</strong></a><br /><em>Ram Ramrakhya, Eric Undersander, Dhruv Batra, Abhishek Das</em></p><p>&nbsp;</p><p><strong>VISION APPLICATIONS AND SYSTEMS</strong></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Datta_Episodic_Memory_Question_Answering_CVPR_2022_paper.html"><strong>Episodic Memory Question Answering</strong></a><br /><em>Samyak Datta, Sameer Dharur, Vincent Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, Devi Parikh</em></p><p>&nbsp;</p><p><strong>DEMO</strong></p><p><a 
href="https://openaccess.thecvf.com/content/CVPR2022/html/Vellaichamy_DetectorDetective_Investigating_the_Effects_of_Adversarial_Examples_on_Object_Detectors_CVPR_2022_paper.html"><strong>DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors</strong></a><br /><em>Sivapriya Vellaichamy, Matthew Hull, Zijie J. Wang, Nilaksh Das, ShengYun Peng, Haekyu Park, Duen Horng (Polo) Chau</em></p><p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Lee_VisCUIT_Visual_Auditor_for_Bias_in_CNN_Image_Classifier_CVPR_2022_paper.html"><strong>VisCUIT: Visual Auditor for Bias in CNN Image Classifier</strong></a><br /><em>Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng (Polo) Chau</em></p><p>&nbsp;</p><p><strong>WORKSHOP</strong></p><p><a href="https://sites.google.com/view/mabe22/">Multi-Agent Behavior: Representation, Modeling, Measurement, and Applications</a></p><p><strong>Learning Behavior Representations Through Multi-Timescale Bootstrapping</strong><br /><em>Mehdi Azabou, Michael Mendelson, Maks Sorokin, Shantanu Thakoor, Nauman Ahad, Carolina Urzay, Mohammad Gheshlaghi Azar, Eva L. 
Dyer</em></p><p>&nbsp;</p><p>&nbsp;</p><p>&nbsp;</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1655307657</created>  <gmt_created>2022-06-15 15:40:57</gmt_created>  <changed>1655307931</changed>  <gmt_changed>2022-06-15 15:45:31</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Institute of Technology researchers will present new technical findings in artificial intelligence, machine learning, and computer vision research and applications at the Computer Vision and Pattern Recognition (CVPR) conference June 19-24.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Institute of Technology researchers will present new technical findings in artificial intelligence, machine learning, and computer vision research and applications at the Computer Vision and Pattern Recognition (CVPR) conference June 19-24.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2022-06-15T00:00:00-04:00</dateline>  <iso_dateline>2022-06-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2022-06-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston7@gatech.edu?subject=CVPR%20news">Josh Preston</a><br />Research Communications Manager<br />College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>658915</item>      </media>  <hg_media>          <item>          <nid>658915</nid>          <type>image</type>          <title><![CDATA[CVPR 2022 visual analysis]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Data Viz_CVPR.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Data%20Viz_CVPR.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Data%20Viz_CVPR.jpg]]></image_full_path>   
         <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Data%2520Viz_CVPR.jpg?itok=c78PSd1Z]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1655307891</created>          <gmt_created>2022-06-15 15:44:51</gmt_created>          <changed>1655307891</changed>          <gmt_changed>2022-06-15 15:44:51</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="657942">  <title><![CDATA[New Framework for Cooperative Bots Mimics High-Functioning Human Teams, Decreases Risks from Unreliable Bots]]></title>  <uid>27592</uid>  <body><![CDATA[<p>A Georgia Institute of Technology&nbsp;research group in the School of Interactive Computing has developed a robotics system that exceeds existing standards for collaborative bots that work independently to achieve a shared goal.</p><p>The system intelligently increases the information shared among the bots and <a href="https://www.youtube.com/watch?v=rK_itCF9hPc">allows for improved cooperation</a>. The aim is to model high-functioning human teams. 
It also creates resiliency against bad or unreliable team bots that may hinder the overall programmed goal.</p><p>&ldquo;Intuitively, the idea behind our new framework &mdash; InfoPG &mdash;&nbsp;is that a robot agent goes back and forth on what it thinks it <em>should</em> do with its teammates, and then the teammates will update on what they think is <em>best</em> to do,&rdquo; said <strong>Esmaeil Seraj</strong>, a Ph.D. student in the <a href="https://core-robotics.gatech.edu/">CORE Robotics Lab</a> and a researcher on the project. &ldquo;They do this until the decision is deeply rationalized and reasoned about.&rdquo;</p><p>The work focuses on artificial agents on a decentralized team &mdash; in simulations or the real world &mdash;&nbsp;working in concert toward a specific task. Applications could include surgery, search and rescue, and disaster response, among others.</p><p>InfoPG facilitates communication between the artificial agents on an iterative basis and allows for actions and decisions that mimic human teams working at optimal levels.</p><p>&ldquo;This research is in fact inspired by how high-performing human teams act,&rdquo; said Seraj.</p><p>&ldquo;Humans normally use k-level thinking &mdash; such as, &lsquo;what I think you will do, what I think you think I will do, and so on&rsquo; &mdash; to rationalize their actions in a team,&rdquo; he said. &ldquo;The basic thought is that the more you know about your teammate&#39;s strategy, the easier it is for you to take the best action possible.&rdquo;</p><p>Using this approach, the researchers designed InfoPG to make one bot&rsquo;s decisions conditional on those of its teammates. They ran simulations using simple games like Pong and complex games like StarCraft II.</p><p>In the latter &mdash; where the goal is for one team of agents to defeat another &mdash; the InfoPG architecture showed advanced strategies. 
Seraj said that in one case agents learned to form a triangular formation, sacrificing the front agent while the other two eliminated the enemy.</p><p>Without InfoPG in play, an agent abandoned its team to save itself.</p><p>The new method also limits the disruption a bad bot on the team might cause.</p><p>&ldquo;Coordinating actions with such a fraudulent agent in a collaborative multi-agent setting can be detrimental,&rdquo; said <strong>Matthew Gombolay</strong>, assistant professor in the School of Interactive Computing and director of the CORE Robotics Lab. &ldquo;We need to ensure the integrity of robot teams in real-world applications where bots might be tasked to save lives or help people and organizations extend their capabilities.&rdquo;</p><p>Results of the work show InfoPG&rsquo;s performance exceeds various baselines in learning cooperative policies for multi-agent reinforcement learning. The researchers plan to move the system from simulation into real robots, such as controlling a swarm of drones to help surveil and fight wildfires.</p><p>The research is published in the 2022 Proceedings of the International Conference on Learning Representations. The paper, <em>Iterated Reasoning with Mutual Information in Cooperative and Byzantine Decentralized Teaming</em>, is co-authored by computer science major <strong>Sachin G. Konan</strong>, Seraj, and Gombolay.</p><p>This work was sponsored by the Office of Naval Research under grant N00014-19-1-2076 and the Naval Research Lab (NRL) under grant N00173-20-1-G009. 
The researchers&rsquo; views and statements are based on their findings and do not necessarily reflect those of the funding agencies.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1651672383</created>  <gmt_created>2022-05-04 13:53:03</gmt_created>  <changed>1651774346</changed>  <gmt_changed>2022-05-05 18:12:26</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A Georgia Tech research group in the School of Interactive Computing has developed a robotics system that exceeds existing standards for collaborative bots that work independently to achieve a shared goal. ]]></teaser>  <type>news</type>  <sentence><![CDATA[A Georgia Tech research group in the School of Interactive Computing has developed a robotics system that exceeds existing standards for collaborative bots that work independently to achieve a shared goal. ]]></sentence>  <summary><![CDATA[<p>A Georgia Tech research group in the School of Interactive Computing has developed a robotics system that exceeds existing standards for collaborative bots that work independently to achieve a shared goal.</p>]]></summary>  <dateline>2022-05-04T00:00:00-04:00</dateline>  <iso_dateline>2022-05-04T00:00:00-04:00</iso_dateline>  <gmt_dateline>2022-05-04 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston7@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston7@gatech.edu">Joshua Preston</a><br />Research Communications Manager<br />College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>657945</item>          <item>657946</item>      </media>  <hg_media>          <item>          <nid>657945</nid>          <type>image</type>          <title><![CDATA[Improving collaboration in decentralized teams of bots]]></title>          <body><![CDATA[]]></body>                      
<image_name><![CDATA[promo_graphic_ICLR22_collab robots.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/promo_graphic_ICLR22_collab%20robots.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/promo_graphic_ICLR22_collab%20robots.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/promo_graphic_ICLR22_collab%2520robots.jpg?itok=W2PHnZBT]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1651672727</created>          <gmt_created>2022-05-04 13:58:47</gmt_created>          <changed>1651672727</changed>          <gmt_changed>2022-05-04 13:58:47</gmt_changed>      </item>          <item>          <nid>657946</nid>          <type>image</type>          <title><![CDATA[Multiwalker bots coordinating to carry object]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[robot cooperation in decentralized team.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/robot%20cooperation%20in%20decentralized%20team.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/robot%20cooperation%20in%20decentralized%20team.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/robot%2520cooperation%2520in%2520decentralized%2520team.png?itok=nb33E2El]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1651672836</created>          <gmt_created>2022-05-04 14:00:36</gmt_created>          <changed>1651672836</changed>          <gmt_changed>2022-05-04 14:00:36</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      
</files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="656188">  <title><![CDATA[Jing Li and Turgay Ayer Named Virginia C. and Joseph C. Mello Chairs in ISyE]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech&rsquo;s H. Milton Stewart School of Industrial and Systems Engineering announced two new appointments to the Virginia C. and Joseph C. Mello Chair. Professor <strong>Jing Li</strong> and Associate Professor <strong>Turgay Ayer</strong> both earned the designation earlier this year, which recognizes faculty leaders in the field of health care delivery operations.</p><p>Li, also core faculty in the <a href="http://ml.gatech.edu">Center for Machine Learning at Georgia Tech</a>, currently focuses on the development of machine learning algorithms for precision medicine, with a particular focus on the brain. She leverages collaborations with radiologists and neurologists to investigate brain diseases like cancer, Alzheimer&rsquo;s, and post-traumatic headache after brain injury.</p><p>Today, technological advances have produced more data than ever before &ndash; imaging, genomics, mobile health data, and more &ndash; allowing researchers to develop more personalized algorithms for diagnosis and prognosis.</p><p>&ldquo;What disease do they have? How severe is it? How will the disease change in the future? 
Are they on track to recovery, or are they going to get worse?&rdquo; Li said.</p><p>Using the available datasets from imaging, genomics, clinical records, and mobile apps and wearables, Li and her collaborators are building personalized models for diagnosis and treatment in each of these areas that can lead to early detection and more effective outcomes.</p><p>Ayer, who also holds an appointment at Emory Medical School and serves as a senior advisor to the CDC, focuses on health care analytics and socially responsible business analytics with an emphasis on practice-focused research. In recent work, he has attempted to build more robust and effective virtual trials for medical screening, diagnosis, and treatment using large-scale mathematical models.</p><p>If you look at the gold standard in medicine and clinical science &ndash; randomized controlled trials &ndash; they generally utilize A/B testing strategies. But what if there are thousands of strategies to compare, not just Strategy A and Strategy B?</p><p>&ldquo;In a recent study, we looked at multi-modality cancer screening strategies for cancer detection in gene mutation carriers,&rdquo; Ayer said. &ldquo;You ask questions like: Should you use ultrasound screening or MRI screening? How about mammography screening? Or maybe mammography plus ultrasound screening? At what age should you start &ndash; 25 to 30? Or 35?</p><p>&ldquo;At what age should you transfer from a less intensive screening to another? Is that cost-effective? And what if we are solving this problem for the United States versus sub-Saharan Africa, where resources are more limited? 
There are millions and billions of scenarios, and you can&rsquo;t design a randomized controlled trial that would effectively compare those.&rdquo;</p><p>Ayer&rsquo;s work has spanned long-term chronic disease, both communicable and non-communicable &ndash; diseases like COVID-19 for the former and different cancers for the latter.</p><p>Both Li and Ayer said the chair appointment would assist in their work.</p><p>&ldquo;It&rsquo;s a great honor and recognition,&rdquo; Li said. &ldquo;I think going forward this will help me to pursue bigger efforts and initiatives, engaging people with a variety of expertise. This research needs collaboration across different disciplines.&rdquo;</p><p>&ldquo;It helps to bring more visibility to the work,&rdquo; Ayer echoed. &ldquo;This will also help us scale up the resources that we have within our communities and reach out to more collaborators.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1646861813</created>  <gmt_created>2022-03-09 21:36:53</gmt_created>  <changed>1647460659</changed>  <gmt_changed>2022-03-16 19:57:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The Virginia C. and Joseph C. Mello Chairs recognize faculty leaders in the field of health care delivery operations.]]></teaser>  <type>news</type>  <sentence><![CDATA[The Virginia C. and Joseph C. 
Mello Chairs recognize faculty leaders in the field of health care delivery operations.]]></sentence>  <summary><![CDATA[<p>Professor <strong>Jing Li</strong> and Associate Professor <strong>Turgay Ayer</strong> both earned the designation earlier this year, which recognizes faculty leaders in the field of health care delivery operations.</p>]]></summary>  <dateline>2022-03-09T00:00:00-05:00</dateline>  <iso_dateline>2022-03-09T00:00:00-05:00</iso_dateline>  <gmt_dateline>2022-03-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Manager</p><p><a href="mailto:david.mitchell@isye.gatech.edu">david.mitchell@isye.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>656187</item>      </media>  <hg_media>          <item>          <nid>656187</nid>          <type>image</type>          <title><![CDATA[Jing Li and Turgay Ayer]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Jing Li and Turgay Ayer.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Jing%20Li%20and%20Turgay%20Ayer.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Jing%20Li%20and%20Turgay%20Ayer.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Jing%2520Li%2520and%2520Turgay%2520Ayer.png?itok=OFldn4R6]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Jing Li and Turgay Ayer]]></image_alt>                    <created>1646861401</created>          <gmt_created>2022-03-09 21:30:01</gmt_created>          <changed>1646861401</changed>          <gmt_changed>2022-03-09 21:30:01</gmt_changed>      </item>      
</hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="1242"><![CDATA[School of Industrial and Systems Engineering (ISYE)]]></group>          <group id="1250"><![CDATA[Center for Health and Humanitarian Systems (CHHS)]]></group>          <group id="1243"><![CDATA[The Supply Chain and Logistics Institute (SCL)]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="190142"><![CDATA[ISYE; college of engineering; engineering; health care; Virginia c. and Joseph c. Mello; faculty]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="654779">  <title><![CDATA[NSF Grant Could Lead to Better Computational Tools for Human Fact Checkers]]></title>  <uid>27592</uid>  <body><![CDATA[<p>The spread of misinformation on social media platforms has presented immediate challenges with potentially dire consequences, particularly around topics like public health and election integrity.</p><p>A $750,000 grant from the National Science Foundation could help give human fact checkers the computational tools needed to accurately and efficiently push back against this growing swell.</p><p>As part of a larger $21 million investment made by the NSF, Georgia Tech researchers and collaborators at the University of Wisconsin and Washington State University will examine the spread of misinformation, how it is fact-checked, the challenges that prevent accurate and efficient correction, and the messaging within those corrections that may impact public trust. 
The team will comprise computer scientists, journalists, mass communications experts, psychologists, and political scientists.</p><p>&ldquo;Right now, fact checkers look at trending articles, do a deep dive, and spend hours determining whether an article is misinformation,&rdquo; said School of Computational Science and Engineering Assistant Professor&nbsp;<strong>Srijan</strong>&nbsp;<strong>Kumar</strong>, a co-principal investigator on the project. &ldquo;Within this are a number of biases: popularity bias, in that only popular articles are examined; fact-checking is slow; and it is language- and region-restricted. That&rsquo;s where we want to bring a computational approach to improve the process.&rdquo;</p><p>The Georgia Tech team, which also includes co-PI&nbsp;<strong>Munmun</strong>&nbsp;<strong>De</strong>&nbsp;<strong>Choudhury</strong>&nbsp;of the School of Interactive Computing, will focus on the computational element of the project by developing machine learning and artificial intelligence-based detectors. They will create algorithms that look not just at a piece of information but also at patterns like who is sharing it, how it spreads through the network, and other attributes that can be fused together to create more efficient detection models.</p><p>&ldquo;We want to empower fact checkers to be able to do their jobs more efficiently,&rdquo; De Choudhury said. &ldquo;How can they detect misinformation, provide corrections, and do so in a way that encourages public trust? That involves people of many different fields as you navigate complex ecosystems of values, ideologies, and beliefs.&rdquo;</p><p>The project will look at misinformation in two of the most pressing areas: public health, including COVID-19 and vaccination misinformation, and election integrity. 
Information gleaned from this project and corresponding solutions, the researchers say, could be applied to additional areas of misinformation in the future.</p><p>The project is titled&nbsp;<em>How Large-Scale Identification and Intervention Can Empower Professional Fact-Checkers to Improve Democracy and Public Health</em>. It grows out of a previous grant from the NSF for Rapid Response Research (RAPID), titled&nbsp;<em>Tackling the Psychological Impact of the COVID-19 Crisis</em>, which was also led by De Choudhury and Kumar.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1643212196</created>  <gmt_created>2022-01-26 15:49:56</gmt_created>  <changed>1643212507</changed>  <gmt_changed>2022-01-26 15:55:07</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A $750,000 grant from the National Science Foundation could help give human fact checkers the computational tools needed to accurately and efficiently push back against this growing swell.]]></teaser>  <type>news</type>  <sentence><![CDATA[A $750,000 grant from the National Science Foundation could help give human fact checkers the computational tools needed to accurately and efficiently push back against this growing swell.]]></sentence>  <summary><![CDATA[<p>A $750,000 grant from the National Science Foundation could help give human fact checkers the computational tools needed to accurately and efficiently push back against this growing swell.</p>]]></summary>  <dateline>2022-01-25T00:00:00-05:00</dateline>  <iso_dateline>2022-01-25T00:00:00-05:00</iso_dateline>  <gmt_dateline>2022-01-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell<br />Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>654781</item>      </media>  
<hg_media>          <item>          <nid>654781</nid>          <type>image</type>          <title><![CDATA[NSF project Co-PIs]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[hg image_kumar and de choudhury.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/hg%20image_kumar%20and%20de%20choudhury.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/hg%20image_kumar%20and%20de%20choudhury.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/hg%2520image_kumar%2520and%2520de%2520choudhury.png?itok=FR4gvobg]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1643212407</created>          <gmt_created>2022-01-26 15:53:27</gmt_created>          <changed>1643212407</changed>          <gmt_changed>2022-01-26 15:53:27</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="654268">  <title><![CDATA[Modeling Water-cleansing Wetlands in Extreme Weather]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Rising temperatures lead to increases in precipitation as well as droughts. But what impact will these weather extremes, especially heavier precipitation, have on the Earth&rsquo;s most effective water cleansers &ndash; wetland sediments? 
</p><p>That question is driving a new $1 million, three-year grant awarded to a Georgia Institute of Technology interdisciplinary research team of geochemistry, biology, and applied mechanics experts.</p><p>The award is part of the Department of Energy&rsquo;s <a href="https://www.energy.gov/science/articles/department-energy-announces-77-million-earth-environmental-systems-modeling">$7.7 million funding</a> of 11 studies to improve the understanding of Earth system predictability and the Department&rsquo;s Energy Exascale Earth System Model, a state-of-the-science climate model. The researchers intend to develop a new scalable model that can analyze and ultimately predict where and when sediment disruptions are most likely to occur.&nbsp;</p><p><strong>Wetlands &ndash; Where Water and Land Meet</strong></p><p>Found at the boundary between land and water, wetlands function as natural sponges that trap, cleanse, and slowly release surface water &ndash; they also serve as a natural climate change buffer, since they act as carbon &ldquo;sinks,&rdquo; storing vast amounts of carbon and methane in the ground. Swamps, marshes, and bogs are all examples of wetlands. What isn&rsquo;t known is whether wetlands that become damaged or degraded from excess water will still absorb carbon at the same level.</p><p>By better understanding how wetlands work, Georgia Tech hopes to shed light on how they will function with more frequent and more intense rainstorms.</p><p>&ldquo;A lot of work has been done in polar regions where there has been melting because of global warming, which has been shown to release a lot of methane. 
That&rsquo;s the main motivation behind the work we&rsquo;re going to do,&rdquo; said the project&rsquo;s principal investigator, Martial Taillefert, a geochemist and professor in the <a href="https://eas.gatech.edu/">School of Earth and Atmospheric Sciences</a>.&nbsp;</p><p>As water levels rise, belowground oxygen is consumed very quickly, he explained. Microbial processes then take over, producing methane as well as carbon dioxide that can escape to the atmosphere.</p><p>In this project, Taillefert will characterize the physical and chemical processes taking place in a wetland, mainly using electrochemical sensors deployed at different locations in the wetland. Taillefert will be able to follow the chemical response to microbial processes and study how perturbations of the water cycle affect the release of greenhouse gases. This data will then be used to fine-tune the models that will predict greenhouse gas emissions.</p><p><strong>Micro to Macro Scale</strong></p><p>Initial studies will involve samples on the scale of a few grains of soil, but the researchers hope to eventually run simulations on the scale of a riverbed or watershed (where surface water drains into a common stream channel or other body of water).</p><p>&ldquo;The goal is twofold &ndash; first, to satisfy our scientific curiosity and understand how those microbial processes can actually change the level of oxygen and trigger greenhouse gas emissions, and second, to develop a model that can predict what processes will be in the next cycle to better prepare and perhaps reduce carbon emissions in some cases,&rdquo; said project collaborator Chlo&eacute; Arson, associate professor of <a href="https://ce.gatech.edu/academics/groups/geosystems">Geosystems Engineering</a> in the <a href="https://ce.gatech.edu/">School of Civil and Environmental Engineering</a>.&nbsp;</p><p>While Taillefert focuses on the chemistry component and Arson on the mathematical modeling, collaborator Thomas DiChristina 
serves as the microbe expert.</p><p>&ldquo;My lab looks at what kind of hidden microbial processes are going on that we can&#39;t detect with the sensors because the methane is getting recycled so fast in the ground,&rdquo; said DiChristina, professor in the <a href="https://biosciences.gatech.edu/">School of Biological Sciences</a>.&nbsp;</p><p>DiChristina will be looking at multiple gene expressions without having to grow the bacteria in a laboratory.&nbsp;</p><p>&ldquo;Genomics allows you to deduce expression of metabolic potential. For example, which gene is producing methane, and which gene is inhibiting methane production,&rdquo; he said.</p><p>Since methane won&rsquo;t be released into the atmosphere unless a certain condition occurs, the model will enable researchers to predict under what conditions methane would pour out of the sediments versus being retained and recycled, DiChristina explained.</p><p>The calculations that predict how much methane and carbon dioxide go into the atmosphere depend on an accurate description of what&#39;s happening in the subsurface &ndash; in the sediment and in groundwater, Taillefert added.&nbsp;</p><p>&ldquo;We cannot yet quantify that really well. We think using our approach will enable us to get more data and a better understanding of how the process works and translate that knowledge into the models,&rdquo; he said.</p><p>Taillefert and DiChristina have been working on improving Georgia Tech&rsquo;s models for predicting these processes for over three decades. 
With this latest award, they hope to better understand and model the processes of oxidation and reduction that change the microstructure of sediments during flood cycles.</p><p><strong>New Research Thrust &ndash; AI and Machine Learning</strong></p><p>Arson is most interested in predicting the changes in the size, shape, and arrangement of the grains of soil to understand how the pore space between the grains is affected by biochemical reactions.&nbsp;</p><p>&ldquo;Understanding the evolution of the porous space will help predict transport properties within the sediments, and the expected emissions of greenhouse gases,&rdquo; said Arson.&nbsp;</p><p>An expert in applied mechanics, she will use AI to build a model that can single out dominant reactions within the soil microstructure and disregard those that have minimal impact. Such insight will help simplify the model and allow it to more quickly correlate certain criteria that lead to spikes in greenhouse gas emissions.&nbsp;</p><p>&ldquo;If you have a predictive model that actually attempts to explain the processes, as well as predicting them, then you have a more versatile approach that can be transferred to many other sites or environments,&rdquo; she said. 
&ldquo;I also could envision using this model and the machine learning algorithm to map locations where you expect higher emissions, and identify sites as risky, moderately risky, or safe.&rdquo;</p><p>Georgia Tech is partnering with two Department of Energy (DOE) national laboratories: <a href="https://srnl.doe.gov/">Savannah River National Laboratory</a> (SRNL) in Aiken, SC, and <a href="https://www.anl.gov/">Argonne National Laboratory</a> near Chicago, IL.</p><p>&ldquo;Georgia Tech has a unique capability here that we don&#39;t have, and that capability is this combination of using state-of-the-art genomics capabilities, along with state-of-the-art electrochemistry, two attributes that Georgia Tech is internationally known for,&rdquo; said Daniel Kaplan, senior research fellow with SRNL, which will serve as the study site.</p><p>Kaplan noted that Georgia Tech&rsquo;s research fits perfectly with the DOE&rsquo;s goal to better understand how wetlands function, enabling scientists to better understand their role in controlling water quality.</p><p>&ldquo;Wetlands do a great job of cleaning out all the impurities and getting rid of a lot of the contaminants to clean the water up as it moves through a watershed,&rdquo; said Kaplan.&nbsp;</p><p><strong>Atomic-scale Analysis</strong></p><p>Argonne National Laboratory plans to take Georgia Tech&rsquo;s sediment samples and examine them at the scale of individual atoms and electrons using the Advanced Photon Source (APS), a football-field-sized synchrotron that produces x-rays 10 billion times brighter than those produced at a doctor&rsquo;s office.</p><p>&ldquo;The fundamental reactions that are controlling the quality of the water happen at the microorganism or nano scale,&rdquo; said Kenneth Kemner, senior physicist and group leader of the Molecular Environmental Science Group at the Argonne National Lab. 
&ldquo;By bringing all the different ways of looking at wetlands together, we&#39;ll actually have a much deeper understanding of how they function.&rdquo;</p><p>From one of several x-ray ports operated 24x7, the APS can capture images of single microorganisms about 100 times smaller than the diameter of a human hair. In fact, when the APS first came online, it successfully analyzed hair strands of Ludwig van Beethoven, with the analysis deducing that the great German composer suffered from lead poisoning.</p><p>Kemner acknowledged that Georgia Tech brings unique capabilities to the wetlands research effort. He explained that answering hard questions such as those posed by climate change will require this transdisciplinary and integrated problem-solving approach.&nbsp;</p><p><em>Additional unfunded collaborators for this study include&nbsp;Christa Pennacchio, PMO Lead with the Joint Genome Institute (JGI) at Lawrence Berkeley National Laboratory, and Stephen Callister, scientist with the Environmental Molecular Sciences Laboratory (EMSL), a U.S. 
DOE national scientific user facility managed by Pacific Northwest National Laboratory.</em><strong>&nbsp;</strong>&nbsp;&nbsp;</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1641914136</created>  <gmt_created>2022-01-11 15:15:36</gmt_created>  <changed>1641914784</changed>  <gmt_changed>2022-01-11 15:26:24</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech is partnering with two Department of Energy (DOE) national laboratories to better understand how wetlands function, enabling scientists to better understand their role in controlling water quality.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech is partnering with two Department of Energy (DOE) national laboratories to better understand how wetlands function, enabling scientists to better understand their role in controlling water quality.]]></sentence>  <summary><![CDATA[<p>Using AI and data modeling, Georgia Tech&nbsp;is researching&nbsp;how wetlands work and hoping to shed light on how they&nbsp;will function with more frequent and more intense rainstorms. 
&nbsp;&nbsp;</p>]]></summary>  <dateline>2021-11-03T00:00:00-04:00</dateline>  <iso_dateline>2021-11-03T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-11-03 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[Part of a $1 million grant, Georgia Tech will analyze wetlands to better predict disruptions that could intensify greenhouse gas releases]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[asargent7@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><strong>Media Relations Contact and Writer: </strong>Anne Wainscott-Sargent (404-435-5784)</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>652394</item>          <item>652395</item>          <item>652398</item>      </media>  <hg_media>          <item>          <nid>652394</nid>          <type>image</type>          <title><![CDATA[Researchers by campus wetlands]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Photo 1 - Researchers.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Photo%201%20-%20Researchers.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Photo%201%20-%20Researchers.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Photo%25201%2520-%2520Researchers.png?itok=hbO3hT88]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1635941730</created>          <gmt_created>2021-11-03 12:15:30</gmt_created>          <changed>1635941730</changed>          <gmt_changed>2021-11-03 12:15:30</gmt_changed>      </item>          <item>          <nid>652395</nid>          <type>image</type>          <title><![CDATA[SRNL environments]]></title>          <body><![CDATA[]]></body>                      
<image_name><![CDATA[Photo 2 - wetlands in the Savannah River Nat&#039;l Laboratory.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Photo%202%20-%20wetlands%20in%20the%20Savannah%20River%20Nat%27l%20Laboratory.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Photo%202%20-%20wetlands%20in%20the%20Savannah%20River%20Nat%27l%20Laboratory.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Photo%25202%2520-%2520wetlands%2520in%2520the%2520Savannah%2520River%2520Nat%2527l%2520Laboratory.jpg?itok=RIMbpvRP]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1635941852</created>          <gmt_created>2021-11-03 12:17:32</gmt_created>          <changed>1635941852</changed>          <gmt_changed>2021-11-03 12:17:32</gmt_changed>      </item>          <item>          <nid>652398</nid>          <type>image</type>          <title><![CDATA[Sediment sample]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Photo 3 - Martial with sediment sample.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Photo%203%20-%20Martial%20with%20sediment%20sample.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Photo%203%20-%20Martial%20with%20sediment%20sample.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Photo%25203%2520-%2520Martial%2520with%2520sediment%2520sample.jpg?itok=pNUStMto]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1635947683</created>          <gmt_created>2021-11-03 13:54:43</gmt_created>          
<changed>1635947683</changed>          <gmt_changed>2021-11-03 13:54:43</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>          <category tid="154"><![CDATA[Environment]]></category>      </categories>  <news_terms>          <term tid="154"><![CDATA[Environment]]></term>      </news_terms>  <keywords>          <keyword tid="179077"><![CDATA[wetlands]]></keyword>          <keyword tid="831"><![CDATA[climate change]]></keyword>          <keyword tid="105821"><![CDATA[extreme weather]]></keyword>          <keyword tid="189257"><![CDATA[climate model]]></keyword>          <keyword tid="189258"><![CDATA[sediments]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="166882"><![CDATA[School of Biological Sciences]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>          <topic tid="71911"><![CDATA[Earth and Environment]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="651571">  <title><![CDATA[Pandarinath Wins NIH New Innovator Award for AI-Powered Brain-Machine Interfaces]]></title>  <uid>27592</uid>  <body><![CDATA[<p>The seemingly simple act of reaching for a cup isn&rsquo;t really simple at all. In truth, our brains issue hundreds, maybe thousands of instructions to make that happen: positioning your body just right, maybe leaning forward a bit, actually lifting your arm and reaching out, grasping the cup with your fingers, and a whole host of tiny movements and adjustments along the way.</p><p>Scientists can record all of the neural activity related to the movement, but it&rsquo;s complicated and messy. &ldquo;Seemingly random and noisy,&rdquo; is the way <a href="/bme/faculty/Chethan-Pandarinath">Chethan Pandarinath</a> describes it. 
So how do you pick out the signal from the noise, to identify the activity that controls all those movements and says to the body, &ldquo;pick up the cup&rdquo;?</p><p>Pandarinath thinks he has a way, using new artificial intelligence tools and our growing understanding of neuroscience at a system-wide level. It&rsquo;s an approach <a href="http://snel.gatech.edu/">he&rsquo;s been building toward for years</a>, and he&rsquo;s after a goal that could be nothing short of revolutionary: creating brain-machine interfaces that can decode in just milliseconds, and with unprecedented accuracy, what the brain is telling the body to do. The hope is to reconnect the brain and the body for patients who are paralyzed from strokes, spinal cord injuries, or ALS &mdash; amyotrophic lateral sclerosis, or Lou Gehrig&rsquo;s disease.</p><p>The National Institutes of Health has recognized the exceptional creativity of Pandarinath&rsquo;s approach &mdash; and its transformative potential &mdash; with a <a href="https://reporter.nih.gov/project-details/10246037">2021 Director&rsquo;s New Innovator Award</a>, the agency&rsquo;s most prestigious program for early-career researchers.</p><p>&ldquo;What NIH is looking for in this mechanism is ideas that they think are transformative &mdash; it&#39;s a little bit hard to predict how it will go, but the idea has the potential to really change an entire field,&rdquo; said Pandarinath, assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Emory University and Georgia Tech. &ldquo;It&rsquo;s wonderful recognition that they think my proposal is significant enough.&rdquo;</p><p>Part of the NIH&rsquo;s <a href="https://commonfund.nih.gov/newinnovator">High-Risk, High-Reward Research program</a>, Pandarinath&rsquo;s $2.4 million grant will support his team&rsquo;s launch of a clinical trial this fall, implanting sensors into the brains of ALS patients. 
He&rsquo;ll work closely with Emory neurosurgeons <a href="https://www.med.emory.edu/directory/profile/?u=NAUYONG">Nicholas Au Yong</a> and <a href="https://www.med.emory.edu/directory/profile/?u=RGROSS">Robert Gross</a> and neurologist <a href="https://www.med.emory.edu/directory/profile/?u=JGLAS03">Jonathan Glass</a>, who&rsquo;s also director of the <a href="https://med.emory.edu/departments/neurology/programs_centers/emory_als_center/index.html">Emory ALS Center</a>.</p><p>&ldquo;To move this toward a clinical trial, that really is a collaboration between BME and Neurosurgery and Neurology. That&#39;s pretty exciting. That&#39;s the only way we can make clinical impact,&rdquo; said Pandarinath, who also is a faculty member in the Emory Neuromodulation Technology Innovation Center, or ENTICe.</p><p>&ldquo;It is exciting to see this project coming together as a result of the ingenuity and efforts of this extraordinarily talented team of engineers and clinician-scientists,&rdquo; said Gross, the MBNA Bowman Chair in Neurosurgery and founder and director of ENTICe. &ldquo;It moves us closer toward our goal, in partnership with Georgia Tech, to improve the lives of patients disabled by ALS and other severe neurological disorders with groundbreaking innovations and discovery.&rdquo;</p><h3>Decoding the Brain in Real Time</h3><p>Ideally, the AI-powered brain-machine interfaces Pandarinath proposes would work almost &ldquo;out of the box&rdquo; for any patient, without significant calibration. It&rsquo;s a lofty target, considering the challenges. 
The interface has to:</p><ul><li>Identify the electrical activity that corresponds with specific movements, and do it in real time (patients can&rsquo;t be repeatedly thinking about a movement for a full minute to try to activate the interface).</li><li>Account for the day-to-day changes in data we record within one person&rsquo;s brain.</li><li>Work despite the slight differences between brains.</li></ul><p>So, how will Pandarinath&rsquo;s team tackle the seeming mountain of challenges ahead of them? It hinges on a concept in machine learning called &ldquo;unsupervised&rdquo; or &ldquo;self-supervised&rdquo; learning. Rather than starting with a movement and trying to map it to specific brain activity, Pandarinath&rsquo;s algorithms start with the brain data.</p><p>&ldquo;We don&#39;t worry about what the person was trying to do. If we just focus on that, we&#39;re going to miss a lot of the structure of the activity. If we can just understand the data better first, without biasing it by what we think the pattern meant, it ends up leading to better what we call &lsquo;decoding,&rsquo;&rdquo; he said.</p><p>These artificial intelligence tools have been reshaping other fields &mdash; for example, computer vision for autonomous vehicles, where AI has to understand the surrounding environment, or teaching computers to play chess or complicated video games. Pandarinath has been working to apply unsupervised learning techniques to neuroscience and uncover what the brain is doing.</p><p>&ldquo;That&#39;s not something that other people have done before &mdash; at least not like what we&#39;re doing. That&#39;s kind of our secret sauce,&rdquo; he said. &ldquo;We know these tools are changing the game in so many other AI applications. We&#39;re showing how they can apply in brain-machine interfaces and impact people&#39;s health.&rdquo;</p><p>The key is a pair of discoveries that suggested there are identifiable brain patterns related to movement. 
First, other researchers demonstrated that brain activity related to movement in monkeys is consistent across individuals. Then Pandarinath showed the patterns are likewise similar in humans, even in people who are paralyzed. He said these fundamental signatures seem to apply across different types of movement, too.</p><p>&ldquo;We&#39;re saying, there are some things that are consistent across brains &mdash; we&#39;re even seeing it across species,&rdquo; he said. &ldquo;That&rsquo;s why we think this technology can really have broad application, because we&#39;re able to tap into this underlying consistency.&rdquo;</p><p>The team will start with trying to restore patients&rsquo; ability to communicate, Pandarinath said, like interacting with a computer: &ldquo;Computers are such a huge part of our everyday lives now that if you&#39;re paralyzed and can&#39;t surf the web, or type emails, that&#39;s a pretty big deal. You&#39;ve lost a lot of what we do.&rdquo;</p><p>When the team moves into clinical trials, Pandarinath&rsquo;s AI tools will be paired with existing implantable brain sensors to test how well they work for patients. The implants themselves are the kind of devices already used for deep brain stimulation for Parkinson&rsquo;s patients, for example. The technology the team is developing is independent of the sensor &mdash; it&rsquo;s all about making the best use of the data recorded in the brain. As better sensors come along (<a href="https://www.cnbc.com/2021/07/30/elon-musks-neuralink-backed-by-google-ventures-peter-thiel-sam-altman.html">and people like Elon Musk are working on exactly that</a>), Pandarinath said his tools will work even better.</p><p>&ldquo;Our focus is the underlying approach, these AI algorithms and what can they do for us. The application of communication, we think, is just one way to demonstrate the technology,&rdquo; Pandarinath said. 
&ldquo;We think the same concepts can be useful if you&#39;re controlling a robotic arm, reaching out and grabbing that coffee cup, or maybe driving implanted muscle stimulators to help somebody with spinal cord injury to control their own arm. The application of the technology is the same: figuring out what the brain is doing and what does the person want to do?&rdquo;</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1633961069</created>  <gmt_created>2021-10-11 14:04:29</gmt_created>  <changed>1633961265</changed>  <gmt_changed>2021-10-11 14:07:45</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Artificial intelligence could be the key to faster, universal interfaces for paralyzed patients]]></teaser>  <type>news</type>  <sentence><![CDATA[Artificial intelligence could be the key to faster, universal interfaces for paralyzed patients]]></sentence>  <summary><![CDATA[<p>Artificial intelligence could be the key to faster, universal interfaces for paralyzed patients</p>]]></summary>  <dateline>2021-10-05T00:00:00-04:00</dateline>  <iso_dateline>2021-10-05T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-10-05 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jstewart@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jstewart@gatech.edu">Joshua Stewart</a></p><p>Communications</p><p>Wallace H. 
Coulter Department of Biomedical Engineering</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>651380</item>          <item>651381</item>      </media>  <hg_media>          <item>          <nid>651380</nid>          <type>image</type>          <title><![CDATA[Chethan Pandarinath in lab]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Pandarinath-Chethan-Lab-at-Computer-by-Jack-Kearse-h.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Pandarinath-Chethan-Lab-at-Computer-by-Jack-Kearse-h.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Pandarinath-Chethan-Lab-at-Computer-by-Jack-Kearse-h.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Pandarinath-Chethan-Lab-at-Computer-by-Jack-Kearse-h.jpg?itok=lgpMNYcM]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Chethan Pandarinath has won a Director's New Innovator Award from the National Institutes of Health to use artificial intelligence tools to create brain-machine interfaces that function with unprecedented speed and accuracy, decoding in real-time what the brain is telling the body to do. The aim is to reconnect the brain and body for patients paralyzed from strokes, spinal cord injuries, or ALS — amyotrophic lateral sclerosis, or Lou Gehrig’s disease. 
(Photo: Jack Kearse)]]></image_alt>                    <created>1633440341</created>          <gmt_created>2021-10-05 13:25:41</gmt_created>          <changed>1633440341</changed>          <gmt_changed>2021-10-05 13:25:41</gmt_changed>      </item>          <item>          <nid>651381</nid>          <type>image</type>          <title><![CDATA[Robert Gross, Chethan Pandarinath, Jonathan Glass & Nicholas Au Yong]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Gross,Robert-Glass,Jonathan-Pandarinath,Chethan-AuYong,Nicholas-by-Jack-Kearse-v.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Gross%2CRobert-Glass%2CJonathan-Pandarinath%2CChethan-AuYong%2CNicholas-by-Jack-Kearse-v.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Gross%2CRobert-Glass%2CJonathan-Pandarinath%2CChethan-AuYong%2CNicholas-by-Jack-Kearse-v.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Gross%252CRobert-Glass%252CJonathan-Pandarinath%252CChethan-AuYong%252CNicholas-by-Jack-Kearse-v.jpg?itok=CsYDGhWr]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[This group of clinicians and scientists will collaborate with Chethan Pandarinath, front right, on a $2.4 million, five-year project using artificial intelligence tools to revolutionize the speed and accuracy of brain-machine interfaces. The team includes Robert Gross, front left, a functional neurosurgeon; Jonathan Glass, back left, a neurologist and director of Emory's ALS Center; and Nicholas Au Yong, also a functional neurosurgeon. 
(Photo: Jack Kearse)]]></image_alt>                    <created>1633440496</created>          <gmt_created>2021-10-05 13:28:16</gmt_created>          <changed>1633440496</changed>          <gmt_changed>2021-10-05 13:28:16</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://reporter.nih.gov/project-details/10246037]]></url>        <title><![CDATA[Project Details: Fusing motor neuroscience and artificial intelligence to create  next-generation neural prostheses]]></title>      </link>          <link>        <url><![CDATA[https://commonfund.nih.gov/newinnovator]]></url>        <title><![CDATA[NIH Director's New Innovator Award]]></title>      </link>          <link>        <url><![CDATA[https://snel.gatech.edu/]]></url>        <title><![CDATA[Pandarinath's Systems Neural Engineering Lab]]></title>      </link>          <link>        <url><![CDATA[https://med.emory.edu/departments/neurology/programs_centers/emory_als_center/index.html]]></url>        <title><![CDATA[Emory ALS Center]]></title>      </link>          <link>        <url><![CDATA[https://www.emoryhealthcare.org/centers-programs/brain-health-center/index.html]]></url>        <title><![CDATA[Emory Brain Health Center]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="176083"><![CDATA[brain machine interfaces]]></keyword>          <keyword tid="5443"><![CDATA[Neuroengineering]]></keyword>          <keyword tid="188989"><![CDATA[Chethan Pandarinath]]></keyword>          <keyword tid="2270"><![CDATA[National Institutes of Health]]></keyword>          <keyword tid="10833"><![CDATA[NIH Director&#039;s New Innovator Award]]></keyword>          <keyword tid="170979"><![CDATA[spinal cord 
injuries]]></keyword>          <keyword tid="170899"><![CDATA[spinal cord injury]]></keyword>          <keyword tid="121361"><![CDATA[amyotrophic lateral sclerosis]]></keyword>          <keyword tid="187423"><![CDATA[go-bio]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="649636">  <title><![CDATA[Associate Professor Elected SIGCHI President]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing joint Associate Professor <strong>Neha Kumar</strong> was elected president of the <a href="https://sigchi.org/">Special Interest Group on Computer-Human Interaction</a> (SIGCHI) for 2021-22. She will serve a three-year term for the group, which is the premier international society for professionals and academics interested in human-computer interaction.</p><p>SIGCHI sponsors numerous conferences, publications, web sites, and other services that advance HCI through workshops and outreach. <a href="https://medium.com/sigchi/thank-you-sigchi-dae601d883bb">In a blog post for SIGCHI</a>, Kumar said that she and the other incoming executive committee members aim to continue the long history of advancing the group&rsquo;s key missions.</p><p>&ldquo;We hope to continue to expand the excellent work that our many colleagues in this (executive committee) have done, with their commitment (among other things) to accessibility, equity and inclusion, to the safety of our community, global community building, and a #SIGCHI4ALL,&rdquo; she wrote. 
&ldquo;Together the six of us represent a wide range of perspectives; our hope is that this representation will ensure that we remain answerable to our entire global membership as we work towards supporting and fostering participation and growth locally and globally.&rdquo;</p><p>Kumar&rsquo;s research at Georgia Tech lies at the intersection of human-centered computing and global development. She has produced research that improves technology design for historically underserved communities. Her <a href="http://www.tandem.gatech.edu/">TanDEm Lab</a> &ndash; short for Technology and Design towards &lsquo;Empowerment&rsquo; &ndash; has focused on health and wellbeing on the margins, centering topics such as gender, stigma, and knowledge production.</p><p>Kumar has received other honors, such as the National Science Foundation&rsquo;s CAREER Award, and also chairs the <a href="https://www.acm.org/fca#:~:text=The%20ACM%20Future%20of%20Computing,next%20generation%20of%20computing%20professionals.&amp;text=The%20ACM%20FCA%20aspires%20to,of%20computing%20into%20the%20future.">Association for Computing Machinery&rsquo;s Future of Computing Academy</a>.</p><p>Georgia Tech Ph.D. 
graduate <strong>Tamara Clegg</strong> is also on the SIGCHI executive committee, serving as the vice president of membership and communication.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1628787031</created>  <gmt_created>2021-08-12 16:50:31</gmt_created>  <changed>1628787031</changed>  <gmt_changed>2021-08-12 16:50:31</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Neha Kumar will serve a three-year term for the group, which is the premier international society for professionals and academics interested in human-computer interaction.]]></teaser>  <type>news</type>  <sentence><![CDATA[Neha Kumar will serve a three-year term for the group, which is the premier international society for professionals and academics interested in human-computer interaction.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-08-12T00:00:00-04:00</dateline>  <iso_dateline>2021-08-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-08-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>507851</item>      </media>  <hg_media>          <item>          <nid>507851</nid>          <type>image</type>          <title><![CDATA[Neha Kumar]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[neha.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/neha_0.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/neha_0.jpeg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/neha_0.jpeg?itok=7IV4SSE7]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Neha Kumar]]></image_alt>                    <created>1457114400</created>          <gmt_created>2016-03-04 18:00:00</gmt_created>          <changed>1475895270</changed>          <gmt_changed>2016-10-08 02:54:30</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="649635">  <title><![CDATA[Assistant Professor Named 2021 Microsoft Research Faculty Fellow]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Assistant Professor <strong>Diyi Yang</strong> was named one of five <a href="https://www.microsoft.com/en-us/research/academic-program/faculty-fellowship/#!fellows">2021 Microsoft Research Faculty Fellows</a> earlier this summer. 
The two-year fellowship recognizes innovative and promising early-career professors in the Americas who are exploring breakthrough research in computer science or a related field.</p><p>Yang was recognized for her work leading the <a href="https://www.cc.gatech.edu/~dyang888/group.html">Social and Language Technologies Lab</a>, which concentrates on research across natural language processing, machine learning, and computational social science. Yang&rsquo;s research works to understand social aspects of language and build responsible NLP systems with social intelligence.</p><p>&ldquo;We live in an era where many aspects of our daily activities are recorded as textual data,&rdquo; Yang said in her proposal to Microsoft Research. &ldquo;Over the last few decades, NLP has dramatically improved performance and produced industrial applications like personal assistants. Despite being sufficient to enable these applications, current NLP systems largely ignore the social part of language.&rdquo;</p><p>This omission limits what the systems can do, Yang said. Her research examines what is said, who says it, in what context, and for what goals, in hopes of developing systems that facilitate human-human and human-machine communication. So far, her team has produced projects on mitigating bias in text, detecting mental health issues, improving support in online support groups, and more.</p><p>According to Microsoft Research&rsquo;s website, Yang is the first Georgia Tech faculty member to be named a Microsoft Research Faculty Fellow since 2011 and only the third overall. 
Yang has earned a number of other awards and recognitions, such as Forbes 30 Under 30 in Science and IEEE AI 10 to Watch.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1628786655</created>  <gmt_created>2021-08-12 16:44:15</gmt_created>  <changed>1628786655</changed>  <gmt_changed>2021-08-12 16:44:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The two-year fellowship recognizes innovative and promising early-career professors in the Americas who are exploring breakthrough research in computer science or a related field.]]></teaser>  <type>news</type>  <sentence><![CDATA[The two-year fellowship recognizes innovative and promising early-career professors in the Americas who are exploring breakthrough research in computer science or a related field.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-08-12T00:00:00-04:00</dateline>  <iso_dateline>2021-08-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-08-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>630588</item>      </media>  <hg_media>          <item>          <nid>630588</nid>          <type>image</type>          <title><![CDATA[Diyi Yang 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Diyi_Yang.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Diyi_Yang.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Diyi_Yang.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Diyi_Yang.jpg?itok=6jzP0Yjh]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1578338255</created>          <gmt_created>2020-01-06 19:17:35</gmt_created>          <changed>1578338255</changed>          <gmt_changed>2020-01-06 19:17:35</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="649137">  <title><![CDATA[Georgia Tech Will Help Bring Critical Advancements to Online Learning as Part of Multimillion Dollar NSF Grant]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech is a major partner in a new <a href="https://www.nsf.gov/">National Science Foundation</a> (NSF) <a href="https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=505686">Artificial Intelligence Research Institute</a> focused on adult learning in online education, it was announced today. 
Led by the Georgia Research Alliance, the National AI Institute for Adult Learning in Online Education (ALOE) is one of 11 new NSF institutes created as part of an investment totaling $220 million.</p><p>The ALOE Institute will develop new AI theories and techniques for enhancing the quality of online education for lifelong learning and workforce development. According to some projections, about 100 million American workers will need to be reskilled or upskilled over the next decade. With the increase of AI and automation, said Co-Principal Investigator and Georgia Tech lead Professor <strong>Ashok Goel</strong>, many jobs will be redefined.</p><p>&ldquo;There will be some loss of jobs, but mostly we will see individuals needing to learn a new skill to get a new job or to advance their career,&rdquo; said Goel, a professor of computer science and human-centered computing in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu">School of Interactive Computing</a> (IC) and the chief scientist with the <a href="https://c21u.gatech.edu/">Center for 21<sup>st</sup> Century Universities</a> (C21U). &ldquo;So, how do you help 100 million workers reskill or upskill in 10 years? Because AI is in part responsible for this need, it is our belief it should also be responsible for finding a solution.&rdquo;</p><p>That is the goal of this project, which will be led by principal investigator <strong>Myk Garn</strong>, assistant vice chancellor for New Models of Learning at the University System of Georgia and senior advisor to the <a href="https://gra.org/">Georgia Research Alliance</a> (GRA).</p><p>&ldquo;Online education for adults has enormous implications for tomorrow&rsquo;s workforce,&rdquo; Garn said. &ldquo;Yet, serious questions remain about the quality of online learning and how best to teach adults online. 
Artificial intelligence offers a powerful technology for dramatically improving the quality of online learning and adult education.&rdquo;</p><p>To do that successfully, the education must be personalized and scaled to unprecedented levels. Educating 100 million people in online environments will, of course, require far more time and energy than in-person educators can offer their students. That is where AI comes into play.</p><p>Researchers will build new AI techniques that can adequately and efficiently train <em>other</em> AI agents to interact with humans in a classroom setting, similar to the virtual teaching assistant Jill Watson that Goel has used in his online computer science classes for the past five years. This will help satisfy the scalability requirement.</p><p>&ldquo;That&rsquo;s the fundamental advancement in AI,&rdquo; Goel said. &ldquo;A human can train an AI agent in just a few hours how to teach other AI agents on how to interact with humans on various subjects.&rdquo;</p><p>To satisfy the need for personalized AI, researchers will train machines to have a mutual theory of mind with their human counterparts. In other words, there will be a greater understanding by both machine and human of the others&rsquo; needs, knowledge, and expectations.</p><p>&ldquo;Our vision is to develop AI agents that achieve a mutual understanding of learning expectations, outcomes, and methods between students and teachers,&rdquo; said Alex Endert, an assistant professor in Georgia Tech&rsquo;s <a href="http://cc.gatech.edu">College of Computing</a> who will help the team analyze and understand data from the project. &ldquo;Along with my students, I look forward to developing visual analytic interfaces that serve that purpose to foster trust and interpretability of AI for this domain.&rdquo;</p><p>Ultimately, the hope is that education becomes more available, affordable, achievable, and, thereby, equitable. 
Such an expansive project, understandably, requires many kinds of expertise from many people. In addition to Endert and Goel, who will be executive director of the ALOE Institute, a host of Georgia Tech faculty will participate.</p><p>Senior Georgia Tech members of the ALOE team include <strong>Stephen Harmon</strong> (Industrial Design and C21U), <strong>Michael Hoffmann</strong> (Public Policy), <strong>David Joyner</strong> (Online Master of Science in Computer Science), <strong>Ruth Kanfer</strong> (Psychology), <strong>Brian Magerko</strong> (Language, Media, and Culture), <strong>Keith McGreggor</strong> (IC and VentureLab), <strong>Chaohua Ou</strong> (Center for Teaching and Learning), and <strong>Spencer Rugaber</strong> (Computer Science).</p><p>Other partners in the ALOE Institute include Arizona State University, Drexel University, Georgia State University, Harvard University, the Technical College System of Georgia, the University of North Carolina at Greensboro, IMS Global, Boeing, IBM, and Wiley.</p><p><a href="https://research.gatech.edu/georgia-tech-joins-us-national-science-foundation-advance-ai-research-and-education">Georgia Tech is a key partner in two additional institutes</a> in partnership with the U.S. Department of Agriculture, the National Institute of Food and Agriculture, the U.S. Department of Homeland Security Science &amp; Technology Directorate, and the U.S. Department of Transportation Federal Highway Administration. 
Georgia Tech will lead the AI Institute for Advances in Optimization (AI4Opt) and the AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), the latter of which is led by College of Computing Associate Professor Sonia Chernova and focuses on aging-related issues.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1627572498</created>  <gmt_created>2021-07-29 15:28:18</gmt_created>  <changed>1627572498</changed>  <gmt_changed>2021-07-29 15:28:18</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Led by the Georgia Research Alliance, the National AI Institute for Adult Learning in Online Education (ALOE) is one of 11 new NSF institutes created as part of an investment totaling $220 million.]]></teaser>  <type>news</type>  <sentence><![CDATA[Led by the Georgia Research Alliance, the National AI Institute for Adult Learning in Online Education (ALOE) is one of 11 new NSF institutes created as part of an investment totaling $220 million.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-07-29T00:00:00-04:00</dateline>  <iso_dateline>2021-07-29T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-07-29 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>611004</item>      </media>  <hg_media>          <item>          <nid>611004</nid>          <type>image</type>          <title><![CDATA[Online learning stock]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[online learning.jpg]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/online%20learning.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/online%20learning.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/online%2520learning.jpg?itok=iJjWc-lh]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Fingers typing on a laptop keyboard]]></image_alt>                    <created>1536259875</created>          <gmt_created>2018-09-06 18:51:15</gmt_created>          <changed>1536259875</changed>          <gmt_changed>2018-09-06 18:51:15</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://research.gatech.edu/georgia-tech-joins-us-national-science-foundation-advance-ai-research-and-education]]></url>        <title><![CDATA[Georgia Tech Joins the U.S. National Science Foundation to Advance AI Research and Education]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="648905">  <title><![CDATA[Georgia Tech Top Contributor to Research at International Conference on Machine Learning]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech researchers in the College of 
Engineering and College of Computing are presenting their work at the International Conference on Machine Learning (ICML), which runs through Saturday.</p><p>ICML is the leading international academic conference in machine learning. Along with NeurIPS and ICLR, it is one of the three primary high-impact conferences in machine learning and artificial intelligence research. It is supported by the International Machine Learning Society (IMLS).</p><p>Explore Georgia Tech people, research abstracts, and when authors will present (Tues-Thurs) in an interactive data graphic of <a href="https://public.tableau.com/views/GeorgiaTechatICML2021/Dashboard1?:language=en-US&amp;:display_count=n&amp;:origin=viz_share_link"><strong>Georgia Tech at ICML 2021</strong></a>. Also explore the whole program in a second data graphic: <a href="https://public.tableau.com/views/ICML2021/Dashboard12?:showVizHome=no"><strong>Who&rsquo;s Who at ICML 2021</strong></a>.</p><p>Georgia Tech&rsquo;s work is represented in 2% of the program with 22 papers in a range of topics including (asterisk denotes a single paper):</p><ul><li>Applications (CV and NLP)*</li><li>Applications (NLP)*</li><li>Deep Learning Algorithms*</li><li>Deep Learning Theory*</li><li>Deep Reinforcement Learning*</li><li>Learning Theory &ndash; 2 papers</li><li>Optimal Transport &ndash; 2 papers</li><li>Optimization (Convex)*</li><li>Optimization and Algorithms &ndash; 2 papers</li><li>Privacy*</li><li>Reinforcement Learning &ndash; 2 papers</li><li>Reinforcement Learning and Optimization*</li><li>Reinforcement Learning and Planning*</li><li>Reinforcement Learning Theory*</li><li>Time Series &ndash; 4 papers</li></ul>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1626787202</created>  <gmt_created>2021-07-20 13:20:02</gmt_created>  <changed>1626843640</changed>  <gmt_changed>2021-07-21 05:00:40</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers in the 
College of Engineering and College of Computing are presenting their work at the International Conference on Machine Learning (ICML), which runs through Saturday.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers in the College of Engineering and College of Computing are presenting their work at the International Conference on Machine Learning (ICML), which runs through Saturday.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-07-20T00:00:00-04:00</dateline>  <iso_dateline>2021-07-20T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-07-20 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Josh Preston</p><p><a href="mailto:jpreston@cc.gatech.edu">jpreston@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>648904</item>      </media>  <hg_media>          <item>          <nid>648904</nid>          <type>image</type>          <title><![CDATA[ICML 2021]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ICML2021.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ICML2021.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ICML2021.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ICML2021.jpeg?itok=yYBmKmCm]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1626787175</created>          <gmt_created>2021-07-20 13:19:35</gmt_created>          <changed>1626787175</changed>          <gmt_changed>2021-07-20 13:19:35</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>         
 <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="648864">  <title><![CDATA[Georgia Tech Faculty Hold Workshop to Improve Integration of Ethics into Courses]]></title>  <uid>33939</uid>  <body><![CDATA[<p>As computer science becomes more ingrained in various areas of study and, indeed, our daily lives, an eye on the implications of innovation is needed, experts at Georgia Tech say.</p><p>To help students begin thinking about ethics with regard to research, faculty at Georgia Tech &ndash; in conjunction with Mozilla &ndash; held the first workshop on integrating ethics and responsible computing into courses this summer.</p><p>The workshop was a collaboration between faculty researchers at Georgia Tech in both the Ethics, Technology, and Human Interaction Center (ETHICx) and Computing and Society, as well as Mozilla. 
The workshop received a strong response, which organizers say indicates a growing desire to put ethics at the center of computer science courses.</p><p>Members of the College of Computing&rsquo;s Division of Computing Instruction, the Schools of Interactive Computing, Computational Science and Engineering, Computer Science, and Electrical and Computer Engineering, along with attendees from Georgia State, all participated in the online workshop.</p><p>&ldquo;It&rsquo;s really gratifying to have broad representation because it demonstrates the desire for people from so many different areas to think more deeply about the role of ethics in our education,&rdquo; said <strong>Ellen Zegura</strong>, professor in the School of Computer Science and Fleming Chair in Telecommunications.</p><p>The goal of the workshop was to help instructors consider ways to implement ethics as a central piece of courses from the very beginning, not just later in a student&rsquo;s study. There is an urgency, Zegura said, that needs to be considered.</p><p>&ldquo;Computing has reached a point where it is being used for critical decision making that really affects people&rsquo;s lives,&rdquo; she said. &ldquo;The need to use computing responsibly has moved up incredibly. And if we don&rsquo;t talk about ethics early in the curriculum, we&rsquo;re sending a message that it&rsquo;s not important. If you only hear about it in one course and it&rsquo;s later in your career, then what does that say about the importance? 
Students see that.&rdquo;</p><p>While official plans aren&rsquo;t currently in place to continue the program, Zegura said the idea is to continue this as a series of activities that are responsive to what people&rsquo;s needs are, specifically those who want to do a better job of embedding ethics into their computer science curriculum.</p><p>Georgia Tech graduate <strong>Kathy Pham (CS &rsquo;07, MS CS &rsquo;09)</strong>, now at Mozilla, has been instrumental in engaging the computer science community from 15-20 universities on focusing on ethics, Zegura said.</p><p><a href="https://www.youtube.com/playlist?list=PLF0CYxpffvKx5W-y_xJ9xhrGapmeF70Og">Portions of the workshop can be viewed on YouTube here.</a></p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1626700580</created>  <gmt_created>2021-07-19 13:16:20</gmt_created>  <changed>1626700580</changed>  <gmt_changed>2021-07-19 13:16:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[To help students begin thinking about ethics with regards to research, faculty at Georgia Tech – in conjunction with Mozilla – held the first workshop on integrating ethics and responsible computing into courses this summer.]]></teaser>  <type>news</type>  <sentence><![CDATA[To help students begin thinking about ethics with regards to research, faculty at Georgia Tech – in conjunction with Mozilla – held the first workshop on integrating ethics and responsible computing into courses this summer.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-07-19T00:00:00-04:00</dateline>  <iso_dateline>2021-07-19T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-07-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  
<boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>644759</item>      </media>  <hg_media>          <item>          <nid>644759</nid>          <type>image</type>          <title><![CDATA[Ethics stock image]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[AdobeStock_117212757.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/AdobeStock_117212757.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/AdobeStock_117212757.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/AdobeStock_117212757.jpeg?itok=N567OjVZ]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1614365518</created>          <gmt_created>2021-02-26 18:51:58</gmt_created>          <changed>1614365518</changed>          <gmt_changed>2021-02-26 18:51:58</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  
<userdata><![CDATA[]]></userdata></node><node id="647781">  <title><![CDATA[Covid Seed Grant Yields Data Mining Discoveries]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1622127140</created>  <gmt_created>2021-05-27 14:52:20</gmt_created>  <changed>1622127140</changed>  <gmt_changed>2021-05-27 14:52:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[School of Biomedical Engineering]]></publication>  <article_dateline>2021-05-27T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-05-27T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-05-27T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bme.gatech.edu/bme/covid-seed-grant-yields-data-mining-discoveries]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="647766">  <title><![CDATA[Why we need to build a virus early warning system]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1622123420</created>  <gmt_created>2021-05-27 13:50:20</gmt_created>  <changed>1622123420</changed>  <gmt_changed>2021-05-27 13:50:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[implicit bias training]]></publication>  <article_dateline>2021-05-27T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-05-27T00:00:00-04:00</iso_article_dateline>  
<gmt_article_dateline>2021-05-27T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://thehill.com/opinion/healthcare/552856-why-we-need-to-build-a-virus-early-warning-system?rnd=1620748166&amp;rl=1]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="647689">  <title><![CDATA[Healthcare Business #11: AI, Machine learning and healthcare diagnosis]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1621867749</created>  <gmt_created>2021-05-24 14:49:09</gmt_created>  <changed>1621867973</changed>  <gmt_changed>2021-05-24 14:52:53</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[American Society for Biochemistry and Molecular Biology ASBMB]]></publication>  <article_dateline>2021-05-24T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-05-24T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-05-24T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://imperialbizpodcast.podbean.com/e/healthcare-business-11-ai-machine-learning-and-healthcare-diagnosis/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="647594">  <title><![CDATA[Taming AI’s Can/Should Problem]]></title>  
<uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1621431710</created>  <gmt_created>2021-05-19 13:41:50</gmt_created>  <changed>1621431710</changed>  <gmt_changed>2021-05-19 13:41:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[armed conflict]]></publication>  <article_dateline>2021-05-18T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-05-18T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-05-18T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://sloanreview.mit.edu/article/taming-ais-can-should-problem/?og=Home+Editors+Picks]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="647283">  <title><![CDATA[The Machine Learning Center Awards Inaugural ML@GT Fellows]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1620655370</created>  <gmt_created>2021-05-10 14:02:50</gmt_created>  <changed>1620655394</changed>  <gmt_changed>2021-05-10 14:03:14</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2021-05-10T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-05-10T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-05-10T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3faFV6y]]></article_url>  <media>      </media>  <hg_media>      
</hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="647256">  <title><![CDATA[The Machine Learning Center Celebrates Its First Class of Graduates]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1620394411</created>  <gmt_created>2021-05-07 13:33:31</gmt_created>  <changed>1620394411</changed>  <gmt_changed>2021-05-07 13:33:31</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2021-05-07T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-05-07T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-05-07T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3etT4c6]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      
</news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="647179">  <title><![CDATA[Machine Learning to Present Seven Papers at Prestigious Deep Learning Conference]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1620219968</created>  <gmt_created>2021-05-05 13:06:08</gmt_created>  <changed>1620219968</changed>  <gmt_changed>2021-05-05 13:06:08</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Machine Learning to Present Seven Papers at Prestigious Deep Learning Conference]]></publication>  <article_dateline>2021-05-05T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-05-05T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-05-05T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3b63dtn]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="647140">  <title><![CDATA[Meet ML@GT: Ziyi Wang is Solving Optimization Problems on Complex Issues]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  
<created>1620133874</created>  <gmt_created>2021-05-04 13:11:14</gmt_created>  <changed>1620133874</changed>  <gmt_changed>2021-05-04 13:11:14</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2021-05-04T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-05-04T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-05-04T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3eQPCqH]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="647069">  <title><![CDATA[ML Student Earns Prestigious IBM Ph.D. Fellowship Award]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1619722524</created>  <gmt_created>2021-04-29 18:55:24</gmt_created>  <changed>1619722824</changed>  <gmt_changed>2021-04-29 19:00:24</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[ML Student Earns Prestigious IBM Ph.D. 
Fellowship Award]]></publication>  <article_dateline>2021-04-28T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-28T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-28T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3xEUwzZ]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646917">  <title><![CDATA[Bayesian Learning Applied to Semiconductor Packaging]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1619527042</created>  <gmt_created>2021-04-27 12:37:22</gmt_created>  <changed>1619527042</changed>  <gmt_changed>2021-04-27 12:37:22</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[LA Program]]></publication>  <article_dateline>2021-04-27T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-27T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-27T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://ien.gatech.edu/news/bayesian-learning-applied-semiconductor-packaging]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    
<userdata><![CDATA[]]></userdata></node><node id="646872">  <title><![CDATA[This New Tool Can Track the Environmental Cost of Your Machine Learning Model]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1619444586</created>  <gmt_created>2021-04-26 13:43:06</gmt_created>  <changed>1619444586</changed>  <gmt_changed>2021-04-26 13:43:06</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[This New Tool Can Track the Environmental Cost of Your Machine Learning Model]]></publication>  <article_dateline>2021-04-26T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-26T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-26T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/646832/new-tool-can-track-environmental-cost-your-machine-learning-model]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646767">  <title><![CDATA[Machine Learning Student Earns Electrical Engineering Award for a Second Time]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1619108270</created>  <gmt_created>2021-04-22 16:17:50</gmt_created>  <changed>1619108270</changed>  <gmt_changed>2021-04-22 16:17:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  
<article_dateline>2021-04-22T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-22T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-22T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3tpU1qQ]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646766">  <title><![CDATA[ML Student Awarded Fellowship to Enhance AI in Fintech Transparency]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1619108071</created>  <gmt_created>2021-04-22 16:14:31</gmt_created>  <changed>1619108071</changed>  <gmt_changed>2021-04-22 16:14:31</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[ML Student Awarded Fellowship to Enhance AI in Fintech Transparency]]></publication>  <article_dateline>2021-04-22T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-22T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-22T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/646748/cse-student-awarded-fellowship-enhance-ai-fintech-transparency]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646590">  <title><![CDATA[Machine Learning Student 
Earns Electrical Engineering Award for a Second Time]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1618839175</created>  <gmt_created>2021-04-19 13:32:55</gmt_created>  <changed>1618839175</changed>  <gmt_changed>2021-04-19 13:32:55</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2021-04-19T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-19T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-19T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3tpU1qQ]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646526">  <title><![CDATA[Two Ph.D. Students Recognized as 2021 Apple Scholars – A First for Georgia Tech]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1618579295</created>  <gmt_created>2021-04-16 13:21:35</gmt_created>  <changed>1618579673</changed>  <gmt_changed>2021-04-16 13:27:53</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Two Ph.D. 
Students Recognized as 2021 Apple Scholars – A First for Georgia Tech]]></publication>  <article_dateline>2021-04-16T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-16T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-16T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/32k4icb]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646438">  <title><![CDATA[Researchers Present Eight Papers at Top Machine Learning and Statistics Conference]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1618339624</created>  <gmt_created>2021-04-13 18:47:04</gmt_created>  <changed>1618339624</changed>  <gmt_changed>2021-04-13 18:47:04</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Researchers Present Eight Papers at Top Machine Learning and Statistics Conference]]></publication>  <article_dateline>2021-04-12T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-12T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-12T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3s53PVu]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          
<group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646437">  <title><![CDATA[Nine Machine Learning Faculty Members Win Teaching Awards]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1618339363</created>  <gmt_created>2021-04-13 18:42:43</gmt_created>  <changed>1618339363</changed>  <gmt_changed>2021-04-13 18:42:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Nine Machine Learning Faculty Members Win Teaching Awards]]></publication>  <article_dateline>2021-04-13T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-13T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-13T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3mINXqM]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646206">  <title><![CDATA[Two Machine Learning 
Students Named 2021 NSF Graduate Research Fellows]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1617805279</created>  <gmt_created>2021-04-07 14:21:19</gmt_created>  <changed>1617805279</changed>  <gmt_changed>2021-04-07 14:21:19</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Two Machine Learning Students Named 2021 NSF Graduate Research Fellows]]></publication>  <article_dateline>2021-04-07T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-07T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-07T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3wApJ6u]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646163">  <title><![CDATA[Meet ML@GT: Woody Zhu Uses Algorithms to Help Atlanta Police Departments Better Serve their Communities]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1617713730</created>  <gmt_created>2021-04-06 12:55:30</gmt_created>  <changed>1617713730</changed>  <gmt_changed>2021-04-06 12:55:30</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2021-04-06T00:00:00-04:00</article_dateline>  
<iso_article_dateline>2021-04-06T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-06T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3rPxBgV]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646060">  <title><![CDATA[What Can an Algorithm Actually Know?]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1617387007</created>  <gmt_created>2021-04-02 18:10:07</gmt_created>  <changed>1617387007</changed>  <gmt_changed>2021-04-02 18:10:07</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Infrastructural Membranes]]></publication>  <article_dateline>2021-04-02T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-02T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-02T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.morningbrew.com/emerging-tech/stories/2021/04/02/event-recap-can-algorithm-actually-know]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646033">  <title><![CDATA[Two ML@GT Faculty Recognized as ACM Fellows]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  
<status>1</status>  <created>1617367021</created>  <gmt_created>2021-04-02 12:37:01</gmt_created>  <changed>1617367021</changed>  <gmt_changed>2021-04-02 12:37:01</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Two ML@GT Faculty Recognized as ACM Fellows]]></publication>  <article_dateline>2021-04-02T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-02T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-02T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://cse.gatech.edu/news/646017/two-cse-faculty-recognized-acm-fellows]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="646000">  <title><![CDATA[College Rolls Out Virtual Red Carpet for Annual Awards]]></title>  <uid>32045</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[<p>The College is rolling out the virtual red carpet in April for the <a href="https://bit.ly/2021GTComputingAwards">winners of the 30th Annual College of Computing Awards</a>. Each year, the awards spotlight the dedication and accomplishments of the GT Computing community. 
We&#39;re celebrating this month by announcing a different set of winners &ndash; graduate students, undergraduate students, faculty, and staff &ndash; each Wednesday.&nbsp;&nbsp;</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1617291484</created>  <gmt_created>2021-04-01 15:38:04</gmt_created>  <changed>1617295357</changed>  <gmt_changed>2021-04-01 16:42:37</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[wireless health monitoring]]></publication>  <article_dateline>2021-04-01T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-01T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-01T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2021GTComputingAwards]]></article_url>  <media>          <item><![CDATA[646001]]></item>      </media>  <hg_media>          <item>          <nid>646001</nid>          <type>image</type>          <title><![CDATA[30th annual college of computing awards]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[2021 GTComputingAwardshero.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/2021%20GTComputingAwardshero.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/2021%20GTComputingAwardshero.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/2021%2520GTComputingAwardshero.jpg?itok=Fc1rjhtV]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[30th annual college of computing at georgia tech awards virtual celebration]]></image_alt>                              <created>1617291539</created>          <gmt_created>2021-04-01 15:38:59</gmt_created>          <changed>1617291539</changed>          <gmt_changed>2021-04-01 
15:38:59</gmt_changed>      </item>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="576491"><![CDATA[CRNCH]]></group>          <group id="545781"><![CDATA[Institute for Data Engineering and Science]]></group>          <group id="430601"><![CDATA[Institute for Information Security and Privacy]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="66442"><![CDATA[MS HCI]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>          <keyword tid="187451"><![CDATA[30th annual GT Computing Awards]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="645999">  <title><![CDATA[George Lan Promoted to Professor]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1617290729</created>  <gmt_created>2021-04-01 15:25:29</gmt_created>  <changed>1617290729</changed>  <gmt_changed>2021-04-01 15:25:29</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Robert Louis Stevenson]]></publication>  <article_dateline>2021-04-01T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-04-01T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-04-01T00:00:00-04:00</gmt_article_dateline>  
<article_url><![CDATA[https://www.isye.gatech.edu/news/george-lan-promoted-professor]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="626947">  <title><![CDATA[ML@GT Announces First Ph.D. Fellowship Program]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Beginning in January 2020, the <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a> will be supporting select Ph.D. students through a new <a href="http://ml.gatech.edu/content/ml-fellowship-program">ML@GT Fellows Program.</a> Created with the intent to foster novel Ph.D. research in machine learning (ML) or artificial intelligence (AI), the program expands Georgia Tech&rsquo;s rapidly growing presence in the field.</p><p>&ldquo;ML and AI is an increasingly important field in all aspects of life. As a center that works to train the next generation of leaders in socially and ethically responsible ways, we hope that this program will allow more students who are interested in the field to pursue their education,&rdquo; said ML@GT Director <strong>Irfan Essa. </strong></p><p>The center anticipates selecting four students who will receive roughly half of a graduate research assistant (GRA) appointment. While ML is a collaborative field with many subfields, the focus of projects must be on advancing machine learning and artificial intelligence methods that enable applications rather than creating the applications themselves.</p><p>The program is open to any Georgia Tech Ph.D. student who has a mentor or advisor that is <a href="http://ml.gatech.edu/people">affiliated with the center</a>. 
Preference will also be given to students who are not already supported by a fellowship.</p><p>Applications for the fellowship are due on Tuesday, Oct. 15, 2019, at 12 p.m., with award notices being sent by early January 2020.</p><p>Visit the <a href="https://gatech.infoready4.com/CompetitionSpace/#competitionDetail/1795782">application portal</a> for more information and to apply.</p><h3>About the Machine Learning Center at Georgia Tech</h3><p>The Machine Learning Center at Georgia Tech is an interdisciplinary research center bringing together more than 190 faculty members and 60 machine learning Ph.D. students from across the institute for meaningful collaboration and innovation in machine learning and artificial intelligence. Students and faculty are experts in areas including, but not limited to, computer vision, natural language processing, robotics, deep learning, ethics and fairness, computational finance, information security, and logistics and manufacturing. For more information, visit www.ml.gatech.edu.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1569936700</created>  <gmt_created>2019-10-01 13:31:40</gmt_created>  <changed>1617048017</changed>  <gmt_changed>2021-03-29 20:00:17</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Applications for a Ph.D. fellowship program through the Machine Learning Center are now open through Oct. 15. ]]></teaser>  <type>news</type>  <sentence><![CDATA[Applications for a Ph.D. fellowship program through the Machine Learning Center are now open through Oct. 15. 
]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-10-01T00:00:00-04:00</dateline>  <iso_dateline>2019-10-01T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-10-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[allie.mcfadden@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>626946</item>      </media>  <hg_media>          <item>          <nid>626946</nid>          <type>image</type>          <title><![CDATA[Tech Tower at Georgia Tech]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[13C10000-P14-013 (1).jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/13C10000-P14-013%20%281%29.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/13C10000-P14-013%20%281%29.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/13C10000-P14-013%2520%25281%2529.jpg?itok=Pr9tl3RJ]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1569936680</created>          <gmt_created>2019-10-01 13:31:20</gmt_created>          <changed>1569936680</changed>          <gmt_changed>2019-10-01 13:31:20</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      
</news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="645832">  <title><![CDATA[Assistant Professor Earns 2020 Salesforce AI Research Grant]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Assistant Professor <strong>Diyi Yang</strong> was named a <a href="https://blog.einstein.ai/celebrating-the-winners-of-the-third-annual-salesforce-ai-research-grant/">Salesforce AI Research Grant Winner for 2020</a>. One of seven winners of the award, she will receive a $50,000 grant to advance her work. It is the third year the grant has been provided by Salesforce.</p><p>Yang&rsquo;s research, which is being led by her Ph.D. student <strong>Jiaao Chen</strong>, aims to alleviate the dependence of supervised models on labeled data via data augmentation approaches. Supervised learning is the machine learning task of learning a function that maps an input to an output based on example pairs, inferring the function from training data that has been tagged with identifying properties or characteristics (labeled data).</p><p>The hope is that they may improve the ability to transfer models from one setting to another despite the relative lack of extensive training examples.</p><p>&ldquo;In the era of deep learning, natural language processing (NLP) has achieved extremely good performances in most data-intensive settings,&rdquo; Yang said. &ldquo;However, when there are only one or a few training examples, supervised deep learning models often fail. 
This strong dependence on labeled data largely prevents neural network models from being applied to new settings or real-world situations.&rdquo;</p><p>Yang&rsquo;s group has already published two papers in this area, and she said the Salesforce grant will further support efforts to extend the work to broader contexts, especially when NLP tasks involve complicated outputs.</p><p>&ldquo;These examples might include performing named entity recognition that finds the important information in a text, or semantic parsing that converts a natural language sentence into a structured command,&rdquo; she said.</p><p>You can read previous papers on the subject at the links below:</p><ul><li><a href="https://www.cc.gatech.edu/~dyang888/docs/mixtext_acl_2020.pdf"><em>MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification (Jiaao Chen, Zichao Yang, Diyi Yang)</em></a></li><li><a href="https://arxiv.org/pdf/2010.01677.pdf"><em>Local Additivity Based Data Augmentation for Semi-supervised NER (Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, Diyi Yang)</em></a></li></ul><p>Yang was chosen from more than 180 proposals submitted from over 30 countries.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1617028943</created>  <gmt_created>2021-03-29 14:42:23</gmt_created>  <changed>1617028943</changed>  <gmt_changed>2021-03-29 14:42:23</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Yang’s research, which is being led by her Ph.D. student Jiaao Chen, aims to alleviate dependence of supervised models on labeled data via data augmentation approaches.]]></teaser>  <type>news</type>  <sentence><![CDATA[Yang’s research, which is being led by her Ph.D. 
student Jiaao Chen, aims to alleviate dependence of supervised models on labeled data via data augmentation approaches.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-03-29T00:00:00-04:00</dateline>  <iso_dateline>2021-03-29T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-03-29 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>630588</item>      </media>  <hg_media>          <item>          <nid>630588</nid>          <type>image</type>          <title><![CDATA[Diyi Yang 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Diyi_Yang.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Diyi_Yang.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Diyi_Yang.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Diyi_Yang.jpg?itok=6jzP0Yjh]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1578338255</created>          <gmt_created>2020-01-06 19:17:35</gmt_created>          <changed>1578338255</changed>          <gmt_changed>2020-01-06 19:17:35</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group 
id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="645531">  <title><![CDATA[Georgia Tech Receives $2.2M in Toyota Research Institute Robotics Funding]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1616160376</created>  <gmt_created>2021-03-19 13:26:16</gmt_created>  <changed>1616160376</changed>  <gmt_changed>2021-03-19 13:26:16</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Chair’s Professorship for Teaching Excellence]]></publication>  <article_dateline>2021-03-19T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-03-19T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-03-19T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bme.gatech.edu/bme/georgia-tech-receives-22m-toyota-research-institute-robotics-funding]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="645473">  <title><![CDATA[Sigma Xi Honors Voit, Mitchell for Impactful 
Research with 2021 Awards]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1616004361</created>  <gmt_created>2021-03-17 18:06:01</gmt_created>  <changed>1616004361</changed>  <gmt_changed>2021-03-17 18:06:01</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Maric Cimilluca]]></publication>  <article_dateline>2021-03-17T00:00:00-04:00</article_dateline>  <iso_article_dateline>2021-03-17T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2021-03-17T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.bme.gatech.edu/bme/sigma-xi-honors-voit-mitchell-impactful-research-2021-awards]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="645181">  <title><![CDATA[Celebrating Women in Computing]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1615385723</created>  <gmt_created>2021-03-10 14:15:23</gmt_created>  <changed>1615385723</changed>  <gmt_changed>2021-03-10 14:15:23</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Celebrating Women in Computing]]></publication>  <article_dateline>2021-03-10T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-03-10T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-03-10T00:00:00-05:00</gmt_article_dateline>  
<article_url><![CDATA[https://womenshistorymonth.cc.gatech.edu/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="644764">  <title><![CDATA[Robotics Ph.D. Student Named 2021 Adobe Research Fellow]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1614371474</created>  <gmt_created>2021-02-26 20:31:14</gmt_created>  <changed>1614371474</changed>  <gmt_changed>2021-02-26 20:31:14</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Robotics Ph.D. Student Named 2021 Adobe Research Fellow]]></publication>  <article_dateline>2021-02-26T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-02-26T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-02-26T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[http://bit.ly/3ks9dQs]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="644694">  <title><![CDATA[An Algorithm Is Helping a Community Detect Lead Pipes]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  
<author>ablinder6</author>  <status>1</status>  <created>1614281526</created>  <gmt_created>2021-02-25 19:32:06</gmt_created>  <changed>1614281526</changed>  <gmt_changed>2021-02-25 19:32:06</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Wired]]></publication>  <article_dateline>2021-02-25T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-02-25T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-02-25T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.wired.com/story/algorithm-helping-community-detect-lead-pipes/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="644630">  <title><![CDATA[Learning Machines: Swati Gupta Explains Optimization]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1614184292</created>  <gmt_created>2021-02-24 16:31:32</gmt_created>  <changed>1614184292</changed>  <gmt_changed>2021-02-24 16:31:32</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2021-02-24T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-02-24T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-02-24T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3pCtHa2]]></article_url>  
<media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="644496">  <title><![CDATA[Creating the Next Generation]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1613750799</created>  <gmt_created>2021-02-19 16:06:39</gmt_created>  <changed>1613750799</changed>  <gmt_changed>2021-02-19 16:06:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Creating the Next Generation]]></publication>  <article_dateline>2021-02-19T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-02-19T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-02-19T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/GTComputingBlackHistoryMonth]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics> 
   <userdata><![CDATA[]]></userdata></node><node id="644380">  <title><![CDATA[Ph.D. Student Earns 2021 Focus Fellowship from Georgia Tech's Office of Minority Educational Development]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing (IC) Ph.D. student <strong>Kantwon Rogers</strong> was awarded a 2021 Focus Fellowship by Georgia Tech&rsquo;s <a href="https://omed.gatech.edu/">Office of Minority Educational Development</a> (OMED).</p><p>The award recognizes participants in the <a href="https://focus.gatech.edu/">Focus Program</a> who have demonstrated academic excellence, community leadership, and been granted admittance to a graduate program. The Focus Program aims to introduce minority students to graduate school in hopes of increasing the number who pursue higher degrees.</p><p>Rogers attended the Focus Program five years ago as an undergraduate student at Georgia Tech.</p><p>&ldquo;It helped me learn about grad school and set me up for success,&rdquo; Rogers said of the program.</p><p>The award, which carries a prize of up to $2,500 per student based on funds available and number of awardees, is not based on specific research but recognizes overall accomplishments. In an application essay, Rogers shared how OMED was pivotal to his success at Georgia Tech.</p><p>As an undergraduate, he participated in the <a href="https://omed.gatech.edu/programs/challenge">Challenge Program</a>, a five-week academic residential program for incoming first-year students. Later, he became a counselor in the same program, an OMED tutor, a Focus participant, a Focus panelist, and last summer a computer science (CS) instructor in the Challenge program.</p><p>&ldquo;It was really spooky because I was teaching the new Challenge students in the exact same room that I sat in when I was learning CS for the first time in Challenge a decade ago,&rdquo; Rogers said. &ldquo;Truly full circle. 
OMED has truly been a foundation for me here at Georgia Tech, and I am eternally grateful.&rdquo;</p><p>Rogers&rsquo; research focuses on human-robot interaction, investigating the effects that intelligent agent verbal deception has on human interaction.</p><p>&ldquo;Animals deceive. Humans deceive. Should robots and AI deceive?&rdquo; Rogers poses in his research tagline.</p><p>Additionally, the work aims to provide AI systems the ability to autonomously produce contextually meaningful and successfully deceptive utterances while determining when it is appropriate to verbally deceive humans.</p><p>He is advised by IC Chair <strong>Ayanna Howard</strong>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1613581028</created>  <gmt_created>2021-02-17 16:57:08</gmt_created>  <changed>1613581728</changed>  <gmt_changed>2021-02-17 17:08:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The award recognizes participants in the Focus Program who have demonstrated academic excellence, community leadership, and been granted admittance to a graduate program.]]></teaser>  <type>news</type>  <sentence><![CDATA[The award recognizes participants in the Focus Program who have demonstrated academic excellence, community leadership, and been granted admittance to a graduate program.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-02-17T00:00:00-05:00</dateline>  <iso_dateline>2021-02-17T00:00:00-05:00</iso_dateline>  <gmt_dateline>2021-02-17 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>585962</item>      </media>  <hg_media>          <item>     
     <nid>585962</nid>          <type>image</type>          <title><![CDATA[Kantwon Rogers 2]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[_MG_4285.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/_MG_4285.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/_MG_4285.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/_MG_4285.jpg?itok=rz5dBUez]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1484253211</created>          <gmt_created>2017-01-12 20:33:31</gmt_created>          <changed>1484253211</changed>          <gmt_changed>2017-01-12 20:33:31</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="644225">  <title><![CDATA[Professor Ghassan AlRegib Discusses Deep Learning ]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1613397575</created>  <gmt_created>2021-02-15 13:59:35</gmt_created>  <changed>1613397575</changed>  <gmt_changed>2021-02-15 
13:59:35</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Alexis Martinez]]></publication>  <article_dateline>2021-02-15T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-02-15T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-02-15T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://open.spotify.com/episode/4lj0OPCTbMwopH4TBdFRD9]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="643962">  <title><![CDATA[Jan Shi Awarded ASQ Shewhart Medal]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1612792116</created>  <gmt_created>2021-02-08 13:48:36</gmt_created>  <changed>1612792116</changed>  <gmt_changed>2021-02-08 13:48:36</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Robert Louis Stevenson]]></publication>  <article_dateline>2021-02-08T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-02-08T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-02-08T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.isye.gatech.edu/news/jan-shi-awarded-asq-shewhart-medal]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      
</news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="643813">  <title><![CDATA[GA Tech, Facebook partner to engage Black, Latino students in AI education]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1612361187</created>  <gmt_created>2021-02-03 14:06:27</gmt_created>  <changed>1612361187</changed>  <gmt_changed>2021-02-03 14:06:27</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Tech Transfer Central]]></publication>  <article_dateline>2021-02-03T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-02-03T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-02-03T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://techtransfercentral.com/2021/02/02/ga-tech-facebook-partner-to-engage-black-latino-students-in-ai-education/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="643757">  <title><![CDATA[Meet ML@GT: Joanne Truong is Developing Robots for Complex, Real World Situations]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1612273938</created>  <gmt_created>2021-02-02 13:52:18</gmt_created>  <changed>1612273938</changed>  <gmt_changed>2021-02-02 13:52:18</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  
<type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2021-02-02T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-02-02T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-02-02T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[http://bit.ly/3tgFYnM]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="643612">  <title><![CDATA[Georgia Tech Research Highlights Premier Artificial Intelligence Conference]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech faculty and student researchers will figure prominently in the proceedings of the <a href="https://aaai.org/Conferences/AAAI-21/">35<sup>th</sup> AAAI Conference on Artificial Intelligence</a>, being held virtually from Feb. 2-9.</p><p>Twenty-three members of the Georgia Tech community contributed to 11 papers that will be presented at the conference, while two longtime contributors will join the ranks of the prestigious AAAI Fellows program.</p><p><a href="http://ic.gatech.edu/">School of Interactive Computing</a> Chair <strong>Ayanna Howard</strong> and Professor <strong>Ashok Goel</strong>, the 2021 inductees, join <a href="http://cc.gatech.edu/">College of Computing</a> Dean <strong>Charles Isbell</strong> (elected in 2019) and Regents&rsquo; Professor Emerita <strong>Janet Kolodner</strong> (elected in 1992) in the fellowship, giving the Institute four members. 
The program recognizes individuals who have made significant, sustained contributions to the field of artificial intelligence.</p><p>[<strong>Related news:</strong> <a href="https://www.cc.gatech.edu/news/643355/ic-professors-howard-goel-named-2021-aaai-fellows">IC Professors Howard, Goel Named 2021 AAAI Fellows</a>]</p><p>Notable research among the 11 papers accepted to AAAI 2021 includes work from a multi-institution team working to understand and improve forecasting models of influenza-like illnesses like Covid-19. Effective forecasting is even more challenging amidst the current pandemic, when counts are affected by various factors such as symptomatic similarities.</p><p>The approach in this paper steers historical forecasting models to new scenarios where the flu and Covid-19 co-exist, demonstrating success in adaptation without sacrificing overall performance.</p><p>Georgia Tech&rsquo;s <strong>Alexander Rodr&iacute;guez</strong> and <strong>B. Aditya Prakash</strong> are co-authors on the paper, along with <strong>Nikhil Muralidhar</strong>, <strong>Anika Tabassum</strong>, and <strong>Naren Ramakrishnan</strong> of Virginia Tech, and <strong>Bijaya Adhikari</strong> of the University of Iowa.</p><p>[<strong>Related news:</strong> <a href="https://www.cc.gatech.edu/news/642638/research-team-wins-two-covid-19-challenges-one-week">Research Team Wins Two Covid-19 Challenges in One Week</a>]</p><p>Explore Georgia Tech&rsquo;s presence in this visualization and view a list of papers below.</p><p><a href="https://public.tableau.com/views/AAAI2021-GeorgiaTechAIresearch/Dashboard1?:language=en&amp;:display_count=y&amp;:origin=viz_share_link:showVizHome=no">INTERACTIVE VISUALIZATION: Georgia Tech @ AAAI 2021</a></p><ul><li><a href="https://www.medrxiv.org/content/10.1101/2020.09.28.20203109v2">DeepCOVID: An Operational Deep Learning-driven Framework for Explainable Real-time COVID-19 Forecasting</a> (Alexander Rodr&iacute;guez, Anika Tabassum, Jiaming Cui, Jiajia 
Xie, Javen Ho, Pulak Agarwal, Bijaya Adhikari, B. Aditya Prakash)<br />&nbsp;</li><li><a href="https://www.medrxiv.org/content/10.1101/2020.09.28.20203109v2">Semantic MapNet: Building Allocentric Semantic Maps and Representations from Egocentric Views</a> (Vincent Cartillier, Zhile Ren, Neha Jain, Stefan Lee, Irfan Essa, Dhruv Batra)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/2009.11407.pdf">Steering a Historical Disease Forecasting Model Under a Pandemic: Case of Flu and COVID-19</a> (Alexander Rodr&iacute;guez, Nikhil Muralidhar, Bijaya Adhikari, Anika Tabassum, Naren Ramakrishnan, B. Aditya Prakash)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/2009.11407.pdf">Bias and Variance of Post-processing in Differential Privacy</a> (Keyu Zhu, Pascal Van Hentenryck, Ferdinando Fioretto)<br />&nbsp;</li><li>Branch and Price for Bus Driver Scheduling with Complex Break Constraints (Lucas Kletzander, Nysret Musliu, Pascal Van Hentenryck)<br />&nbsp;</li><li>Detecting and Adapting to Novelty in Games (Xiangyu Peng, Jonathan Balloch, Mark Riedl)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/2009.12562.pdf">Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach</a> (Cuong Tran, Ferdinando Fioretto, Pascal Van Hentenryck)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/2010.00685.pdf">How to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and Act in Fantasy Worlds</a>&nbsp;(Prithviraj Ammanabrolu, Jack Urbanek, Margaret Li, Arthur Szlam, Tim Rocktaschel, Jason Weston)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/2009.00829.pdf">Automated Storytelling via Causal, Commonsense Plot Ordering</a>&nbsp;(Prithviraj Ammanabrolu, Wesley Cheung, William Broniec, Mark Riedl)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/1902.06007.pdf">Encoding Human Domain Knowledge to Warm Start Reinforcement Learning</a>&nbsp;(Andrew Silva, Matthew Gombolay)</li><li><a href="https://arxiv.org/abs/2101.06351">Weakly-Supervised 
Hierarchical Models for Predicting Persuasive Strategies in Good-faith Textual Requests</a> (Jiaao Chen, Diyi Yang)</li></ul>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1611926692</created>  <gmt_created>2021-01-29 13:24:52</gmt_created>  <changed>1612194510</changed>  <gmt_changed>2021-02-01 15:48:30</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Twenty-three members of the Georgia Tech community contributed to 11 papers that will be presented virtually at AAAI 2021, while two longtime contributors will join the ranks of the prestigious AAAI Fellows program.]]></teaser>  <type>news</type>  <sentence><![CDATA[Twenty-three members of the Georgia Tech community contributed to 11 papers that will be presented virtually at AAAI 2021, while two longtime contributors will join the ranks of the prestigious AAAI Fellows program.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-01-29T00:00:00-05:00</dateline>  <iso_dateline>2021-01-29T00:00:00-05:00</iso_dateline>  <gmt_dateline>2021-01-29 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>643611</item>          <item>643694</item>      </media>  <hg_media>          <item>          <nid>643611</nid>          <type>image</type>          <title><![CDATA[Artificial Intelligence]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[artificial-intelligence-4469138_1280.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/artificial-intelligence-4469138_1280.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/artificial-intelligence-4469138_1280.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/artificial-intelligence-4469138_1280.jpg?itok=wYW4x4S2]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Artificial Intelligence]]></image_alt>                    <created>1611926616</created>          <gmt_created>2021-01-29 13:23:36</gmt_created>          <changed>1611926616</changed>          <gmt_changed>2021-01-29 13:23:36</gmt_changed>      </item>          <item>          <nid>643694</nid>          <type>image</type>          <title><![CDATA[AAAI 2021 Visualization]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[aaai_viz.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/aaai_viz.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/aaai_viz.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/aaai_viz.jpg?itok=7AzuYYWQ]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech at AAAI 2021]]></image_alt>                    <created>1612194422</created>          <gmt_created>2021-02-01 15:47:02</gmt_created>          <changed>1612194422</changed>          <gmt_changed>2021-02-01 15:47:02</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and 
Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="643420">  <title><![CDATA[National Science Foundation Funds Three-Year Project to Study Gene Expression in Single Cells ]]></title>  <uid>34540</uid>  <body><![CDATA[<p>Proteins play an essential role in determining structural components of body tissue, enzymes, and antibodies. Understanding how cells determine which proteins to produce is the key to preventing disease, cellular mutations, and more.&nbsp;</p><p>A gene must first be expressed before its protein product is produced. Now, with advancements in computational science, researchers are asking whether we can computationally identify the mechanism that decides which genes are expressed in a cell.</p><p>The answer to this billion-dollar question lies in predicting genetic regulatory networks using large-scale single cell gene-expression data.</p><p>School of Computational Science and Engineering (CSE) Assistant Professor&nbsp;<strong>Xiuwei Zhang&nbsp;</strong>is the recipient of a&nbsp;<a href="https://www.nsf.gov/awardsearch/showAward?AWD_ID=2019771">$400,000 National Science Foundation grant</a>&nbsp;supporting the creation of new computing methods that aim to do just that.</p><p>&ldquo;We know that if we detect expression for a gene then it is likely that its proteins are also present. 
Since experimentally measuring protein abundance in cells is very difficult,&nbsp;researchers look to gene regulatory networks to understand which proteins are present instead,&rdquo; she said.</p><p>A gene regulatory network is a directed graph which shows, out of tens of thousands of genes, which genes are controlling other genes.</p><p>&ldquo;A common theory people use about molecular biology is that one gene corresponds to one mRNA and then corresponds to one protein. And most of the existing work to learn the gene regulatory networks also uses this theory. However, this theory is over-simplified, and the fact is that one gene can correspond to multiple mRNAs, thus multiple proteins,&rdquo; said Zhang.</p><p>This is where Zhang&rsquo;s research breaks from traditional approaches and considers this one-to-many relationship in its gene regulatory networks.&nbsp;</p><p>&ldquo;Now since one gene corresponds to multiple isoforms, in our gene regulatory networks, the nodes are isoforms instead of genes, which can provide a more accurate representation of the actual regulatory mechanism in cells,&rdquo; she said.</p><p>According to Zhang, recent advances in single cell RNA-sequencing technology have introduced new opportunities to infer high-quality regulatory networks at this level, but also pose new computational challenges.</p><p>In response to these challenges, a method for developing a transcript assembler that can quantify the expression level of an isoform is needed to build an accurate and scalable regulatory network. This part of the work is led by Zhang&rsquo;s collaborator,&nbsp;Pennsylvania&nbsp;State University Assistant Professor&nbsp;<strong>Mingfu Shao</strong>.&nbsp;</p><p>Another challenge the researchers face in assessing network accuracy has to do with cell ordering, which plays a major role in inferring an accurate network. Depending on the level of error, cell ordering will determine whether a regulatory network&rsquo;s predictions are accurate. 
To ensure this new network&rsquo;s predictions are accurate, Zhang has the ambitious goal of creating a method that can perform cell ordering and network inference simultaneously.&nbsp;</p><p>Ultimately, the new methods will be used in the field of immunology to study cellular mechanisms in steroid-producing cells with collaborators at&nbsp;Cambridge University.&nbsp;</p><p>&ldquo;This is very important for many biological events such as if disease happens during embryo development or to the immune system. It is our goal to be able to see from data that the level of an expression of a certain gene is not normal and then trace the problem through the regulatory network. Once this is done, we can begin targeting the upstream genes for drug or vaccine development,&rdquo; said Zhang.</p>]]></body>  <author>Kristen Perez</author>  <status>1</status>  <created>1611609154</created>  <gmt_created>2021-01-25 21:12:34</gmt_created>  <changed>1611929568</changed>  <gmt_changed>2021-01-29 14:12:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Assistant Professor Xiuwei Zhang was awarded an NSF grant to understand genetic expression at the isoform level.]]></teaser>  <type>news</type>  <sentence><![CDATA[Assistant Professor Xiuwei Zhang was awarded an NSF grant to understand genetic expression at the isoform level.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-01-25T00:00:00-05:00</dateline>  <iso_dateline>2021-01-25T00:00:00-05:00</iso_dateline>  <gmt_dateline>2021-01-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[kristen.perez@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Kristen Perez</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>643419</item>      </media>  <hg_media>          <item>          <nid>643419</nid>          <type>image</type>    
      <title><![CDATA[Network Inference]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Protein_ESR1_PDB_1a52.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Protein_ESR1_PDB_1a52.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Protein_ESR1_PDB_1a52.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Protein_ESR1_PDB_1a52.png?itok=WhqnGaTc]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[A protein 3D structure]]></image_alt>                    <created>1611608873</created>          <gmt_created>2021-01-25 21:07:53</gmt_created>          <changed>1611609377</changed>          <gmt_changed>2021-01-25 21:16:17</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="186817"><![CDATA[isoform]]></keyword>          <keyword tid="3003"><![CDATA[protein]]></keyword>          <keyword tid="76231"><![CDATA[Computational Science and Engineering]]></keyword>          <keyword tid="4305"><![CDATA[cse]]></keyword>          <keyword tid="186818"><![CDATA[Computational Bioscience]]></keyword>      </keywords>  <core_research_areas>          <term tid="39441"><![CDATA[Bioengineering and Bioscience]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="643450">  <title><![CDATA[Georgia Tech team develops AI to read EV charging 
station reviews to find infrastructure gaps]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1611684965</created>  <gmt_created>2021-01-26 18:16:05</gmt_created>  <changed>1611684965</changed>  <gmt_changed>2021-01-26 18:16:05</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[glacial ice melt]]></publication>  <article_dateline>2021-01-26T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-01-26T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-01-26T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.greencarcongress.com/2021/01/20210125-gatech.html]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="643355">  <title><![CDATA[IC Professors Howard, Goel Named 2021 AAAI Fellows]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Chair <strong>Ayanna Howard</strong> and Professor <strong>Ashok Goel</strong> were both named <a href="https://www.aaai.org/Awards/fellows.php">2021 Fellows by the Association for the Advancement of Artificial Intelligence</a> (AAAI).</p><p>The AAAI Fellows program recognizes individuals who have made significant, sustained contributions &ndash; usually over at least a 10-year period &ndash; to the field of artificial intelligence (AI).</p><p>Goel&rsquo;s research, which spans about 35 years, has connected fields of AI, cognitive science, and human cognition. 
Increasingly, it has merged the fields of AI and education, culminating in his lab&rsquo;s groundbreaking work on <a href="https://emprize.gatech.edu/">Jill Watson</a>, a virtual teaching assistant that can answer student questions in discussion forums for online classes. This trailblazing work has been recognized by numerous media outlets across the globe and has enormous long-term implications for the future of education.</p><p>&ldquo;This is an exciting time for AI research into cognitive systems,&rdquo; Goel said. &ldquo;In one direction, my research uses the needs of human learning to ground and inspire novel AI techniques and tools. In the other, it uses AI theories and methods to provide new insights into human cognition and behavior.&rdquo;</p><p>The team behind Jill Watson and additional AI techniques for education, called emPrize, <a href="https://www.cc.gatech.edu/news/631981/team-makes-semifinals-global-ai-competition">advanced to the semifinals of the international XPrize AI competition in 2020</a>.</p><p>Howard, <a href="https://www.cc.gatech.edu/news/641685/renowned-roboticist-departing-georgia-tech-new-position">who was recently named the next Dean of Engineering at The Ohio State University</a>, has performed similarly impactful research over her time in the field. As the director of the Human-Automation Systems Lab (HumAnS) at Georgia Tech, she has led research in conceptualizing humanized intelligence, the process of embedding human cognitive capability into the control path of autonomous systems.</p><p>Specifically, the lab studies how human-inspired techniques, such as soft computing methodologies, sensing, and learning, can be used to enhance the autonomous capabilities of intelligent systems. 
This work has had impact in both virtual AI and robotics and has led to enterprises like <a href="http://zyrobotics.com/">Zyrobotics</a>, the company Howard co-founded that produces mobile therapy and educational products for children with differing needs.</p><p>Additionally, she has been a spokesperson for the importance of ethical research in the field.</p><p>&ldquo;We&rsquo;re at such a critical moment in the development of artificial intelligence,&rdquo; Howard said. &ldquo;There is incredible possibility, but equally daunting challenges. It&rsquo;s an honor to be recognized for the work we are doing in this field, but it&rsquo;s far from over. My hope is that I can inspire future researchers to pursue impactful and ethical advancements in the field.&rdquo;</p><p>Eight other researchers, in addition to Goel and Howard, were selected as 2021 Fellows and will be recognized at the <a href="https://aaai.org/Conferences/AAAI-21/">2021 AAAI conference</a>, being held virtually Feb. 2-9.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1611340903</created>  <gmt_created>2021-01-22 18:41:43</gmt_created>  <changed>1611340903</changed>  <gmt_changed>2021-01-22 18:41:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The AAAI Fellows program recognizes individuals who have made significant, sustained contributions to the field of artificial intelligence (AI).]]></teaser>  <type>news</type>  <sentence><![CDATA[The AAAI Fellows program recognizes individuals who have made significant, sustained contributions to the field of artificial intelligence (AI).]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-01-22T00:00:00-05:00</dateline>  <iso_dateline>2021-01-22T00:00:00-05:00</iso_dateline>  <gmt_dateline>2021-01-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David 
Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>643352</item>      </media>  <hg_media>          <item>          <nid>643352</nid>          <type>image</type>          <title><![CDATA[Ashok Goel and Ayanna Howard]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Ashok Goel and Ayanna Howard.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Ashok%20Goel%20and%20Ayanna%20Howard.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Ashok%20Goel%20and%20Ayanna%20Howard.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Ashok%2520Goel%2520and%2520Ayanna%2520Howard.png?itok=UtUtQ-xW]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Ashok Goel and Ayanna Howard]]></image_alt>                    <created>1611340547</created>          <gmt_created>2021-01-22 18:35:47</gmt_created>          <changed>1611340547</changed>          <gmt_changed>2021-01-22 18:35:47</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181920"><![CDATA[cc-research; ic-ai-ml; ic-robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People 
and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="643307">  <title><![CDATA[IC Associate Professor Wins 2021 ACM-W Rising Star Award]]></title>  <uid>33939</uid>  <body><![CDATA[<p><a href="http://ic.gatech.edu/">School of Interactive Computing</a> Associate Professor <strong>Munmun De Choudhury</strong> was named a winner of the <a href="https://women.acm.org/awards/rising-star-award/">2021 ACM-W Rising Star Award</a>.</p><p>The award, bestowed by the Association for Computing Machinery, recognizes a woman whose early-career research has had a significant impact on the computing discipline, as measured by factors like societal impact, frequent citation of her work, or the creation of a new research area.</p><p>De Choudhury will receive a framed certificate and a $1,000 stipend for the recognition, which is in its first year and will be given annually. She will be recognized for the award at a research conference to be named later.</p><p>&ldquo;I feel deeply honored for this recognition and owe my successes to my wonderful students and collaborators, as well as the intellectual freedom provided by Georgia Tech&rsquo;s College of Computing, which has helped trailblaze interdisciplinary research in computing, like mine, for years,&rdquo; she said.</p><p>De Choudhury&rsquo;s work leverages large-scale online social data and advances in machine learning to help answer fundamental questions about our social lives. Chief among these are questions in the field of mental health care &ndash; understanding mental health, improving access to care, and more. 
Her work has been recognized with a number of other awards, including 13 best paper and honorable mention awards from the ACM and AAAI, and has been covered by publications such as The New York Times, BBC, and NPR.</p><p>In addition to her personal appreciation, De Choudhury stressed the importance of recognizing the work of under-represented researchers in the computing field.</p><p>&ldquo;I&rsquo;d like to commend the efforts of ACM-W for creating this new opportunity to celebrate the research of a group under-represented in the computing field,&rdquo; she said. &ldquo;There is a long way to go when it comes to computing making a significant positive impact on a pervasive societal problem like mental health. Still, this award serves as a valuable encouragement for the next frontier of my research program.&rdquo;</p><p>De Choudhury leads the <a href="http://socweb.cc.gatech.edu/">Social Dynamics and Wellbeing Lab</a>. Research from the lab, both past and current, can be explored in more detail on its website.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1611259070</created>  <gmt_created>2021-01-21 19:57:50</gmt_created>  <changed>1611259070</changed>  <gmt_changed>2021-01-21 19:57:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The award recognizes a woman whose early-career research has had a significant impact on the computing discipline.]]></teaser>  <type>news</type>  <sentence><![CDATA[The award recognizes a woman whose early-career research has had a significant impact on the computing discipline.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-01-21T00:00:00-05:00</dateline>  <iso_dateline>2021-01-21T00:00:00-05:00</iso_dateline>  <gmt_dateline>2021-01-21 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a 
href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>587685</item>      </media>  <hg_media>          <item>          <nid>587685</nid>          <type>image</type>          <title><![CDATA[Munmun De Choudhury]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[munmun portrait_horz.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/munmun%20portrait_horz.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/munmun%20portrait_horz.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/munmun%2520portrait_horz.jpg?itok=wrtogdb-]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech Assistant Professor Munmun De Choudhury]]></image_alt>                    <created>1487686001</created>          <gmt_created>2017-02-21 14:06:41</gmt_created>          <changed>1487783642</changed>          <gmt_changed>2017-02-22 17:14:02</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182015"><![CDATA[cc-research; ic-ai-ml; ic-hcc; ic-social-computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  
<related></related>  <userdata><![CDATA[]]></userdata></node><node id="643200">  <title><![CDATA[Cassie Mitchell Has Made a Medal-Winning Career Out of Adapting to Changes]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1611170628</created>  <gmt_created>2021-01-20 19:23:48</gmt_created>  <changed>1611170628</changed>  <gmt_changed>2021-01-20 19:23:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[John R. Ragazzini Education Award]]></publication>  <article_dateline>2021-01-20T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-01-20T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-01-20T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.teamusa.org/USParaTrackandField/Features/2021/January/17/Cassie-Mitchell-Has-Made-A-Medal-Winning-Career-Out-Of-Adapting-To-Changes]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="643133">  <title><![CDATA[Undergraduates Design Covid Forecasting Dashboard]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1611155895</created>  <gmt_created>2021-01-20 15:18:15</gmt_created>  <changed>1611155895</changed>  <gmt_changed>2021-01-20 15:18:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Undergraduates Design Covid Forecasting Dashboard]]></publication>  
<article_dateline>2021-01-20T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-01-20T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-01-20T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/643068/undergraduates-design-covid-forecasting-dashboard]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="642995">  <title><![CDATA[We Should Have Known SolarWinds Would Be a Target]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1610731917</created>  <gmt_created>2021-01-15 17:31:57</gmt_created>  <changed>1610731917</changed>  <gmt_changed>2021-01-15 17:31:57</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Creating Helpful Incentives to Produce Semiconductors for America and Foundries Act]]></publication>  <article_dateline>2021-01-15T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-01-15T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-01-15T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cfr.org/blog/we-should-have-known-solarwinds-would-be-target]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="642944">  
<title><![CDATA[Assistant Professor Named to IEEE’s AI’s 10 to Watch List]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1610650734</created>  <gmt_created>2021-01-14 18:58:54</gmt_created>  <changed>1610650734</changed>  <gmt_changed>2021-01-14 18:58:54</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2021-01-14T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-01-14T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-01-14T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3byNVOS]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="589608"><![CDATA[Machine Learning]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="642592">  <title><![CDATA[Malcolm Gladwell Chats with Dean During a &#039;Return to the 404&#039;]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body> 
 <author>ablinder6</author>  <status>1</status>  <created>1609945376</created>  <gmt_created>2021-01-06 15:02:56</gmt_created>  <changed>1609945376</changed>  <gmt_changed>2021-01-06 15:02:56</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Malcolm Gladwell Chats with Dean During a &#039;Return to the 404&#039;]]></publication>  <article_dateline>2021-01-06T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-01-06T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-01-06T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/642583/outliers-author-chats-dean-during-return-404]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="642538">  <title><![CDATA[Meet ML@GT: Xinshi Chen Seeks to Bridge Connections Between Deep Learning Models and Traditional Algorithms]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1609865711</created>  <gmt_created>2021-01-05 16:55:11</gmt_created>  <changed>1609865711</changed>  <gmt_changed>2021-01-05 16:55:11</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2021-01-05T00:00:00-05:00</article_dateline>  <iso_article_dateline>2021-01-05T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2021-01-05T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/38dS8oS]]></article_url>  <media>      
</media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="642217">  <title><![CDATA[Detection of eye contact with deep neural networks is as accurate as human experts]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1608231248</created>  <gmt_created>2020-12-17 18:54:08</gmt_created>  <changed>1608231248</changed>  <gmt_changed>2020-12-17 18:54:08</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[John Peatman]]></publication>  <article_dateline>2020-12-17T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-17T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-17T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.nature.com/articles/s41467-020-19712-x]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="642198">  <title><![CDATA[ML@GT Awards First-Ever Doctorate in Machine Learning from Georgia Tech]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  
<body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1608224451</created>  <gmt_created>2020-12-17 17:00:51</gmt_created>  <changed>1608224451</changed>  <gmt_changed>2020-12-17 17:00:51</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2020-12-17T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-17T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-17T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[http://bit.ly/38cOMBs]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="589608"><![CDATA[Machine Learning]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="129"><![CDATA[Institute and Campus]]></category>          <category tid="42911"><![CDATA[Education]]></category>          <category tid="134"><![CDATA[Student and Faculty]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="642143">  <title><![CDATA[Q&A: De'Aira Bryant Discusses Her Experience Programming a Robot for the Movie Superintelligence]]></title>  <uid>33939</uid>  
<body><![CDATA[<p><strong>De&rsquo;Aira Bryant</strong> didn&rsquo;t come to Georgia Tech to work in the movie industry. Her interests lie within the field of robotics, where she works on projects that will increase the quality of human life.</p><p>Being in the heart of Atlanta, however, a burgeoning hub of the film industry, comes with a few perks. Last year, Bryant was able to take advantage of one when she was contacted by representatives from the production crew of <em>Superintelligence</em>. The movie stars Melissa McCarthy as a woman who must prove to an artificial intelligence that humanity is worth saving and was recently released on HBO Max.</p><p>For the movie, Bryant was asked to program a Nao, a humanoid robot she uses in the <a href="https://humanslab.ece.gatech.edu/">Human-Automation Systems (HumAnS) Lab</a> run by her advisor, School of Interactive Computing Chair Ayanna Howard. Read about Bryant&rsquo;s experience programming the biggest star on the set.</p><p><strong>How did this opportunity to work with <em>Superintelligence</em> come about, and what was the experience like?</strong></p><p>The production team reached out to the College of Computing. They were interested in having a robot for a scene and needed someone who could program the Nao to match the scene they had written. They reached out to Dr. Howard because they knew she had that type of robot, and she reached out to me because I&rsquo;m the person who does most of the customized programming for this particular robot. If there&rsquo;s a script or movements or whatever, I&rsquo;m the choreographer.</p><p>It was exciting. I was like, &ldquo;Oh my goodness, this is for a movie.&rdquo; I had no idea what it was about, but I was just excited to be a part of it. They asked if their ideas were possible. They were like, &ldquo;We don&rsquo;t know what it can do, but we think it looks cool. 
Can you make it do this?&rdquo; We talked on the phone, and then I went to work.</p><p><strong>How long did you have to program it?</strong></p><p>I had about a week to get it ready. I had this idea of what they wanted, and I just tried to program it as best as I could.</p><p><strong>So, tell me about the day of. What was it like being on set?</strong></p><p>I took the robot to the Klaus Advanced Computing Building. They were filming in there. It was so exciting to see everything. I had to tell the robot to go on their cue, so I was sitting right behind the camera. I got to meet Melissa McCarthy and some of the other stars, and I got a few pictures with them that I&rsquo;m excited to finally be able to share with everyone. Everyone was so welcoming and understanding that the robot needed some time. I like to say that the robot was the biggest superstar on the set. It had its moments where it was like, &ldquo;I&rsquo;m not ready yet. My joint isn&rsquo;t quite ready to do this movement.&rdquo; They were understanding and eager to learn. They wanted their own pictures with the robot and everything, and had their own questions that I was excited to answer.</p><p><strong>For many people who aren&rsquo;t roboticists or AI researchers, their first experience with robots is in mass media like movies or TV shows, and normally it&rsquo;s some dystopian or disaster scenario. How seriously did you take that responsibility or opportunity to portray the lighter, more realistic side?</strong></p><p>I think for a lot of people, robots &ndash; especially these humanoid ones &ndash; have been largely portrayed negatively. Portrayals focus on disaster cases that may never happen in the next 100 years, if ever. There hasn&rsquo;t been a lot of mass media attention that focuses on more positive use cases. I take that very seriously in our work, just knowing that we focus on people, on children who can benefit from the technology and have it improve their quality of life. 
It&rsquo;s important to show those cases to affect the narrative. But we also want to highlight the concerns that are justified: things like bias and the ethics of using robotics in certain domains. Those are real things that people are working to mitigate now, so we can bring people closer to what the field actually looks like by highlighting both.</p><p>Every time I teach kids or teach a class, I start out by showing what robots can actually do. I show videos of them falling over or something like that to illustrate that those terminator or killer-robot scenarios don&rsquo;t happen right now. But there are some other issues that are real and current and pressing, and here&rsquo;s how we address them.</p><p><strong>Being at Georgia Tech with movies filmed nearby has offered these kinds of neat opportunities. How neat is it to have this platform?</strong></p><p>My friends think it&rsquo;s so much cooler that I helped work on a movie that is going to be on HBO Max than for me to have some paper published at a really prestigious conference. The movie resonates with them more, so it&rsquo;s an opportunity to have a connection. They can relate to the technology in a way that is natural to them and ask questions, and I can share more about robotics and my work. That&rsquo;s how we get people interested in the field.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1608073783</created>  <gmt_created>2020-12-15 23:09:43</gmt_created>  <changed>1608073783</changed>  <gmt_changed>2020-12-15 23:09:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[De'Aira Bryant programmed a robot for a scene in the movie Superintelligence. She discusses her experience in this Q&A.]]></teaser>  <type>news</type>  <sentence><![CDATA[De'Aira Bryant programmed a robot for a scene in the movie Superintelligence. 
She discusses her experience in this Q&A.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-12-15T00:00:00-05:00</dateline>  <iso_dateline>2020-12-15T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-12-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>642140</item>      </media>  <hg_media>          <item>          <nid>642140</nid>          <type>image</type>          <title><![CDATA[De'Aira Bryant Superintelligence]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[BryantSuperintelligence2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/BryantSuperintelligence2.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/BryantSuperintelligence2.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/BryantSuperintelligence2.jpg?itok=ioVEhMje]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[De'Aira Bryant works on the set of the movie Superintelligence]]></image_alt>                    <created>1608072918</created>          <gmt_created>2020-12-15 22:55:18</gmt_created>          <changed>1608072918</changed>          <gmt_changed>2020-12-15 22:55:18</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>         
 <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182940"><![CDATA[cc-research; ic-ai-ml; ic-robotics; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="642142">  <title><![CDATA[Sehoon Ha Part of $500k Grant to Make Safer, More Deployable Robots]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Safety is arguably the biggest barrier to large-scale deployability of humanoid assistive robots.</p><p>Large, heavy, and with the potential to suddenly fall over all mean that the risk to humans has remained too high to place this technology in homes, hospitals, retail spaces, or care facilities.</p><p>In 2016, however, researchers at UCLA posed a solution: What if we made robots that just couldn&rsquo;t fall down? Now, researchers at Georgia Tech, in collaboration with UCLA and the University of Southern California, are working to develop a new class of locomotion systems that could enable this technology to become a larger part of our daily lives.</p><p>&ldquo;We have lots of robots,&rdquo; said Sehoon Ha, an assistant professor in Georgia Tech&rsquo;s School of Interactive Computing and a co-principal investigator on the project. &ldquo;But they aren&rsquo;t in our house or in our stores. It&rsquo;s mainly because of safety. I have a young daughter. 
I wouldn&rsquo;t be comfortable with a full-sized humanoid robot in my house.&rdquo;</p><p>Previously, UCLA developed a new class of robots called &ldquo;buoyancy-assisted robots.&rdquo; Instead of human-like hardware that was bulky, heavy, and subject to the pitfalls of gravity, these legged robots remained erect thanks to a body made of helium balloons.</p><p>&ldquo;Even though there is some mechanical or motor error, it never falls,&rdquo; Ha said. &ldquo;It never breaks. It&rsquo;s super light. Even if it might collide with you, it doesn&rsquo;t fall and it can&rsquo;t hurt you.&rdquo;</p><p>Creating a new class of locomotion systems poses two main challenges: designing new hardware that is cheap and safe, and developing an algorithm that supports locomotion and collaboration. This grant will support development of novel frameworks that address a fundamentally new family of legged robots and empower them with reliable locomotion skills.</p><p>&ldquo;The main philosophy is to deploy the reinforcement learning on real hardware,&rdquo; Ha said. &ldquo;This buoyancy-assisted robot is subject to a relatively larger magnitude of drag forces. It&rsquo;s hard to simulate it. There&rsquo;s a discrepancy between simulation and the real world. We want to collect real-world experience and limit the reality gap.&rdquo;</p><p>The technology could help carry out a search and rescue in a disaster relief zone or answer a question in a retail space. 
The new project, funded by a $500,000 grant from the National Science Foundation&rsquo;s National Robotics Initiative, will help create new locomotion control systems using reinforcement learning to improve the state of this technology.</p><p>Already cheaper than their bulkier counterparts, these robots could be as inexpensive as a couple hundred dollars when produced at scale, Ha said.</p><p>&ldquo;Now you might imagine a scenario where you could drop 1,000 of these into a disaster area to carry out search and rescue missions,&rdquo; he said.</p><p>The grant runs for four years, and research from the project will be open-source to encourage additional collaboration. The grant will also support a competition for middle and high school students using the low-cost platforms to foster student interest in the field.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1608073378</created>  <gmt_created>2020-12-15 23:02:58</gmt_created>  <changed>1608073378</changed>  <gmt_changed>2020-12-15 23:02:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Researchers at Georgia Tech, in collaboration with UCLA and the University of Southern California, are working to develop a new class of locomotion systems that could enable buoyancy-assisted robots to become a larger part of our daily lives.]]></teaser>  <type>news</type>  <sentence><![CDATA[Researchers at Georgia Tech, in collaboration with UCLA and the University of Southern California, are working to develop a new class of locomotion systems that could enable buoyancy-assisted robots to become a larger part of our daily lives.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-12-15T00:00:00-05:00</dateline>  <iso_dateline>2020-12-15T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-12-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David 
Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>642141</item>      </media>  <hg_media>          <item>          <nid>642141</nid>          <type>image</type>          <title><![CDATA[Sehoon Ha]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[sehoon.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/sehoon.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/sehoon.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/sehoon.jpg?itok=vnZXPkeU]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Sehoon Ha]]></image_alt>                    <created>1608073322</created>          <gmt_created>2020-12-15 23:02:02</gmt_created>          <changed>1608073322</changed>          <gmt_changed>2020-12-15 23:02:02</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181920"><![CDATA[cc-research; ic-ai-ml; ic-robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  
<files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="642075">  <title><![CDATA[Sex, Race, and Robots author Ayanna Howard discusses how to identify, fight bias]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1607715309</created>  <gmt_created>2020-12-11 19:35:09</gmt_created>  <changed>1607715309</changed>  <gmt_changed>2020-12-11 19:35:09</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Homer Rice Sports Performance]]></publication>  <article_dateline>2020-12-11T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-11T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-11T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.therobotreport.com/howard-discusses-sex-race-robotics-how-fight-bias-ai/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="642074">  <title><![CDATA[Borodovsky-Boguslavsky&#039;s Gift: Georgia Tech Couple Funds Prize for Bioinformatics]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1607715227</created>  <gmt_created>2020-12-11 19:33:47</gmt_created>  <changed>1607715227</changed>  <gmt_changed>2020-12-11 19:33:47</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Jean-Francois Louf]]></publication>  <article_dateline>2020-12-11T00:00:00-05:00</article_dateline>  
<iso_article_dateline>2020-12-11T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-11T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://cos.gatech.edu/news/borodovsky-boguslavskys-gift-georgia-tech-couple-funds-prize-bioinformatics]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="642072">  <title><![CDATA[Preparing for Emergency Response with Partial Network Information]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1607712958</created>  <gmt_created>2020-12-11 18:55:58</gmt_created>  <changed>1607712958</changed>  <gmt_changed>2020-12-11 18:55:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Preparing for Emergency Response with Partial Network Information]]></publication>  <article_dateline>2020-12-11T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-11T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-11T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/642071/preparing-emergency-response-partial-network-information]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641965">  <title><![CDATA[This Robot Can 
Rap — Really]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1607460115</created>  <gmt_created>2020-12-08 20:41:55</gmt_created>  <changed>1607540463</changed>  <gmt_changed>2020-12-09 19:01:03</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Birney Robert]]></publication>  <article_dateline>2020-12-08T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-08T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-08T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.scientificamerican.com/article/this-robot-can-rap-really/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641889">  <title><![CDATA[New Chair Leads the School of Computer Science into a Collaborative Future]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1607350020</created>  <gmt_created>2020-12-07 14:07:00</gmt_created>  <changed>1607350020</changed>  <gmt_changed>2020-12-07 14:07:00</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[New Chair Leads the School of Computer Science into a Collaborative Future]]></publication>  <article_dateline>2020-12-07T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-07T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-07T00:00:00-05:00</gmt_article_dateline>  
<article_url><![CDATA[https://www.cc.gatech.edu/news/641816/new-chair-leads-school-computer-science-collaborative-future]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="633156">  <title><![CDATA[Meet ML@GT: Abhishek Das Wants to Stop Climate Change and Develop AI Agents with Human-Level Skillsets]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1583153745</created>  <gmt_created>2020-03-02 12:55:45</gmt_created>  <changed>1607096683</changed>  <gmt_changed>2020-12-04 15:44:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Tokyo Smart City studio]]></publication>  <article_dateline>2020-03-02T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-03-02T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-03-02T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[http://bit.ly/3a4Otbg]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="455941"><![CDATA[School of Awesome]]></group>      </groups>  <categories>      </categories>  <keywords>          <keyword tid="186389"><![CDATA[Student Highlights]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    
<userdata><![CDATA[]]></userdata></node><node id="633699">  <title><![CDATA[Meet ML@GT: Cusuh Ham, a World Traveler Focused on Understanding Uncertainty in Machine Learning]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1584719385</created>  <gmt_created>2020-03-20 15:49:45</gmt_created>  <changed>1607096650</changed>  <gmt_changed>2020-12-04 15:44:10</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Tokyo Smart City studio]]></publication>  <article_dateline>2020-03-20T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-03-20T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-03-20T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[http://bit.ly/3b9XiAU]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="455941"><![CDATA[School of Awesome]]></group>      </groups>  <categories>      </categories>  <keywords>          <keyword tid="186388"><![CDATA[Student Highlight]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641849">  <title><![CDATA[Coronavirus Vaccine Approval Will Launch Unprecedented Public Health Initiative]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1607024899</created>  <gmt_created>2020-12-03 19:48:19</gmt_created>  <changed>1607024899</changed>  <gmt_changed>2020-12-03 19:48:19</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  
<publication><![CDATA[similiac]]></publication>  <article_dateline>2020-12-03T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-03T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-03T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://rh.gatech.edu/news/641702/coronavirus-vaccine-approval-will-launch-unprecedented-public-health-initiative]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641848">  <title><![CDATA[Coronavirus Vaccine Approval Will Launch Unprecedented Public Health Initiative]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1607024888</created>  <gmt_created>2020-12-03 19:48:08</gmt_created>  <changed>1607024888</changed>  <gmt_changed>2020-12-03 19:48:08</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[similiac]]></publication>  <article_dateline>2020-12-03T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-03T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-03T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://rh.gatech.edu/news/641702/coronavirus-vaccine-approval-will-launch-unprecedented-public-health-initiative]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      
</news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641778">  <title><![CDATA[Isbell to Present Keynote at Premier Neural Processing Conference]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1606918565</created>  <gmt_created>2020-12-02 14:16:05</gmt_created>  <changed>1606918565</changed>  <gmt_changed>2020-12-02 14:16:05</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Isbell to Present Keynote at Premier Neural Processing Conference]]></publication>  <article_dateline>2020-12-02T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-02T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-02T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://b.gatech.edu/2UYu16c]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641768">  <title><![CDATA[Meet ML@GT: Daniel Scarafoni Makes Industrial Tasks Safer]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1606846446</created>  
<gmt_created>2020-12-01 18:14:06</gmt_created>  <changed>1606846446</changed>  <gmt_changed>2020-12-01 18:14:06</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Meet ML@GT: Daniel Scarafoni Makes Industrial Tasks Safer]]></publication>  <article_dateline>2020-12-01T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-01T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-01T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/39y40TQ]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641767">  <title><![CDATA[Yang Named to Forbes 30 Under 30 in Science Class of 2021 ]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1606846386</created>  <gmt_created>2020-12-01 18:13:06</gmt_created>  <changed>1606846386</changed>  <gmt_changed>2020-12-01 18:13:06</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Kausik Chakrabarti]]></publication>  <article_dateline>2020-12-01T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-12-01T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-12-01T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://www.forbes.com/profile/diyi-yang/?list=30under30-science&amp;sh=28925e5060d1]]></article_url>  <media>      
</media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641689">  <title><![CDATA[Teaching in the Time of Covid-19]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1606750002</created>  <gmt_created>2020-11-30 15:26:42</gmt_created>  <changed>1606750002</changed>  <gmt_changed>2020-11-30 15:26:42</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[target localization]]></publication>  <article_dateline>2020-11-30T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-11-30T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-11-30T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[http://coe.gatech.edu/news/2020/11/teaching-time-covid-19]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641459">  <title><![CDATA[Find the Right Lab for You at ML@GT’s Lab Lightning Talks]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1605828959</created>  <gmt_created>2020-11-19 23:35:59</gmt_created>  <changed>1605828959</changed>  <gmt_changed>2020-11-19 23:35:59</gmt_changed>  
<promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2020-11-19T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-11-19T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-11-19T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3pHQP8k]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641381">  <title><![CDATA[Need a Note Taker? This AI Can Help.]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A new tool that uses artificial intelligence is bringing note-taking up to speed and may help future digital assistants ease fears of ever missing a meeting again.</p><p>It&rsquo;s an age-old problem: We are inundated with informal forms of communication like phone calls, remote video conferences, and text conversations on group messaging platforms like Slack or Microsoft Teams. 
Remembering key points of each discussion can at times be overwhelming, not to mention the stress caused by missing a meeting or seeing a couple hundred messages stack up while you were out for lunch.</p><p>This digital solution, developed by Georgia Tech researchers and being presented in a paper this week at the <a href="https://2020.emnlp.org/">2020 Conference on Empirical Methods in Natural Language Processing</a>, can assuage those concerns by generating summaries of informal conversations. Using natural language processing, a field of artificial intelligence, the method identifies conversational structure using particular keywords.</p><p>&ldquo;Think about informal conversational structure: It has an opening, problem statements, discussions, a conclusion,&rdquo; said <strong>Diyi Yang</strong>, an assistant professor in the School of Interactive Computing and a co-author on the paper. &ldquo;We want to mine those structures to teach the model what may be informative within the conversation for generating better summaries.&rdquo;</p><p>Any variation of &ldquo;hello&rdquo; or &ldquo;good,&rdquo; for example, might indicate a greeting. Action words likely indicate some kind of intention, while dates or times likely signal a discussion or conclusion about plans. Knowing this, the model can better represent the unstructured conversation to craft an accurate summary.</p><p>These types of summaries are more important now than ever. More individuals all over the world are working or attending school remotely. More discussions are being handled over the phone or video conferencing, with plans being made through applications like Microsoft Teams. Previous research on the subject has focused on formal content like books, papers, or news articles, but the existing body of work on informal language is relatively sparse.</p><p>&ldquo;This is applicable now more than ever because of where we are,&rdquo; Yang said. 
&ldquo;There&rsquo;s so much online and text conversation, and we have way too much information. We need help storing it in a shorter and more structured way. If you&rsquo;re away from your laptop for 30 minutes, it&rsquo;s important to be able to get a quick summary of what you missed.&rdquo;</p><p>Challenges still exist. There are problems with reference in conversation, such as calling back to a previous discussion point later in a meeting. There are also typos, slang, repetition, interruptions, and changes in role, all of which can interfere with the model&rsquo;s ability to determine structure. These are items Yang and her collaborator are continuing to address moving forward.</p><p>&ldquo;This is a great starting point,&rdquo; Yang said.</p><p>The work is presented in the paper <a href="https://arxiv.org/pdf/2010.01672.pdf"><em>Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization</em></a>. The paper is co-authored by Yang and <strong>Jiaao Chen</strong>, a second-year Ph.D. 
student in the School of Interactive Computing.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1605632628</created>  <gmt_created>2020-11-17 17:03:48</gmt_created>  <changed>1605632628</changed>  <gmt_changed>2020-11-17 17:03:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new AI tool that summarizes unstructured conversational language could help future digital assistants ease fears of ever missing a meeting again.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new AI tool that summarizes unstructured conversational language could help future digital assistants ease fears of ever missing a meeting again.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-11-17T00:00:00-05:00</dateline>  <iso_dateline>2020-11-17T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-11-17 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>641380</item>      </media>  <hg_media>          <item>          <nid>641380</nid>          <type>image</type>          <title><![CDATA[Taking Notes]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Note taking photo.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Note%20taking%20photo.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Note%20taking%20photo.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Note%2520taking%2520photo.jpg?itok=uPR0lqn4]]></image_740>            
<image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A stack of notes on a table]]></image_alt>                    <created>1605631344</created>          <gmt_created>2020-11-17 16:42:24</gmt_created>          <changed>1605631344</changed>          <gmt_changed>2020-11-17 16:42:24</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181639"><![CDATA[cc-research; ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="641253">  <title><![CDATA[Being Polite Can Be Essential to Getting a Loan]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1605127830</created>  <gmt_created>2020-11-11 20:50:30</gmt_created>  <changed>1605127830</changed>  <gmt_changed>2020-11-11 20:50:30</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2020-11-11T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-11-11T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-11-11T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://b.gatech.edu/36pv3h2]]></article_url>  <media>      </media>  <hg_media>      
</hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641236">  <title><![CDATA[ML@GT Further Establishes Itself in Natural Language Processing Community]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1605122882</created>  <gmt_created>2020-11-11 19:28:02</gmt_created>  <changed>1605122882</changed>  <gmt_changed>2020-11-11 19:28:02</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[ML@GT Further Establishes Itself in Natural Language Processing Community]]></publication>  <article_dateline>2020-11-11T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-11-11T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-11-11T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://b.gatech.edu/37zn5UT]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      
</core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="641235">  <title><![CDATA[ML@GT Introduces Machine Learning and AI Cluster at INFORMS Conference]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1605122812</created>  <gmt_created>2020-11-11 19:26:52</gmt_created>  <changed>1605122812</changed>  <gmt_changed>2020-11-11 19:26:52</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2020-11-09T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-11-09T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-11-09T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://b.gatech.edu/36EfweL]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640990">  <title><![CDATA[Thought Leaders to Address How Bias and Lack of Diversity Impact Data, Software, and Institutions]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1604586995</created>  <gmt_created>2020-11-05 14:36:35</gmt_created>  <changed>1604586995</changed>  <gmt_changed>2020-11-05 14:36:35</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Thought Leaders to Address How Bias and Lack of Diversity Impact Data, Software, and Institutions]]></publication>  
<article_dateline>2020-11-05T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-11-05T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-11-05T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/34PIcQL]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640964">  <title><![CDATA[Teaching AI agents to communicate and act in fantasy worlds]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1604517993</created>  <gmt_created>2020-11-04 19:26:33</gmt_created>  <changed>1604517993</changed>  <gmt_changed>2020-11-04 19:26:33</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[bioelectromagnetics]]></publication>  <article_dateline>2020-11-04T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-11-04T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-11-04T00:00:00-05:00</gmt_article_dateline>  
<article_url><![CDATA[https://techxplore.com/news/2020-11-ai-agents-fantasy-worlds.html]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640901">  <title><![CDATA[Meet ML@GT: Zhanzhan Zhou Uses Machine Learning to End Residential Segregation in Atlanta]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1604414681</created>  <gmt_created>2020-11-03 14:44:41</gmt_created>  <changed>1604414681</changed>  <gmt_changed>2020-11-03 14:44:41</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Meet ML@GT: Zhanzhan Zhou Uses Machine Learning to End Residential Segregation in Atlanta]]></publication>  <article_dateline>2020-11-03T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-11-03T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-11-03T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/34Q8X7N]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640900">  <title><![CDATA[ML@GT Introduces Machine Learning and AI Cluster at INFORMS Conference]]></title> 
 <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1604414583</created>  <gmt_created>2020-11-03 14:43:03</gmt_created>  <changed>1604414583</changed>  <gmt_changed>2020-11-03 14:43:03</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2020-11-02T00:00:00-05:00</article_dateline>  <iso_article_dateline>2020-11-02T00:00:00-05:00</iso_article_dateline>  <gmt_article_dateline>2020-11-02T00:00:00-05:00</gmt_article_dateline>  <article_url><![CDATA[https://b.gatech.edu/36EfweL]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640793">  <title><![CDATA[Georgia Tech Researchers Contribute 13 Papers to Premier Visualization Conference]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech contributed to 13 papers and two workshops this week at <a href="http://ieeevis.org/year/2020/welcome">IEEE VIS 2020</a>, the premier forum for advances in theory, methods, and applications of visualization and visual analytics.</p><p>The conference highlights research from universities, government, and industry around the world. It comprises three separate events: IEEE Visual Analytics Science and Technology (VAST), IEEE Information Visualization (InfoVis), and IEEE Scientific Visualization (SciVis). Like other conferences throughout the Covid-19 pandemic, VIS was held virtually.</p><p>Georgia Tech&rsquo;s research was highlighted by one Best Paper Honorable Mention titled <em>Mapping Researchers with PeopleMap</em>. 
The paper &ndash; authored by <strong>Jon Saad-Falcon</strong>, <strong>Omar Shaikh</strong>, <strong>Zijie J. Wang</strong>, <strong>Austin P. Wright</strong>, <strong>Sasha Richardson</strong>, and <strong>Polo Chau</strong> &ndash; presents an open-source interactive tool that uses natural language processing to create visual maps for researchers based on their research interests and publications.</p><p>&ldquo;Discovering research expertise at universities can be a difficult task,&rdquo; the paper contends. &ldquo;Directories routinely become outdated, and few help in visually summarizing researchers&rsquo; work or supporting the exploration of shared interests among researchers. This results in lost opportunities for both internal and external entities to discover new connections, nurture research collaboration, and explore the diversity of research.&rdquo;</p><p>The paper also received a VAST Poster Research Award.</p><p>Also of note, new School of Computational Science &amp; Engineering Chair <strong>Haesun Park</strong> received recognition for a 2010 IEEE VAST Paper. The paper received a Test of Time Award, recognizing it for continued contributions to the visual analytics and visualization community. The paper is titled <em>iVisClassifier: An Interactive Visual Analytics System for Classification Based on Supervised Dimension Reduction</em> and co-authored by <strong>Jaegul Choo</strong>, <strong>Hanseung Lee</strong>, and <strong>Jaeyeon Kihm</strong>.</p><p>School of Interactive Computing Ph.D. 
student <strong>Emily Wall</strong>, who is advised by Associate Professor <strong>Alex Endert</strong>, was also recognized with the VGTC Outstanding Dissertation Honorable Mention for her work <em>Detecting and Mitigating Human Bias in Visual Analytics</em>.</p><p>&ldquo;People are susceptible to a multitude of biases, including perceptual biases and illusions; cognitive biases like confirmation bias or anchoring bias; and social biases like racial or gender bias that are borne of cultural experiences and stereotypes,&rdquo; Wall contends. &ldquo;As humans are an integral part of data analysis and decision making in many domains, their biases can be injected into and even amplified by models and algorithms.&rdquo;</p><p>Her work aims to develop a better understanding of the role human bias plays in visual data analysis by defining bias, detecting bias, and mitigating bias.</p><p>Explore more about Georgia Tech&rsquo;s contributions to IEEE VIS at the links below, or visit the <a href="http://vis.gatech.edu/">Georgia Tech Visualization Lab</a>. 
You can follow the lab on Twitter at <a href="https://twitter.com/GT_Vis">@GT_Vis</a>.</p><p><strong>Georgia Tech at IEEE VIS 2020</strong></p><p><strong>Papers</strong></p><ul><li><a href="https://arxiv.org/abs/2007.15832">SafetyLens: Visual Data Analysis of Functional Safety of Vehicles (Arpit Narechania, Ahsan Qamar, and Alex Endert)</a></li><li><a href="https://nl4dv.github.io/nl4dv/">NL4DV: A Toolkit for Generating Analytic Specifications for Data Visualization from Natural Language Queries (Arpit Narechania, Arjun Srinivasan, and John Stasko)</a></li><li><a href="https://arjun010.github.io/individual-projects/databreeze.html">Interweaving Multimodal Interaction with Flexible Unit Visualizations for Data Exploration (Arjun Srinivasan, Bongshin Lee, and John Stasko)</a></li><li><a href="https://terrancelaw.github.io/publications/data_insight_interviews_vis20.pdf">What are Data Insights to Professional Visualization Users? (Po-Ming Law, Alex Endert, and John Stasko)</a></li><li><a href="https://terrancelaw.github.io/publications/auto_insights_vis20.pdf">Characterizing Automated Data Insights (Po-Ming Law, Alex Endert, and John Stasko)</a></li><li><a href="https://arxiv.org/abs/2004.15004">CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization (Zijie J. Wang, Robert Turko, Omar Shaikh, Haekyu Park, Nilaksh Das, Fred Hohman, Minsuk Kahng, Duen Horng (Polo) Chau)</a></li><li><a href="https://arxiv.org/abs/2009.02608">Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks (Nilaksh Das, Haekyu Park, Zijie J. Wang, Fred Hohman, Robert Firstman, Emily Rogers, Duen Horng (Polo) Chau)</a></li><li><a href="https://poloclub.github.io/papers/20-vis-ganlabeval.pdf">How Does Visualization Help People Learn Deep Learning? 
Evaluating GAN Lab with Observational Study and Log Analysis (Minsuk Kahng, Duen Horng (Polo) Chau)</a></li><li><a href="https://arxiv.org/abs/2009.00091">Mapping Researchers with PeopleMap (Jon Saad-Falcon, Omar Shaikh, Zijie J. Wang, Austin P. Wright, Sasha Richardson, Duen Horng (Polo) Chau)</a></li><li><a href="https://gtvalab.github.io/files/legion.pdf">LEGION: Visually compare modeling techniques for regression (Subhajit Das, Alex Endert)</a></li><li><a href="https://gtvalab.github.io/files/cava_dataaug.pdf">CAVA: A Visual Analytics System for Exploratory Columnar Data Augmentation Using Knowledge Graphs (Dylan Cashman, Shenyu Xu, Subhajit Das, Florian Heimerl, Cong Liu, Shah Rukh Humayoun, Michael Gleicher, Alex Endert, Remco Chang)</a></li><li>A Comparative Analysis of Industry Human-AI Interaction Guidelines (Austin P. Wright, Zijie J. Wang, Haekyu Park, Grace Guo, Fabian Sperrle, Mennatallah El-Assady, Alex Endert, Daniel Keim, Duen Horng (Polo) Chau)</li><li><a href="https://trexvis.github.io/Workshop2020/papers/Coscia.pdf">Toward A Bias-Aware Future for Mixed Initiative Visual Analytics (Adam Coscia, Duen Horng (Polo) Chau, Alex Endert)</a></li></ul><p><strong>Recognitions</strong></p><ul><li><a href="https://www.cc.gatech.edu/~hpark/papers/choo_vast10_v1.pdf">iVisClassifier: an Interactive Visual Analytics System for Classification Based on Supervised Dimension Reduction (Jaegul Choo, Hanseung Lee, Jaeyeon Kihm and Haesun Park)</a></li><li><a href="https://smartech.gatech.edu/handle/1853/63597">Detecting and Mitigating Human Bias in Visual Analytics (Emily Wall (Advisor: Alex Endert))</a></li></ul><p><strong>Workshops</strong></p><ul><li>MoVIS &#39;20 (Organizers: Clio Andris, Somayeh Dodge, Alan MacEachren)</li><li>VISxAI &#39;20 (Organizers: Adam Perer, Duen Horng (Polo) Chau, Fred Hohman, Hendrik Strobelt, Mennatallah El-Assady)</li></ul>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1604032917</created>  
<gmt_created>2020-10-30 04:41:57</gmt_created>  <changed>1604032917</changed>  <gmt_changed>2020-10-30 04:41:57</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[IEEE VIS highlights research from universities, government, and industry around the world.]]></teaser>  <type>news</type>  <sentence><![CDATA[IEEE VIS highlights research from universities, government, and industry around the world.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-10-30T00:00:00-04:00</dateline>  <iso_dateline>2020-10-30T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-10-30 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>640792</item>      </media>  <hg_media>          <item>          <nid>640792</nid>          <type>image</type>          <title><![CDATA[Georgia Tech at IEEE VIS 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2020-10-30 at 12.34.13 AM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202020-10-30%20at%2012.34.13%20AM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202020-10-30%20at%2012.34.13%20AM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202020-10-30%2520at%252012.34.13%2520AM.png?itok=cP3BBnmU]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Georgia Tech at IEEE VIS 2020]]></image_alt>                    
<created>1604032582</created>          <gmt_created>2020-10-30 04:36:22</gmt_created>          <changed>1604032582</changed>          <gmt_changed>2020-10-30 04:36:22</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="186124"><![CDATA[cc-research; ic-ai-ml; ic-hcc; ic-social-computing; ic-visualization]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="640480">  <title><![CDATA[People want data privacy but don’t always know what they’re getting]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1603380433</created>  <gmt_created>2020-10-22 15:27:13</gmt_created>  <changed>1603380433</changed>  <gmt_changed>2020-10-22 15:27:13</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[extreme temperatures]]></publication>  <article_dateline>2020-10-22T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-22T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-22T00:00:00-04:00</gmt_article_dateline>  
<article_url><![CDATA[https://theconversation.com/people-want-data-privacy-but-dont-always-know-what-theyre-getting-143782]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640478">  <title><![CDATA[Facebook AI’s co-teaching program with ML@GT to increase pathways into AI for diverse candidates]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1603379982</created>  <gmt_created>2020-10-22 15:19:42</gmt_created>  <changed>1603380145</changed>  <gmt_changed>2020-10-22 15:22:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[global career]]></publication>  <article_dateline>2020-10-22T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-22T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-22T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2HpEvbx]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640477">  <title><![CDATA[Facebook&#039;s AI team expands post-grad courses for 
Black and Latinx students]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1603379895</created>  <gmt_created>2020-10-22 15:18:15</gmt_created>  <changed>1603379895</changed>  <gmt_changed>2020-10-22 15:18:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[public library]]></publication>  <article_dateline>2020-10-22T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-22T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-22T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.engadget.com/facebook-ai-diversity-online-learning-course-120044903.html]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640368">  <title><![CDATA[Student Selected for Competitive Workshop for Females in Engineering and Computer Science]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1603139532</created>  <gmt_created>2020-10-19 20:32:12</gmt_created>  <changed>1603139532</changed>  <gmt_changed>2020-10-19 20:32:12</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2020-10-19T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-19T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-19T00:00:00-04:00</gmt_article_dateline>  
<article_url><![CDATA[https://bit.ly/3kpRBnL]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640228">  <title><![CDATA[Learning Machines: Neurocomputing Explained by Chris Rozell]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1602766750</created>  <gmt_created>2020-10-15 12:59:10</gmt_created>  <changed>1602766750</changed>  <gmt_changed>2020-10-15 12:59:10</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Learning Machines: Neurocomputing Explained by Chris Rozell]]></publication>  <article_dateline>2020-10-15T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-15T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-15T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3dvDDOm]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="138"><![CDATA[Biotechnology, Health, Bioengineering, Genetics]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      
</news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640199">  <title><![CDATA[Ivan Allen College of Liberal Arts and the College of Computing Launch New Ethics Center]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Building on years of experience in research and education in ethics and technology, the College of Computing and the Ivan Allen College of Liberal Arts have launched the Ethics, Technology, and Human Interaction Center (ETHIC<sup>x</sup>).</p><p>The new Center &mdash; pronounced &ldquo;ethics&rdquo; &mdash; will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology in collaboration with communities, government, non-governmental organizations, and industry. The office of the Executive Vice President for Research provided significant funds over a three-year period to seed the Center.</p><p>&ldquo;We must foster Georgia Tech&rsquo;s strengths in ethics, responsible research, and the development of emerging technologies in collaborative ways,&rdquo; said Raheem Beyah, Georgia Tech&rsquo;s vice president for interdisciplinary research. &ldquo;ETHIC<sup>x</sup> will provide the necessary environment to support this work and Georgia Tech&rsquo;s mission to advance technology and improve the human condition.&rdquo;</p><p>The Colleges already have in-depth research and education experience addressing technology-related ethics questions. For instance, the School of Public Policy founded the Center for Ethics and Technology more than 12 years ago to foster a culture of critical inquiry and deliberation about technology-related ethical issues. 
Faculty in that Center study ethical issues across several threads: the design of emerging contact-tracing technologies; design ethics, social justice theory, and criticism broadly, and their relationship to emerging technologies such as smart cities, self-driving cars, and smart assistants; and platforms for fostering reflection and self-correcting reasoning in teaching and deliberation. The College of Computing also has created thriving research and educational initiatives such as the Ethical AI professional development course and the Law, Policy, and Ethics Initiative for Machine Learning @ GATECH.</p><p>The new Center will build on those strengths and position the Georgia Institute of Technology to become the leader in framing ethical concerns in technology, including fairness, accountability, transparency, social justice, and technological change.</p><h2>Anticipating New Ethical Challenges</h2><p>&ldquo;ETHIC<sup>x</sup> will be a place for robust, multidisciplinary research and a place to engage in systematic ethical analyses,&rdquo; said Kaye Husbands Fealing, dean of the Ivan Allen College of Liberal Arts and co-director of the new Center. &ldquo;It also will be a place for communities, corporations, governments, technologists, educators, and others to discuss and find solutions to complex ethical issues in science and technology.&rdquo;</p><p>The Center will conduct research in ethics and emerging technologies, framing ethical questions and pursuing solutions in ethics and technology, social justice, and equity. Interdisciplinary and community-based research also will be emphasized.</p><p>Educational initiatives will include investigating and designing curricula for ethics training that can be woven throughout students&rsquo; educational journeys and for employees at affiliated companies.</p><p>&ldquo;Responsibility is a core value of everything we do in the College of Computing at Georgia Tech. 
That means focusing on our communities and examining the impacts, both positive and negative, of our research and curricula,&rdquo; said Charles Isbell, dean and John P. Imlay, Jr. chair of the College of Computing. &ldquo;It means reaching across disciplines to collaborate with experts in other fields&nbsp;who&nbsp;can inform our own technological developments. We find solutions for tomorrow&rsquo;s problems, which means we have to anticipate the new ethical challenges we will face. This Center will help us do that.&rdquo;</p><h2>New Center Builds on Deep Experience</h2><p>Ayanna Howard, chair in the School of Interactive Computing, joins Husbands Fealing as co-director of the new Center.</p><p>&ldquo;In the School of Interactive Computing, we encourage all of our faculty and student researchers to think critically about the new challenges their research presents and offer strategies to mitigate any potential negative impact on society,&rdquo; Howard said. &ldquo;Good innovation isn&rsquo;t just about developing new technologies; it&rsquo;s about developing solutions to problems that can make the world a better, more equitable, and more inclusive place.&rdquo;</p><p>Georgia Tech launched the School of Interactive Computing in anticipation of the need for interdisciplinary research in computer science, liberal arts, and more. 
Faculty members examine diverse ethical challenges, including misinformation, content moderation, free speech on social platforms, data privacy and security, virtual reality, wearable computing devices, and robo-ethics.</p><p>Faculty and students throughout the Ivan Allen College of Liberal Arts engage in interdisciplinary research collaborations on ethics and emerging technologies, including in areas such as engineering, the environment, bioethics, responsible innovation, research ethics, the ethical and political dimensions of design and technology, and more.</p><p>&ldquo;In the Ivan Allen College, careful consideration of the impacts of technology on people, and of people on technology, is a central part of our curriculum and values,&rdquo; said Justin Biddle, an associate professor in the School of Public Policy, director of the Center for Ethics and Technology, and a member of the new Center&rsquo;s leadership team. &ldquo;With innovation today often outpacing our ability to understand its consequences, and widespread questions regarding the relations between technology, equity, and social justice, this kind of thinking is more important than ever.&rdquo;</p><p>Faculty in both Colleges also have initiated discussions on the social and ethical implications of emerging technologies across campus and beyond. These include the <a href="https://ethics.gatech.edu/techdebates"><em>TechDebates on Emerging Technologies</em></a>, the <a href="https://ethics.gatech.edu/sparks-forum">Sparks Forum on Ethics and Engineering</a>, the Machine Learning@GT Seminar Series, and the <a href="http://techfutures.lmc.gatech.edu/">Ethics and Technological Futures</a> series developed by Nassim Parvin and Susana Morris in the <a href="https://lmc.gatech.edu">School of Literature, Media, and Communication</a>. 
Ellen Zegura, a professor in the School of Computer Science, also leads a Mozilla grant aimed at embedding ethics in computer science classes through role play.</p><h2>&#39;Where the Best of Sciences and Humanities Meet&#39;</h2><p>Deven Desai, associate professor and area coordinator for Law and Ethics at Scheller College of Business, also will assume a key leadership role at ETHIC<sup>x</sup>. He said the new Center will &ldquo;build and deepen technology-related ethics scholarship and research across Georgia Tech.</p><p>&ldquo;Scheller College&rsquo;s focus on law and ethics is part of how we train future business leaders, the people who take innovation and bring it to market,&rdquo; said Desai, who is also associate director for Law, Policy, and Ethics for Machine Learning at Georgia Tech (ML@GATECH), an interdisciplinary research center.</p><p>&ldquo;ETHIC<sup>x</sup> will be a place where the best of science and humanities meet to challenge and push to find the unasked, important questions. In that friction and fun, the best questions about the problems we face and the best answers about how to solve them so that everyone can benefit will come out,&rdquo; he said.</p><p>Other members of the new Center&rsquo;s key leadership team include Jason Borenstein, director of graduate research ethics programs in the School of Public Policy; Betsy DiSalvo, director of the human-centered computing Ph.D. program and associate professor in the School of Interactive Computing; Michael Hoffmann, a professor in the School of Public Policy; and Nassim Parvin, an associate professor in the School of Literature, Media, and Communication.</p><p>A launch event is planned for November, during Ethics Awareness Week, with a forum to identify key challenges in technology ethics. 
The Center will soon announce details.</p><p>For more information about ETHIC<sup>x</sup>, contact Husbands Fealing at <a href="mailto:dean@gatech.edu">dean@gatech.edu</a> or Howard at <a href="mailto:ah260@gatech.edu">ah260@gatech.edu</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1602687660</created>  <gmt_created>2020-10-14 15:01:00</gmt_created>  <changed>1602695752</changed>  <gmt_changed>2020-10-14 17:15:52</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The new Center will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology.]]></teaser>  <type>news</type>  <sentence><![CDATA[The new Center will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology.]]></sentence>  <summary><![CDATA[<p>The new Center will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology in collaboration with communities, government, non-governmental organizations.</p>]]></summary>  <dateline>2020-10-13T00:00:00-04:00</dateline>  <iso_dateline>2020-10-13T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-10-13 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[michael.pearson@iac.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Michael Pearson<br />michael.pearson@iac.gatech.edu</p><p>David Mitchell<br />david.mitchell@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>640176</item>      </media>  <hg_media>          <item>          <nid>640176</nid>          <type>image</type>          <title><![CDATA[ETHICx Center graphic]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ETHICx graphic.jpg]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/ETHICx%20graphic.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ETHICx%20graphic.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ETHICx%2520graphic.jpg?itok=KmTJTtUm]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1602623629</created>          <gmt_created>2020-10-13 21:13:49</gmt_created>          <changed>1602623629</changed>          <gmt_changed>2020-10-13 21:13:49</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="186032"><![CDATA[ETHICx]]></keyword>          <keyword tid="186033"><![CDATA[Ethics Technology and Human Interaction Center]]></keyword>          <keyword tid="1616"><![CDATA[Ivan Allen College of Liberal Arts]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39511"><![CDATA[Public Service, Leadership, and Policy]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71871"><![CDATA[Campus and Community]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="640162">  <title><![CDATA[New Podcast and Video Series Seeks to Highlight AI Researchers&#039; Stories Over Stats]]></title>  <uid>34773</uid>  
<summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1602604383</created>  <gmt_created>2020-10-13 15:53:03</gmt_created>  <changed>1602604383</changed>  <gmt_changed>2020-10-13 15:53:03</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[New Podcast and Video Series Seeks to Highlight AI Researchers&#039; Stories Over Stats]]></publication>  <article_dateline>2020-10-13T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-13T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-13T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/34Od7vE]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="133"><![CDATA[Special Events and Guest Speakers]]></category>          <category tid="134"><![CDATA[Student and Faculty]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640155">  <title><![CDATA[ACM/IEEE Recognizes Chair&#039;s Service to Computer Science Community]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1602594694</created>  <gmt_created>2020-10-13 13:11:34</gmt_created>  <changed>1602594694</changed>  <gmt_changed>2020-10-13 13:11:34</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  
<teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[ACM/IEEE Recognizes Chair&#039;s Service to Computer Science Community]]></publication>  <article_dateline>2020-10-13T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-13T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-13T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://b.gatech.edu/34WG1tL]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="640077">  <title><![CDATA[Google Awards ML@GT Student for Outstanding Machine Learning Research]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1602273054</created>  <gmt_created>2020-10-09 19:50:54</gmt_created>  <changed>1602273054</changed>  <gmt_changed>2020-10-09 19:50:54</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Google Awards ML@GT Student for Outstanding Machine Learning Research]]></publication>  <article_dateline>2020-10-09T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-09T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-09T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3nJUYb8]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>  
    </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639954">  <title><![CDATA[NYT R&amp;D Team to Discuss Technology’s Impact on Journalism in Live, Virtual Event]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1602077994</created>  <gmt_created>2020-10-07 13:39:54</gmt_created>  <changed>1602078018</changed>  <gmt_changed>2020-10-07 13:40:18</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[NYT R&amp;D Team to Discuss Technology’s Impact on Journalism in Live, Virtual Event]]></publication>  <article_dateline>2020-10-07T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-07T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-07T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3iEyVyD]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="133"><![CDATA[Special Events and Guest Speakers]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639861"> 
 <title><![CDATA[Rebuild Supply Chains for Quicker Hurricane Recovery]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1601663259</created>  <gmt_created>2020-10-02 18:27:39</gmt_created>  <changed>1601663259</changed>  <gmt_changed>2020-10-02 18:27:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Robert Louis Stevenson]]></publication>  <article_dateline>2020-10-02T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-02T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-02T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.isye.gatech.edu/news/rebuild-supply-chains-quicker-hurricane-recovery]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639791">  <title><![CDATA[Meet ML@GT: Andrew Sedler on Improving Neuroprosthetic Devices]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1601559763</created>  <gmt_created>2020-10-01 13:42:43</gmt_created>  <changed>1601559763</changed>  <gmt_changed>2020-10-01 13:42:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2020-10-01T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-10-01T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-10-01T00:00:00-04:00</gmt_article_dateline>  
<article_url><![CDATA[https://bit.ly/3cMp0pu]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639567">  <title><![CDATA[Q&amp;A with Dr. Christopher Rozell]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1601050024</created>  <gmt_created>2020-09-25 16:07:04</gmt_created>  <changed>1601050024</changed>  <gmt_changed>2020-09-25 16:07:04</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[stress reduction]]></publication>  <article_dateline>2020-09-25T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-09-25T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-09-25T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://brain.ieee.org/podcast/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639520">  <title><![CDATA[Finally, A Site that Crops Headshots Instantly (Without Sharing Your Photos)]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1600966279</created>  <gmt_created>2020-09-24 16:51:19</gmt_created>  
<changed>1600966279</changed>  <gmt_changed>2020-09-24 16:51:19</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Finally, A Site that Crops Headshots Instantly (Without Sharing Your Photos)]]></publication>  <article_dateline>2020-09-24T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-09-24T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-09-24T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/639225/finally-site-crops-headshots-instantly-without-sharing-your-photos]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>          <category tid="8862"><![CDATA[Student Research]]></category>          <category tid="135"><![CDATA[Research]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639508">  <title><![CDATA[HMS Researchers Develop New Tool for Early Detection of Local-Level COVID-19 Outbreaks]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1600951589</created>  <gmt_created>2020-09-24 12:46:29</gmt_created>  <changed>1600951589</changed>  <gmt_changed>2020-09-24 12:46:29</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Deep Neural Object Detection]]></publication>  <article_dateline>2020-09-24T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-09-24T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-09-24T00:00:00-04:00</gmt_article_dateline>  
<article_url><![CDATA[https://www.thecrimson.com/article/2020/9/24/covid-outbreak-detection-tool/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>          <category tid="135"><![CDATA[Research]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639402">  <title><![CDATA[Diversity in AI: The Invisible Men and Women]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[<p>In a guest column for MIT Sloan Management Review, Constellations Executive Director Charles Isbell and School of Interactive Computing Chair Ayanna Howard address AI&#39;s invisibility problem, why it matters, and what we can do about it moving forward.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1600805032</created>  <gmt_created>2020-09-22 20:03:52</gmt_created>  <changed>1600805032</changed>  <gmt_changed>2020-09-22 20:03:52</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[armed conflict]]></publication>  <article_dateline>2020-09-22T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-09-22T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-09-22T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://sloanreview.mit.edu/article/diversity-in-ai-the-invisible-men-and-women/?utm_source=twitter&amp;utm_medium=social&amp;utm_campaign=sm-direct]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="606703"><![CDATA[Constellations Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>         
 <keyword tid="181314"><![CDATA[constellations-external]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639299">  <title><![CDATA[New Tool Can Detect COVID-19 Outbreaks in U.S. Counties]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1600456511</created>  <gmt_created>2020-09-18 19:15:11</gmt_created>  <changed>1600456511</changed>  <gmt_changed>2020-09-18 19:15:11</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Robert Louis Stevenson]]></publication>  <article_dateline>2020-09-18T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-09-18T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-09-18T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.isye.gatech.edu/news/new-tool-can-detect-covid-19-outbreaks-us-counties]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639233">  <title><![CDATA[Tao Wins Best Paper Award at Artificial Intelligence and Statistics Conference]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1600370109</created>  <gmt_created>2020-09-17 19:15:09</gmt_created>  <changed>1600370109</changed>  <gmt_changed>2020-09-17 19:15:09</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  
<publication><![CDATA[security dynamics]]></publication>  <article_dateline>2020-09-17T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-09-17T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-09-17T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/32BQ3Aq]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639151">  <title><![CDATA[Siva Theja Maguluri Appointed to Fouts Family Early Career Professorship]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1600264379</created>  <gmt_created>2020-09-16 13:52:59</gmt_created>  <changed>1600264379</changed>  <gmt_changed>2020-09-16 13:52:59</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Robert Louis Stevenson]]></publication>  <article_dateline>2020-09-16T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-09-16T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-09-16T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.isye.gatech.edu/news/siva-theja-maguluri-appointed-fouts-family-early-career-professorship]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="639092">  
<title><![CDATA[Georgia Tech Receives Google Grant to Study Impact of Pandemic Information Seeking on Vulnerable Populations]]></title>  <uid>33939</uid>  <body><![CDATA[<p><a href="http://gatech.edu">Georgia Tech</a> will receive $155,000 from <a href="https://ai.google/social-good/">Google&rsquo;s Covid-19 AI for Social Good</a> program to investigate patterns and impact of pandemic information-seeking amongst vulnerable populations, such as older adults, low-income households, and Black and Hispanic adults. These populations have experienced disproportionately high rates of Covid-19-related death, severe sickness, and life disruptions like job loss.</p><p>Factors like higher rates of underlying health problems, reduced access to health care, and structural inequities shape access to critical resources. These same populations, however, also often have less access to the types of online information designed to improve health outcomes.</p><p>This project, led by principal investigator <strong>Andrea Grimes Parker</strong>, an associate professor in the <a href="http://ic.gatech.edu">School of Interactive Computing</a>&nbsp;and member of the <a href="http://ipat.gatech.edu">Institute for People and Technology</a>, will investigate how vulnerable and marginalized populations use technology for information seeking during the Covid-19 pandemic, as well as the impact of information exposure on their psychological wellbeing over time.</p><p>&ldquo;The Covid-19 pandemic has brought further attention to systemic disparities in health that have long existed in the United States,&rdquo; Parker said. 
&ldquo;Within a public health crisis, the information that people are exposed to has huge implications for how attitudes around the pandemic are shaped, how people respond, and thus the course of the pandemic.</p><p>&ldquo;Our work will provide both qualitative and quantitative evidence of the particular ways in which Covid-19 information exposure is tied to outcomes such as mental health in those most vulnerable to Covid-19 mortality and morbidity.&rdquo;</p><p>Researchers will examine this information exposure over time. Their&nbsp;findings will help to shape recommendations for crisis information communication, particularly online, in the future. This work builds upon existing work by Parker and collaborators at Northeastern University.</p><p>Parker and colleagues Professor <strong>Miso Kim</strong> and Dr. <strong>Jacqueline Griffin</strong> began their collaboration by investigating how well crisis apps &ndash; mobile apps designed to provide help during emergency situations &ndash; support older adults. This work was published at the 2020 ACM Conference on Human Factors in Computing Systems.</p><p>When the pandemic began, they expanded their focus to additional groups vulnerable to poor health, such as low-income and racial and ethnic minority populations. The team, in collaboration with Professor <strong>Stacy Marsella</strong>, also expanded its focus beyond crisis apps, designing a survey to investigate information-seeking practices in vulnerable populations amidst the pandemic.</p><p>This survey has been distributed to over 600 individuals in Massachusetts and Georgia to date. 
Parker&rsquo;s new Google funding will enable the team to iterate on and expand the dissemination of this survey, conduct longitudinal analyses, and complement the quantitative analysis with a qualitative component that will help unpack the nuances behind information-seeking practices and resulting Covid-19 attitudes, behaviors, and mental health outcomes.</p><p>This funding is part of Google.org&rsquo;s $100 million commitment to Covid-19 relief efforts.&nbsp;Organizations receiving funds were selected through a competitive review. Funding focus areas include health equity, disease spread monitoring and forecasting, frontline health worker support, secondary public health effects, and privacy-preserving contact tracing efforts.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1600112797</created>  <gmt_created>2020-09-14 19:46:37</gmt_created>  <changed>1600112797</changed>  <gmt_changed>2020-09-14 19:46:37</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Populations including older adults, low-income households, and Black and Hispanic adults have disproportionately high fatality rates, as well as less access to critical pandemic information.]]></teaser>  <type>news</type>  <sentence><![CDATA[Populations including older adults, low-income households, and Black and Hispanic adults have disproportionately high fatality rates, as well as less access to critical pandemic information.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-09-14T00:00:00-04:00</dateline>  <iso_dateline>2020-09-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-09-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  
<boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>639090</item>      </media>  <hg_media>          <item>          <nid>639090</nid>          <type>image</type>          <title><![CDATA[Covid-19 Google Grant]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[coronavirus-4981906_1920.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/coronavirus-4981906_1920.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/coronavirus-4981906_1920.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/coronavirus-4981906_1920.jpg?itok=u64mGjcj]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Two women wearing masks during Covid-19 pandemic]]></image_alt>                    <created>1600112099</created>          <gmt_created>2020-09-14 19:34:59</gmt_created>          <changed>1600112099</changed>          <gmt_changed>2020-09-14 19:34:59</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="184821"><![CDATA[cc-research; ic-hcc; ic-ai-ml; COVID-19]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="639077">  
<title><![CDATA[Georgia Tech Part of $5 Million Grant to Develop AI Tech Supporting Individuals With Autism Spectrum Disorder in the Workplace]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The <a href="http://nsf.gov">National Science Foundation</a> has awarded a $5 million grant to a multi-university team of researchers that includes <a href="http://gatech.edu">Georgia Tech</a> to create novel artificial intelligence technology that trains and supports individuals with Autism Spectrum Disorder (ASD) in the workplace.</p><p>The investment follows a successful $1 million, nine-month pilot grant to the same team, which also includes Yale University, Cornell University, Vanderbilt University, and the Vanderbilt University Medical Center. Georgia Tech&rsquo;s portion of the grant is $500,000.</p><p>Led by co-principal investigator Professor <strong>Jim Rehg</strong> of the <a href="http://ic.gatech.edu">School of Interactive Computing</a>, Georgia Tech will develop methods for assessing nonverbal communication behaviors during face-to-face social interactions such as job interviews.</p><p>&ldquo;Our innovative approach uses an unobtrusive wearable camera to record social behaviors, which are then analyzed using computer vision and deep learning models,&rdquo; Rehg said. &ldquo;Our automated analysis will allow job seekers to get feedback on their communication skills as part of our team&rsquo;s integrated approach to job interview coaching.&rdquo;</p><p>The project, which is part of the NSF&rsquo;s <a href="https://www.nsf.gov/od/oia/convergence-accelerator/">Convergence Accelerator</a> program, addresses an underutilized U.S. 
talent pool that poses a &ldquo;critical but overlooked public health and economic challenge: how to include individuals with ASD&rdquo; in the workforce, according to Vanderbilt Professor <strong>Nilanjan Sarkar</strong>, who is leading the project team.</p><p>Consider:</p><ul><li>One in 54 people in the United States has ASD.</li><li>Each year, 70,000 young adults with ASD leave high school and face grim employment prospects.</li><li>More than 8 in 10 adults with ASD are either unemployed or underemployed, a significantly higher rate than adults with other developmental disabilities.</li><li>The estimated lifetime cost of supporting an individual with ASD and limited employment prospects is $3.2 million.</li><li>The total estimated cost of caring for Americans with ASD was $268 billion in 2015 and is projected to grow to $461 billion in 2025.</li><li>An estimated $50,000 per person per year could be contributed back into society when individuals with ASD are employed.</li></ul><p>&ldquo;We want to harness the power of AI, stakeholder engagement and convergent research to include neurodiverse individuals in the 21<sup>st</sup> century workforce,&rdquo; Sarkar said. &ldquo;We feel that there is a big opportunity to turn great societal cost into great societal value.&rdquo;</p><p>For this project, organizational, clinical, and implementation experts are integrated with engineering teams to pave the way for real-world impact. 
The multi-university, multi-disciplinary team already has commitments from major employers to license some of the technology and tools developed.</p><p>Researchers will address three themes:</p><ul><li>Individualized assessment of unique abilities and appropriate job-matching</li><li>Tailored understanding and ongoing support related to social communication and interaction challenges</li><li>Tools to support job candidates, employees, and employers</li></ul><p>Already, notable private-sector companies that employ people with ASD have committed to using at least one of the technologies developed under this program: Auticon, The Precisionists, Ernst &amp; Young, and SAP among them.</p><p>Two other companies, Floreo and Tipping Point Media, will make their existing VR modules available for adaptation to the program. Microsoft, which has a long-standing interest in hiring people with ASD, is involved as well and provided seed funding and access to cloud services for technology integration.</p><p>The five technologies can be used separately or as an integrated system, and the work has broader potential beyond ASD to expand employment access. In the U.S. 
alone, an estimated 50 million people have ASD, attention-deficit/hyperactivity disorder, a learning disability, or other neurodiverse conditions.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1600105921</created>  <gmt_created>2020-09-14 17:52:01</gmt_created>  <changed>1600105921</changed>  <gmt_changed>2020-09-14 17:52:01</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech will develop methods for assessing nonverbal communication behaviors during face-to-face social interactions such as job interviews.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech will develop methods for assessing nonverbal communication behaviors during face-to-face social interactions such as job interviews.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-09-14T00:00:00-04:00</dateline>  <iso_dateline>2020-09-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-09-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>590844</item>      </media>  <hg_media>          <item>          <nid>590844</nid>          <type>image</type>          <title><![CDATA[Child Study Lab Autism Research]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Autism5.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Autism5.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Autism5.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Autism5.jpg?itok=JYiOtuat]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Lab coordinator Audrey Southerland, along with undergraduate assistants, leads data collection at the Child Study Lab.]]></image_alt>                    <created>1493061979</created>          <gmt_created>2017-04-24 19:26:19</gmt_created>          <changed>1493061979</changed>          <gmt_changed>2017-04-24 19:26:19</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182941"><![CDATA[cc-research; ic-cybersecurity; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="638833">  <title><![CDATA[Learning Machines: Natural Language Processing Explained with Diyi Yang]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1599579891</created>  <gmt_created>2020-09-08 15:44:51</gmt_created>  <changed>1599579891</changed>  <gmt_changed>2020-09-08 15:44:51</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Learning Machines: Natural Language Processing Explained 
with Diyi Yang]]></publication>  <article_dateline>2020-09-08T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-09-08T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-09-08T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2Fi9uow]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="638801">  <title><![CDATA[Better, Faster, and Less Biased Machine Learning]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1599243609</created>  <gmt_created>2020-09-04 18:20:09</gmt_created>  <changed>1599243609</changed>  <gmt_changed>2020-09-04 18:20:09</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[John Wallingford]]></publication>  <article_dateline>2020-09-04T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-09-04T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-09-04T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.me.gatech.edu/Better-Faster-and-Less-Biased-Machine-Learning]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    
<userdata><![CDATA[]]></userdata></node><node id="638748">  <title><![CDATA[Meet ML@GT: Yuan Yang Looks to Find the ‘Right’ Problem to Solve]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1599142200</created>  <gmt_created>2020-09-03 14:10:00</gmt_created>  <changed>1599142200</changed>  <gmt_changed>2020-09-03 14:10:00</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Meet ML@GT: Yuan Yang Looks to Find the ‘Right’ Problem to Solve]]></publication>  <article_dateline>2020-09-03T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-09-03T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-09-03T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3lGhNLP]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="638703">  <title><![CDATA[Welcome New IC Faculty: Seven Join School from Variety of Research Areas]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Each year, the School of Interactive Computing conducts a rigorous search for the brightest minds to carry forward its academic and research initiatives. This year, IC welcomes seven new faculty members to that mission. Take a quick glance at the new research coming to the School in 2020.</p><p><strong>Sehoon Ha</strong></p><p>Ph.D. 
in Computer Science, Georgia Tech 2015</p><p>Research interests: Robotics, Artificial Intelligence, Character Animation</p><p>Ha&rsquo;s research lies at the intersection of computer graphics and robotics, including physics-based animation, deep reinforcement learning, and computational robot design. Specifically, he has published work that addresses the need for more intelligent control software in robotics to improve agility, robustness, efficiency, and safety. In the long term, he aims to develop robotic companions for the home, search-and-rescue robots for disaster recovery scenes, and custom medical surgery robots that are tailored to individual patients.</p><p><strong>Jennifer Kim</strong></p><p>Ph.D. in Computer Science, University of Illinois, Urbana-Champaign 2019</p><p>Research interests: Human-Computer Interaction, Interactive Systems, Health Care</p><p>Kim&rsquo;s research investigates and develops interactive systems as communication artifacts to address various health-related challenges such as financial burdens of medical costs, difficulties in understanding behaviors of people with neurological disorders, and online health misinformation.</p><p><strong>Chris Le Dantec</strong></p><p>Ph.D. in Human-Centered Computing, Georgia Tech 2011</p><p>Research interests: Digital Media, Science and Technology Studies</p><p>Le Dantec is interested in developing community-based design practices that support new forms of collective action through production and use of civic data. Specifically, his research has direct impact on how policy makers and citizens work together to address issues of community engagement, social justice, urban transportation, and development.</p><p><strong>Andrea Grimes Parker</strong></p><p>Ph.D. 
in Human-Centered Computing, Georgia Tech 2011</p><p>Research interests: Human-Computer Interaction, Computer Supported Cooperative Work, Health Informatics</p><p>Grimes Parker designs and evaluates the impact of software tools that help people manage their health and wellness with a particular focus on equity. She studies racial, ethnic, and economic health disparities, and the social context of health management. Through technology design, her research examines intrapersonal, social, cultural, and environmental factors that influence a person&rsquo;s ability and desire to make healthy decisions.</p><p><strong>Alan Ritter</strong></p><p>Ph.D. in Computer Science and Engineering, University of Washington 2013</p><p>Research interests: Natural Language Processing, Information Extraction, Machine Learning</p><p>Ritter&rsquo;s research aims to solve challenging technical problems that can help machines learn to read vast quantities of text with minimal supervision. Past work included a system that reads millions of tweets for mentions of new software vulnerabilities. This tool spotted critical security flaws in software. He is also interested in data-driven dialogue agents that can converse with people more naturally.</p><p><strong>Sashank Varma</strong></p><p>Ph.D. in Cognitive Studies, Vanderbilt University 2006</p><p>Research interests: Abstract Mathematical Thinking, Memory Systems Supporting Language Processing, Computational Models of High-Level Cognition</p><p>Varma&rsquo;s research investigates complex forms of cognition that are uniquely human from multiple disciplinary perspectives. Primarily, this involves mathematical cognition, where he investigates how people use symbol systems to understand abstract mathematical concepts, how they develop intuitions about and insights into mathematics, and the mental mechanisms shared between reasoning and algorithmic thinking.</p><p><strong>Wei Xu</strong></p><p>Ph.D. 
in Computer Science, New York University 2014</p><p>Research Interests: Natural Language Processing, Machine Learning, Social Media</p><p>Xu&rsquo;s recent work focuses on methods to understand the varied expressions in human language and to generate paraphrases for applications, such as reading and writing assistive technology. She has also worked on crowdsourcing, summarization, and information extraction for user-generated data, such as Twitter and StackOverflow.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1599066809</created>  <gmt_created>2020-09-02 17:13:29</gmt_created>  <changed>1599066809</changed>  <gmt_changed>2020-09-02 17:13:29</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Take a quick glance at the new research coming to the School of Interactive Computing in 2020.]]></teaser>  <type>news</type>  <sentence><![CDATA[Take a quick glance at the new research coming to the School of Interactive Computing in 2020.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-09-02T00:00:00-04:00</dateline>  <iso_dateline>2020-09-02T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-09-02 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>638702</item>      </media>  <hg_media>          <item>          <nid>638702</nid>          <type>image</type>          <title><![CDATA[New IC faculty 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[New IC Faculty 2020.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/New%20IC%20Faculty%202020.png]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/New%20IC%20Faculty%202020.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/New%2520IC%2520Faculty%25202020.png?itok=S8Iw3lLg]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Sashank Varma, Sehoon Ha, Chris Le Dantec, Wei Xu, Alan Ritter, Andrea Grimes Parker, Jennifer Kim]]></image_alt>                    <created>1599066470</created>          <gmt_created>2020-09-02 17:07:50</gmt_created>          <changed>1599066470</changed>          <gmt_changed>2020-09-02 17:07:50</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181216"><![CDATA[cc-research]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="638689">  <title><![CDATA[IC Student Ceara Byrne Trades Dog Toys for Masks to Chip in on Covid Relief]]></title>  <uid>33939</uid>  <body><![CDATA[<p>What do dog toys have to do with Covid-19 pandemic relief? Leave it to a Georgia Tech student to find a connection.</p><p>School of Interactive Computing Ph.D. 
student <strong>Ceara Byrne</strong>, whose primary research focuses on instrumenting dog toys with various sensors to measure canine behavior, found a way to contribute to the cause when she was approached by a fellow Georgia Tech student for assistance in 3D printing.</p><p><strong>Lee Whitcher</strong>, a Ph.D. student in the <a href="https://www.ae.gatech.edu/">Daniel Guggenheim School of Aerospace Engineering</a>, had already joined colleagues from the <a href="https://gtri.gatech.edu/">Georgia Tech Research Institute</a> and <a href="https://www.me.gatech.edu/">George W. Woodruff School of Mechanical Engineering</a> to design and manufacture personal protective equipment (PPE) like face shields to supplement the available supplies in the Atlanta area.</p><p>The work from GTRI and ME assisted in hospitals, and Whitcher&rsquo;s work &ndash; a non-profit called <a href="http://AtlantaBeatsCOVID.com">Atlanta Beats COVID</a> &ndash; aimed to design and produce masks and ventilators that could be produced by non-engineers wherever they are needed.</p><p>To do that, Whitcher and his partners needed a 3D printer that could cast the negatives for the masks. Georgia Tech&rsquo;s <a href="https://gvu.gatech.edu/">GVU</a> Prototyping Lab in the Technology Square Research Building had just what they needed. So did Byrne.</p><p>Byrne has been using the Prototyping Lab&rsquo;s printer for a while now to develop negatives of the silicone dog toys she uses in her research. Byrne&rsquo;s work involves studying behavior in canines to understand temperament for service animals.</p><p>&ldquo;I was inspired by a friend from high school who grew up on a ranch,&rdquo; Byrne said. &ldquo;She and I got involved in 4-H. When I came back for a master&rsquo;s degree, I started working with <strong>Thad Starner</strong> and <strong>Melody Jackson</strong> on the FIDO project. 
I started noticing these aspects of the data that were reflective of dog temperament like drive and how they tackle activities. It really interested me.&rdquo;</p><p>Part of the research was to find good ways to measure that temperament beyond just visual observation. One solution was to place sensors into toys to take measurements as the dog played with it.</p><p>&ldquo;I&rsquo;ve used the Prototyping Lab to 3D print my negative molds so that I can silicone cast the positives like balls and tug toys,&rdquo; Byrne said. &ldquo;It&rsquo;s a long process of finding the right silicones, materials, hardness.&nbsp; For the toys, I went through three or four different molds to find the right way to actually cast the parts. It was a lot of experimenting.&rdquo;</p><p>That experimentation made her uniquely prepared to chip in with Whitcher&rsquo;s project when Covid-19 hit. Looking for a way to develop the right mold for easy do-it-yourself mask production, Whitcher turned to Byrne for assistance.</p><p>&ldquo;There are a number of aspects to it,&rdquo; Byrne said. &ldquo;How do you de-gas some of the silicone? When you have a mask, you can&rsquo;t have the bubbles in the mold because you need a seal. How do you do it with the vacuum? If there&rsquo;s no vacuum available, what are some easier ways? How do we make these negatives properly, and how many can you cast at once? What are the environmental aspects when you do it from home?&rdquo;</p><p>These are all questions Byrne has had to explore when it comes to her dog toys. The experience proved useful in the mask production, as well.</p><p>Byrne was happy to get involved in pandemic relief assistance. She has brothers and sisters-in-law who are doctors.</p><p>&ldquo;They&rsquo;ve been amazing in helping around the community,&rdquo; she said. &ldquo;My brother is making masks, which I think is fascinating. He&rsquo;s a radiation oncologist and has built respiratory masks with the Pancreatic Cancer Foundation. 
So, I wanted to help out in any way that I could, as well.&rdquo;</p><p>Being at Georgia Tech, she said, made the collaboration a natural occurrence.</p><p>&ldquo;That&rsquo;s what makes Georgia Tech unique, right?&rdquo; she said. &ldquo;We can collaborate across these disciplines that maybe don&rsquo;t connect to each other on the surface.&rdquo;</p><p>Read more about the relief effort, how to request PPE, and how to get involved at <a href="http://AtlantaBeatsCOVID.com">AtlantaBeatsCOVID.com</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1599000797</created>  <gmt_created>2020-09-01 22:53:17</gmt_created>  <changed>1599000797</changed>  <gmt_changed>2020-09-01 22:53:17</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Byrne, whose work uses a 3D printer to make dog toys, is using her expertise to help in mask production.]]></teaser>  <type>news</type>  <sentence><![CDATA[Byrne, whose work uses a 3D printer to make dog toys, is using her expertise to help in mask production.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-09-01T00:00:00-04:00</dateline>  <iso_dateline>2020-09-01T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-09-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>638688</item>      </media>  <hg_media>          <item>          <nid>638688</nid>          <type>image</type>          <title><![CDATA[Ceara Byrne]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[heart-innovation.jpg]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/heart-innovation.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/heart-innovation.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/heart-innovation.jpg?itok=aXFSoV9_]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[ceara byrne]]></image_alt>                    <created>1598997204</created>          <gmt_created>2020-09-01 21:53:24</gmt_created>          <changed>1598997204</changed>          <gmt_changed>2020-09-01 21:53:24</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://ae.gatech.edu/news/2020/04/what-engineers-do-crisis]]></url>        <title><![CDATA[What Engineers Do in a Crisis]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="185769"><![CDATA[cc-research; ic-hcc; COVID-19]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="638544">  <title><![CDATA[New Toolchain Automatically Finds Database Management System Bugs]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1598639422</created>  
<gmt_created>2020-08-28 18:30:22</gmt_created>  <changed>1598639422</changed>  <gmt_changed>2020-08-28 18:30:22</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[New Toolchain Automatically Finds Database Management System Bugs]]></publication>  <article_dateline>2020-08-28T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-08-28T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-08-28T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/638534/new-toolchain-automatically-finds-database-management-system-bugs]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="638233">  <title><![CDATA[New algorithm follows human intuition to make visual captioning more grounded]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1598275348</created>  <gmt_created>2020-08-24 13:22:28</gmt_created>  <changed>1598275348</changed>  <gmt_changed>2020-08-24 13:22:28</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[anti-black]]></publication>  <article_dateline>2020-08-24T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-08-24T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-08-24T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://aihub.org/2020/08/24/new-algorithm-follows-human-intuition-to-make-visual-captioning-more-grounded/]]></article_url>  <media>      </media>  <hg_media>     
 </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="638207">  <title><![CDATA[ML@GT and Facebook release tools to help AI navigate complex environments]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1598036090</created>  <gmt_created>2020-08-21 18:54:50</gmt_created>  <changed>1598036090</changed>  <gmt_changed>2020-08-21 18:54:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[macrophages]]></publication>  <article_dateline>2020-08-21T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-08-21T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-08-21T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://venturebeat.com/2020/08/21/facebook-releases-tools-to-help-ai-navigate-complex-environments/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="638205">  <title><![CDATA[Georgia Tech Showcases Research at Premier Data Mining Conference]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1598036047</created>  <gmt_created>2020-08-21 18:54:07</gmt_created>  <changed>1598036047</changed>  <gmt_changed>2020-08-21 18:54:07</gmt_changed>  <promote>0</promote>  
<sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Georgia Tech Showcases Research at Premier Data Mining Conference]]></publication>  <article_dateline>2020-08-21T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-08-21T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-08-21T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://sites.gatech.edu/gtkdd2020/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="637999">  <title><![CDATA[ML@GT Students Learn Valuable Skills from Internships Despite Participating Remotely]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1597755213</created>  <gmt_created>2020-08-18 12:53:33</gmt_created>  <changed>1597755213</changed>  <gmt_changed>2020-08-18 12:53:33</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[ML@GT Students Learn Valuable Skills from Internships Despite Participating Remotely]]></publication>  <article_dateline>2020-08-18T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-08-18T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-08-18T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2YeaAIF]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group 
id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="637928">  <title><![CDATA[ML@GT Makes a Strong Showing at Premier European Computer Vision Conference]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1597675707</created>  <gmt_created>2020-08-17 14:48:27</gmt_created>  <changed>1597675707</changed>  <gmt_changed>2020-08-17 14:48:27</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[ML@GT Makes a Strong Showing at Premier European Computer Vision Conference]]></publication>  <article_dateline>2020-08-17T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-08-17T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-08-17T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://b.gatech.edu/3gOYb5a]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      
</news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="637875">  <title><![CDATA[Learning Machines: Polo Chau Explains Data Visualizations]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1597421534</created>  <gmt_created>2020-08-14 16:12:14</gmt_created>  <changed>1597421534</changed>  <gmt_changed>2020-08-14 16:12:14</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Learning Machines: Polo Chau Explains Data Visualizations]]></publication>  <article_dateline>2020-08-14T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-08-14T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-08-14T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/343tZzJ]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="637835">  <title><![CDATA[Swati Gupta Appointed to Fouts Family Early Career Professorship]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1597332094</created>  <gmt_created>2020-08-13 
15:21:34</gmt_created>  <changed>1597332094</changed>  <gmt_changed>2020-08-13 15:21:34</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Swati Gupta Appointed to Fouts Family Early Career Professorship]]></publication>  <article_dateline>2020-08-13T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-08-13T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-08-13T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.isye.gatech.edu/news/swati-gupta-appointed-fouts-family-early-career-professorship]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="637779">  <title><![CDATA[New Algorithm Follows Human Intuition to Make Visual Captioning More Grounded]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1597241085</created>  <gmt_created>2020-08-12 14:04:45</gmt_created>  <changed>1597241085</changed>  <gmt_changed>2020-08-12 14:04:45</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[New Algorithm Follows Human Intuition to Make Visual Captioning More Grounded]]></publication>  <article_dateline>2020-08-12T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-08-12T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-08-12T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://b.gatech.edu/3kCBwvk]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group 
id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="637711">  <title><![CDATA[Two IC Grads Earn Sigma Xi Best Ph.D. Thesis Awards]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Recent Georgia Tech Ph.D. graduates <strong>Caitlyn Seim</strong> and <strong>Aishwarya Agrawal</strong>, both from the School of Interactive Computing, were awarded the 2020 Sigma Xi Best Ph.D. Thesis Award. They were two of just 10 Ph.D. students at Georgia Tech recognized with the honor.</p><p>Seim&rsquo;s thesis, titled <em>Wearable Vibrotactile Stimulation: How Passive Stimulation Can Train and Rehabilitate</em>, presents a technique in which a vibrating wearable device is used to retrain motor function following debilitating occurrences of spinal fracture or stroke. Now a postdoc at Stanford University and a fellow with the National Institutes of Health, Seim is currently working with stroke survivors to develop accessible and functional wearable devices to reduce physical disability in both the upper and lower limbs.</p><p>&ldquo;Lately, I have also developed new mechanical tools to assess hand and arm function when there are no quantitative clinical tests available,&rdquo; Seim said. &ldquo;I plan to continue research on wearable and ubiquitous systems for health, accessibility, and training.&rdquo;</p><p>In Agrawal&rsquo;s thesis, titled <em>Visual Question Answering and Beyond</em>, she explores a multi-modal artificial intelligence task called visual question answering. In this task, given an image and natural language question about it, a machine is programmed to automatically produce an accurate natural language answer. 
The applications of VQA include aiding visually impaired users in understanding their surroundings, aiding analysts in examining large quantities of surveillance data, teaching children through interactive demos, interacting with personal AI assistants, and making visual social media content more accessible.</p><p>Now at DeepMind and soon to be an assistant professor at the University of Montreal and Mila, an AI research institute, Agrawal intends to equip current VQA systems with better skills to move toward artificial general intelligence.</p><p>&ldquo;In the long term, I am excited about science fiction becoming reality, when we all have smart virtual assistants that can see and talk and serve as an aid to visually impaired users,&rdquo; she said.</p><p>The eight other recipients of the Georgia Tech Sigma Xi Best Ph.D. Thesis Award were Mingue Kim (ECE), Ming Zhao (Chemistry), Andres Caballero (BME), Ke (Chris) Liu (CEE), Monica McNerney (ChBE), Chris Sugino (ME), Hamid Reza Seyf (ME), and Eric Tervo (ME).</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1597067162</created>  <gmt_created>2020-08-10 13:46:02</gmt_created>  <changed>1597067162</changed>  <gmt_changed>2020-08-10 13:46:02</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[They were two of just 10 Ph.D. students at Georgia Tech recognized with the honor.]]></teaser>  <type>news</type>  <sentence><![CDATA[They were two of just 10 Ph.D. 
students at Georgia Tech recognized with the honor.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-08-10T00:00:00-04:00</dateline>  <iso_dateline>2020-08-10T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-08-10 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>637710</item>      </media>  <hg_media>          <item>          <nid>637710</nid>          <type>image</type>          <title><![CDATA[Aishwarya Agrawal and Caitlyn Seim]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Personal Vlog YouTube Thumbnail.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Personal%20Vlog%20YouTube%20Thumbnail.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Personal%20Vlog%20YouTube%20Thumbnail.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Personal%2520Vlog%2520YouTube%2520Thumbnail.png?itok=Dq6mvFJQ]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Aishwarya Agrawal and Caitlyn Seim]]></image_alt>                    <created>1597067128</created>          <gmt_created>2020-08-10 13:45:28</gmt_created>          <changed>1597067128</changed>          <gmt_changed>2020-08-10 13:45:28</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group 
id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181639"><![CDATA[cc-research; ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="637525">  <title><![CDATA[ML@GT Expands Natural Language Processing and Data Science Research with New Faculty Hires]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1596633775</created>  <gmt_created>2020-08-05 13:22:55</gmt_created>  <changed>1596633775</changed>  <gmt_changed>2020-08-05 13:22:55</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[ML@GT Expands Natural Language Processing and Data Science Research with New Faculty Hires]]></publication>  <article_dateline>2020-08-05T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-08-05T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-08-05T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/39Wbv5j]]></article_url>  <media>          <item><![CDATA[637524]]></item>      </media>  <hg_media>          <item>          <nid>637524</nid>          <type>image</type>          <title><![CDATA[ML@GT welcomes new faculty hires this fall.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[NewMLFaculty_Fall2020.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/NewMLFaculty_Fall2020.png]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/NewMLFaculty_Fall2020.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/NewMLFaculty_Fall2020.png?itok=YGaEItMu]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[new faculty hires]]></image_alt>                              <created>1596633734</created>          <gmt_created>2020-08-05 13:22:14</gmt_created>          <changed>1596633734</changed>          <gmt_changed>2020-08-05 13:22:14</gmt_changed>      </item>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="637277">  <title><![CDATA[Why we need a G.I. 
Bill for the COVID era]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1595854768</created>  <gmt_created>2020-07-27 12:59:28</gmt_created>  <changed>1595854768</changed>  <gmt_changed>2020-07-27 12:59:28</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[The Hill]]></publication>  <article_dateline>2020-07-27T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-07-27T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-07-27T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://thehill.com/opinion/education/509064-why-we-need-a-gi-bill-for-the-covid-era?rnd=1595772043]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="637259">  <title><![CDATA[Herrmann Honored with 2020 SEG Reginald Fessenden Award]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1595616372</created>  <gmt_created>2020-07-24 18:46:12</gmt_created>  <changed>1595616372</changed>  <gmt_changed>2020-07-24 18:46:12</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Herrmann Honored with 2020 SEG Reginald Fessenden Award]]></publication>  <article_dateline>2020-07-24T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-07-24T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-07-24T00:00:00-04:00</gmt_article_dateline>  
<article_url><![CDATA[https://www.cc.gatech.edu/news/637235/herrmann-honored-2020-seg-reginald-fessenden-award]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="637223">  <title><![CDATA[The Record Industry Is Going After Parody Songs Written By an Algorithm]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1595518580</created>  <gmt_created>2020-07-23 15:36:20</gmt_created>  <changed>1595518580</changed>  <gmt_changed>2020-07-23 15:36:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Vice]]></publication>  <article_dateline>2020-07-23T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-07-23T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-07-23T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.vice.com/en_ca/article/m7jpp3/the-record-industry-is-going-after-parody-songs-written-by-an-algorithm]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="637122">  <title><![CDATA[Georgia Tech, 6 Collaborators Receive $5.9 Million NIH Grant for a National Center in AI-based mHealth Research]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech researchers will develop more 
effective and personalized treatment approaches for chronic health conditions under a new grant from the <a href="http://nih.gov">National Institutes of Health</a>.</p><p>The NIH is issuing $5.9 million in funding for a new national biomedical technology&nbsp;resource center (BTRC), called the mHealth Center for Discovery, Optimization &amp; Translation of Temporally-Precise Interventions (mDOT). Georgia Tech, one of seven collaborators on the project, will receive $500,000, and mDOT&nbsp;will be headquartered at the MD2K Center of Excellence at The University of Memphis.</p><p>One of the biggest drivers of the nation&rsquo;s rising healthcare spending is providing care for patients with chronic diseases, many of which are linked to daily behaviors such as dietary choices, sedentary behavior, stress, and addiction. The mDOT Center will be a new national technology resource for improving people&rsquo;s health and wellness. It will conduct cutting-edge AI research to produce easily deployable wearables, apps for wearables and smartphones, and a companion cloud system. mDOT&rsquo;s innovative technology will enable patients to initiate and sustain the healthy lifestyle choices necessary to prevent and/or successfully manage the growing burden of multiple chronic conditions.</p><p>Led by <strong>Jim Rehg</strong>, a professor in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu">School of Interactive Computing</a>, Georgia Tech&rsquo;s project will focus on analyzing streams of biomarker data to enable the development of more effective, personalized treatment approaches for chronic health conditions linked to behaviors such as smoking and physical inactivity. 
To achieve this, the team will develop machine learning methods that can discover important risk factors from sensor data and identify effective intervention targets.</p><p>&ldquo;Consider developing an intervention to help people who are trying to quit smoking by providing personalized strategies for managing risk factors that are known to precipitate relapse,&rdquo; Rehg said. &ldquo;Researchers and practitioners would use our tools to analyze biomarker data and characterize the patterns that lead to relapse and identify potential intervention targets.&rdquo;</p><p>The collaboration can then use the tools provided by the other teams to develop and tailor an effective personalized stress intervention and deliver it efficiently on a mobile device. <strong>Omer Inan</strong>, a faculty member in Georgia Tech&rsquo;s <a href="http://ece.gatech.edu">School of Electrical and Computer Engineering</a>, will also collaborate with the team, leveraging work on novel non-invasive biosensors that detect cardiovascular changes in heart failure. Working alongside the mDOT team will enhance the ability to develop and deploy interventions based on his novel wearable sensors.</p><p>&ldquo;Researchers and industry innovators can leverage mDOT&rsquo;s technological resources to create the next generation of mHealth technology that is highly personalized to each user, transforming people&rsquo;s health and wellness,&rdquo; said <strong>Santosh Kumar</strong>, the lead investigator of mDOT, who is the director of MD2K Center of Excellence and Lillian &amp; Morrie Moss Chair of Excellence Professor of Computer Science at the University of Memphis.</p><p>To ensure mDOT&rsquo;s innovative technology can be used by scientists to solve real-world problems, mDOT will be working closely with over a dozen other federally-funded projects to engage in joint technology development, testing, and large-scale real-life deployment. 
To ensure that mDOT&rsquo;s technological resources can fuel innovation in the health technology industry, the mDOT Center is establishing a new industry consortium to provide access to mDOT&rsquo;s latest research and seek feedback to inform its ongoing research.</p><p>The mDOT Center will be administered by the National Institute of Biomedical Imaging and Bioengineering (NIBIB).</p><p>&ldquo;The mDOT Center will be the first<a href="https://www.nibib.nih.gov/research-funding/biomedical-technology-resource-centers">&nbsp;BTRC</a>&nbsp;focused on developing innovative mHealth technologies. It is positioned to empower scientists to discover, personalize, and deliver temporally-precise mHealth interventions and treatments, ensuring that health and wellness tools are delivered at the right moment, via the right personal device, and are optimized to have the most influence,&rdquo; said mDOT&rsquo;s program officer&nbsp;<strong>Tiffani Lash</strong>, director of the NIBIB program in Connected Health.</p><p>The multidisciplinary mDOT team consists of leading researchers in artificial intelligence (AI), mobile computing, wearable sensors, privacy, and precision medicine from Harvard University, Georgia Institute of Technology, The Ohio State University, The University of Massachusetts-Amherst, The University of Memphis (lead), The University of California at Los Angeles, and The University of California at San Francisco.</p><p><strong>About MD2K:</strong>&nbsp;The Center of Excellence for Mobile Sensor Data-to-Knowledge (MD2K), headquartered in the FedEx Institute of Technology at The University of Memphis, was established in 2014 by a grant from the National Institutes of Health (NIH) under its Big-Data-To-Knowledge (BD2K) initiative. It has developed mobile sensor big data technologies to improve health and wellness. 
MD2K&rsquo;s open-source software platforms for smartphones and the cloud are used across the nation to conduct scientific studies.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1595277387</created>  <gmt_created>2020-07-20 20:36:27</gmt_created>  <changed>1595277387</changed>  <gmt_changed>2020-07-20 20:36:27</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The NIH is issuing $5.9 million in funding for a new national biomedical technology resource center (BTRC), called the mHealth Center for Discovery, Optimization & Translation of Temporally-Precise Interventions (mDOT).]]></teaser>  <type>news</type>  <sentence><![CDATA[The NIH is issuing $5.9 million in funding for a new national biomedical technology resource center (BTRC), called the mHealth Center for Discovery, Optimization & Translation of Temporally-Precise Interventions (mDOT).]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-07-20T00:00:00-04:00</dateline>  <iso_dateline>2020-07-20T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-07-20 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>592632</item>      </media>  <hg_media>          <item>          <nid>592632</nid>          <type>image</type>          <title><![CDATA[Rehg-Jim]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Rehg-Jim250.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Rehg-Jim250.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Rehg-Jim250.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Rehg-Jim250.jpg?itok=wYCBihGD]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[James Rehg]]></image_alt>                    <created>1497298524</created>          <gmt_created>2017-06-12 20:15:24</gmt_created>          <changed>1497298713</changed>          <gmt_changed>2017-06-12 20:18:33</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182525"><![CDATA[cc-research; ic-hcc; ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636928">  <title><![CDATA[Hello Robot Launches Stretch]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1594731605</created>  <gmt_created>2020-07-14 13:00:05</gmt_created>  <changed>1594731605</changed>  <gmt_changed>2020-07-14 13:00:05</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Hello Robot Launches Stretch]]></publication>  <article_dateline>2020-07-14T00:00:00-04:00</article_dateline>  
<iso_article_dateline>2020-07-14T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-07-14T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://petitinstitute.gatech.edu/news/hello-robot-launches-stretch]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636848">  <title><![CDATA[Teaching Neural Networks When to Stop]]></title>  <uid>34540</uid>  <body><![CDATA[<p>If you were running and finished a course in five steps, you would not continue to march at the finish line to make sure you reached 10 steps in total. However, this absurd characteristic is a key problem that plagues deep neural networks, a type of machine learning model that governs a wide breadth of applications.</p><p>Typically, neural networks must go through a predetermined number of layers to complete every task, even when the task could be completed in fewer layers.&nbsp;</p><p>Now, a team of researchers from Georgia Tech, Google Brain, and King Abdullah University of Science and Technology has created a steerable architecture that allows neural networks to sequentially determine, for each input, whether to stop at an intermediate layer or to continue on to deeper layers.&nbsp;</p><p>This novel approach combines a feed-forward deep model with&nbsp;a variational stopping policy, allowing the network to adaptively stop at an earlier layer to avoid wasting energy. 
In experiments, the new deep learning model with the stopping policy improved performance on a diverse set of tasks such as image denoising and multitask learning.</p><p>&ldquo;Recently, there have been many efforts to bridge traditional algorithms with deep neural networks by combining the interpretability of the former and flexibility of the latter. Inspired by traditional algorithms which have certain stopping criteria for outputting results at different iterations, we design a variational stopping policy to decide which layer to stop for each input in the neural network,&rdquo;&nbsp;said&nbsp;<strong>Xinshi Chen</strong>, a Ph.D. student from the&nbsp;<a href="https://cse.gatech.edu/">School of Computational Science and Engineering</a>&nbsp;and researcher on the project.</p><p>According to Chen, training the neural network along with the stopping policy is very challenging and is one of the most important contributions of this research.</p><p>&ldquo;Notably, our paper proposes a principled and efficient algorithm to jointly train these two components together. This algorithm can be mathematically explained from the variational Bayes perspective and can be generally applied to many problems,&rdquo; she said.</p><p>What&rsquo;s more, deep neural networks are typically considered black boxes, meaning that researchers don&rsquo;t mathematically know why an output &ndash; no matter how accurate &ndash; is produced. 
By bridging deep neural networks with the steps of traditional algorithms, the new architecture breaks the black box restriction and yields an inherently interpretable &ndash; and therefore more accountable &ndash; system.&nbsp;</p><p>The findings of this research are published in the paper,&nbsp;<a href="https://arxiv.org/pdf/2006.05082.pdf" title="https://arxiv.org/pdf/2006.05082.pdf"><em>Learning to Stop While Learning to Predict</em></a>, which is set to be presented at the virtual&nbsp;<a href="https://icml.cc/">Thirty-seventh International Conference on Machine Learning</a>&nbsp;July 14, 1:00-1:45 and July 15, 12:00-12:45 EDT.</p>]]></body>  <author>Kristen Perez</author>  <status>1</status>  <created>1594306429</created>  <gmt_created>2020-07-09 14:53:49</gmt_created>  <changed>1594316845</changed>  <gmt_changed>2020-07-09 17:47:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A team of researchers from Georgia Tech, Google Brain, and King Abdullah University of Science and Technology has created an architecture that allows neural networks to determine when they reach an optimal stopping point.]]></teaser>  <type>news</type>  <sentence><![CDATA[A team of researchers from Georgia Tech, Google Brain, and King Abdullah University of Science and Technology has created an architecture that allows neural networks to determine when they reach an optimal stopping point.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-07-09T00:00:00-04:00</dateline>  <iso_dateline>2020-07-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-07-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[kristen.perez@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Kristen Perez</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>636844</item> 
     </media>  <hg_media>          <item>          <nid>636844</nid>          <type>image</type>          <title><![CDATA[Training Neural Networks When to Stop]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Diagram showing neural network steps.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Diagram%20showing%20neural%20network%20steps.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Diagram%20showing%20neural%20network%20steps.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Diagram%2520showing%2520neural%2520network%2520steps.png?itok=9x742SqO]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[A diagram showing the layers that a neural network passes through and determining when to stop]]></image_alt>                    <created>1594305053</created>          <gmt_created>2020-07-09 14:30:53</gmt_created>          <changed>1594305053</changed>          <gmt_changed>2020-07-09 14:30:53</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="4305"><![CDATA[cse]]></keyword>          <keyword tid="176999"><![CDATA[neural networks]]></keyword>          <keyword tid="185278"><![CDATA[xinshi Chen]]></keyword>          <keyword tid="127171"><![CDATA[Le Song]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636853">  
<title><![CDATA[Dellaert Wins Test of Time Award for a Second Time This Year]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1594309593</created>  <gmt_created>2020-07-09 15:46:33</gmt_created>  <changed>1594309593</changed>  <gmt_changed>2020-07-09 15:46:33</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2020-07-09T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-07-09T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-07-09T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2AJdJaZ]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636821">  <title><![CDATA[ML@GT to Present Nine Papers at Competitive Machine Learning Conference]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1594221385</created>  <gmt_created>2020-07-08 15:16:25</gmt_created>  <changed>1594221385</changed>  <gmt_changed>2020-07-08 15:16:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[ML@GT to Present Nine Papers at Competitive Machine 
Learning Conference]]></publication>  <article_dateline>2020-07-08T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-07-08T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-07-08T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2Z0RF4O]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636739">  <title><![CDATA[Researchers combine reinforcement learning and NLP to escape a Grue monster]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1594040105</created>  <gmt_created>2020-07-06 12:55:05</gmt_created>  <changed>1594040105</changed>  <gmt_changed>2020-07-06 12:55:05</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[macrophages]]></publication>  <article_dateline>2020-07-06T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-07-06T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-07-06T00:00:00-04:00</gmt_article_dateline>  
<article_url><![CDATA[https://venturebeat.com/2020/06/30/researchers-combine-reinforcement-learning-and-nlp-to-escape-a-grue-monster/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636738">  <title><![CDATA[Georgia Tech Professor Leads Multi-Institution Team in Combatting Hospital Acquired Infections]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1594040040</created>  <gmt_created>2020-07-06 12:54:00</gmt_created>  <changed>1594040040</changed>  <gmt_changed>2020-07-06 12:54:00</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Georgia Tech Professor Leads Multi-Institution Team in Combatting Hospital Acquired Infections]]></publication>  <article_dateline>2020-07-01T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-07-01T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-07-01T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/636566/georgia-tech-professor-leads-multi-institution-team-combatting-hospital-acquired]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636737">  <title><![CDATA[New Training Data Labeling System for Machine Learning Helps 
Developers]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1594039899</created>  <gmt_created>2020-07-06 12:51:39</gmt_created>  <changed>1594039899</changed>  <gmt_changed>2020-07-06 12:51:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[New Training Data Labeling System for Machine Learning Helps Developers]]></publication>  <article_dateline>2020-06-29T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-29T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-29T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/636603/new-training-data-labeling-system-machine-learning-helps-developers]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636734">  <title><![CDATA[Equity in Computing: Access to Opportunity]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1594039542</created>  <gmt_created>2020-07-06 12:45:42</gmt_created>  <changed>1594039542</changed>  <gmt_changed>2020-07-06 12:45:42</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Equity in Computing: Access to Opportunity]]></publication>  <article_dateline>2020-07-06T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-07-06T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-07-06T00:00:00-04:00</gmt_article_dateline>  
<article_url><![CDATA[https://www.cc.gatech.edu/content/equity-in-computing]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636454">  <title><![CDATA[New AI Method Lets Robots Get By With a Little Help From Their Friends]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1593001105</created>  <gmt_created>2020-06-24 12:18:25</gmt_created>  <changed>1593001105</changed>  <gmt_changed>2020-06-24 12:18:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[New AI Method Lets Robots Get By With a Little Help From Their Friends]]></publication>  <article_dateline>2020-06-24T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-24T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-24T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/636432/new-ai-method-lets-robots-get-little-help-their-friends]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636435">  <title><![CDATA[The Formula of Creativity]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1592932400</created>  
<gmt_created>2020-06-23 17:13:20</gmt_created>  <changed>1592932400</changed>  <gmt_changed>2020-06-23 17:13:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[The Formula of Creativity]]></publication>  <article_dateline>2020-06-23T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-23T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-23T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://coe.gatech.edu/news/2020/05/formula-creativity?utm_medium=email&amp;utm_source=dailydigest&amp;utm_campaign=june24&amp;utm_content=formula]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636376">  <title><![CDATA[Life from the Perspective of a Cyborg]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1592834465</created>  <gmt_created>2020-06-22 14:01:05</gmt_created>  <changed>1592834465</changed>  <gmt_changed>2020-06-22 14:01:05</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Multimedia Scholarship Commons]]></publication>  <article_dateline>2020-06-22T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-22T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-22T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.byuradio.org/episode/61eb64c2-c94c-4531-9981-8fd675057e5d/constant-wonder-human-machines?playhead=1327&amp;autoplay=true]]></article_url>  <media>      </media>  
<hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636372">  <title><![CDATA[New Study Explores Sentiment Around Electric Vehicles, Leading to Faster Government Response and More Infrastructure]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1592829982</created>  <gmt_created>2020-06-22 12:46:22</gmt_created>  <changed>1592829982</changed>  <gmt_changed>2020-06-22 12:46:22</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[security dynamics]]></publication>  <article_dateline>2020-06-22T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-22T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-22T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3dlr2LN]]></article_url>  <media>          <item><![CDATA[636371]]></item>      </media>  <hg_media>          <item>          <nid>636371</nid>          <type>image</type>          <title><![CDATA[Researchers and policymakers are now able to better evaluate electric vehicle infrastructure using machine learning techniques. 
This work was featured on the June 2020 cover of Nature Sustainability.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[NSus_Cover_JUNE20_print-1.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/NSus_Cover_JUNE20_print-1.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/NSus_Cover_JUNE20_print-1.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/NSus_Cover_JUNE20_print-1.jpg?itok=0pyCAUDH]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Nature Sustainability June 2020 cover]]></image_alt>                              <created>1592828515</created>          <gmt_created>2020-06-22 12:21:55</gmt_created>          <changed>1592828515</changed>          <gmt_changed>2020-06-22 12:21:55</gmt_changed>      </item>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636271">  <title><![CDATA[Georgia Tech Team Uses Machine Learning to Drive Electric Vehicle Policy Findings]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1592332924</created>  <gmt_created>2020-06-16 18:42:04</gmt_created>  <changed>1592332924</changed>  <gmt_changed>2020-06-16 18:42:04</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Tianze Song]]></publication>  <article_dateline>2020-06-16T00:00:00-04:00</article_dateline>  
<iso_article_dateline>2020-06-16T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-16T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.iac.gatech.edu/research/features/deep-learning-electric-vehicle-charging-research]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636268">  <title><![CDATA[DARPA Awards $9.2M Grant to Inter-agency Team Researching Quantum Computing]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1592319869</created>  <gmt_created>2020-06-16 15:04:29</gmt_created>  <changed>1592319869</changed>  <gmt_changed>2020-06-16 15:04:29</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Robert Louis Stevenson]]></publication>  <article_dateline>2020-06-16T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-16T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-16T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.isye.gatech.edu/news/darpa-awards-92m-grant-inter-agency-team-researching-quantum-computing]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636243">  <title><![CDATA[Georgia Tech model predicts jump in COVID cases, 
deaths]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1592251807</created>  <gmt_created>2020-06-15 20:10:07</gmt_created>  <changed>1592251807</changed>  <gmt_changed>2020-06-15 20:10:07</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Meatless Monday]]></publication>  <article_dateline>2020-05-08T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-05-08T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-05-08T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.georgiahealthnews.com/2020/05/georgia-tech-model-predicts-spike-covid-cases-deaths/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636242">  <title><![CDATA[We’re in the Calm before a New Storm of COVID-19 Infections and Deaths]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1592251747</created>  <gmt_created>2020-06-15 20:09:07</gmt_created>  <changed>1592251747</changed>  <gmt_changed>2020-06-15 20:09:07</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[work family interactions]]></publication>  <article_dateline>2020-05-11T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-05-11T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-05-11T00:00:00-04:00</gmt_article_dateline>  
<article_url><![CDATA[https://blogs.scientificamerican.com/observations/were-in-the-calm-before-a-new-storm-of-covid-19-infections-and-deaths/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636196">  <title><![CDATA[ML@GT Faculty Members Will Discuss Projects Related to Covid-19 Relief During Virtual Panel]]></title>  <uid>34773</uid>  <body><![CDATA[<p>The coronavirus (Covid-19) pandemic has wreaked havoc on the world, spurring researchers across disciplines into action to help humankind. Four researchers affiliated with the <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a> and one <a href="https://omscs.gatech.edu/">Online Master of Science in Computer Science (OMSCS)</a> student examined different aspects of the virus&rsquo; impact. From creating forecasting models to studying the psychological impact of the disease, these researchers are helping people understand the virus.</p><p>On June 24, ML@GT faculty members <strong>Srijan Kumar</strong> (School of Computational Science and Engineering), <strong>Aditya Prakash</strong> (School of Computational Science and Engineering), <strong>Munmun De Choudhury</strong> (School of Interactive Computing), <strong>Nicoleta Serban</strong> (H. Milton Stewart School of Industrial and Systems Engineering), and OMSCS student <strong>Kenneth Miller</strong> will participate in a virtual panel discussing their work. 
The panel will be moderated by ML@GT executive director <strong>Irfan Essa</strong>.</p><p>Panelists will give individual presentations before participating in a general question-and-answer segment with audience members.</p><ul><li>Kumar and De Choudhury will share details of their work regarding the <a href="http://ml.gatech.edu/hg/item/635397">psychological impact of Covid-19</a>. Kumar will also discuss his work examining <a href="https://www.cc.gatech.edu/news/635858/predicting-hate-crimes-targeting-asian-americans-amid-covid-19-outbreak">hate and counter-hate messages on Twitter against Asian Americans</a> during the pandemic.</li><li>Prakash is a member of the Centers for Disease Control and Prevention&rsquo;s (CDC) forecasting team and will share the team&rsquo;s <a href="https://www.cc.gatech.edu/news/635849/forecasting-covid-19-pandemic-united-states">new data-driven approach to disease forecasting</a>.</li><li>Serban&rsquo;s presentation will focus on her work creating an <a href="https://www.georgiahealthnews.com/2020/05/georgia-tech-model-predicts-spike-covid-cases-deaths/">agent-based simulation forecasting model</a>. This model captures the progression of the disease in an individual and in households, schools, communities, and workplaces.</li><li>A lawyer by day and OMSCS student by night, Miller participated in a Kaggle challenge using natural language processing and machine learning to <a href="https://www.cc.gatech.edu/news/635081/omscs-student-uses-machine-learning-help-understand-covid-19">help doctors and scientists read the most important studies</a> related to Covid-19.</li></ul><p>The panel will take place virtually via a BlueJeans event at 11 a.m. on June 24 and is open to the public. 
<a href="https://primetime.bluejeans.com/a2m/register/sfpbpsgg">Registration is required</a>.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591969253</created>  <gmt_created>2020-06-12 13:40:53</gmt_created>  <changed>1592250730</changed>  <gmt_changed>2020-06-15 19:52:10</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.]]></teaser>  <type>news</type>  <sentence><![CDATA[Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-06-12T00:00:00-04:00</dateline>  <iso_dateline>2020-06-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-06-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>636195</item>      </media>  <hg_media>          <item>          <nid>636195</nid>          <type>image</type>          <title><![CDATA[Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Using Machine Learning to Respond to Covid-19.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Using%20Machine%20Learning%20to%20Respond%20to%20Covid-19.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Using%20Machine%20Learning%20to%20Respond%20to%20Covid-19.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Using%2520Machine%2520Learning%2520to%2520Respond%2520to%2520Covid-19.png?itok=oemqhAhB]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.]]></image_alt>                    <created>1591969094</created>          <gmt_created>2020-06-12 13:38:14</gmt_created>          <changed>1591969094</changed>          <gmt_changed>2020-06-12 13:38:14</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="431631"><![CDATA[OMS]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636148">  <title><![CDATA[Van Hentenryck Transforms Theories into Products Used Worldwide]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591815331</created>  <gmt_created>2020-06-10 18:55:31</gmt_created>  <changed>1591815331</changed>  <gmt_changed>2020-06-10 18:55:31</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  
<teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Tokyo Smart City studio]]></publication>  <article_dateline>2020-06-10T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-10T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-10T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2UuIkzv]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636137">  <title><![CDATA[Tech Designed to Help Assistive Robots Work More Closely with Patients]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591811766</created>  <gmt_created>2020-06-10 17:56:06</gmt_created>  <changed>1591811766</changed>  <gmt_changed>2020-06-10 17:56:06</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Tokyo Smart City studio]]></publication>  <article_dateline>2020-06-10T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-10T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-10T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2UgZyR3]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      
</core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636110">  <title><![CDATA[Robotics Research Includes Advances in Systems Design, Applications, and other Key Areas]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Roboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA), a two-week live virtual event held during the first part of June, with research activities continuing through August.</p><p><a href="https://icra.cc.gatech.edu/" target="_blank">Georgia Tech is a leading contributor</a> with 42 papers covering more than 60 subfields within robotics. Deep learning in robotics and automation is one of the top research areas among authors from the College of Computing and College of Engineering, the two largest contributors to Georgia Tech&rsquo;s research at ICRA.</p><p>Based on the number of authors in each area, deep learning is computing&rsquo;s strongest area, while mechanism design is where engineering excels.</p><p><strong>Top 3 subfields for computing authors:</strong></p><ul><li>Deep Learning</li><li>Semantic Scene Understanding</li><li>Physically Assistive Devices and Network Devices (tie)</li></ul><p><strong>Top 3 subfields for engineering authors:</strong></p><ul><li>Mechanism Design</li><li>Learning and Adaptive Systems</li><li>Multi-Robot Systems</li></ul><p>The diversity of Georgia Tech robotics research at ICRA is also represented in the number of academic units and disciplines involved in advancing the field. Multidisciplinary teams often come together to tackle challenges that require a unique combination of skill sets.</p><p>There are nearly 100 Georgia Tech authors with work at ICRA. 
Explore the people, research, and trends from our community at <a href="https://icra.cc.gatech.edu/">https://icra.cc.gatech.edu/</a></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1591732965</created>  <gmt_created>2020-06-09 20:02:45</gmt_created>  <changed>1591733183</changed>  <gmt_changed>2020-06-09 20:06:23</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Roboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA).]]></teaser>  <type>news</type>  <sentence><![CDATA[Roboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA).]]></sentence>  <summary><![CDATA[<p>Roboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA), a two-week live online virtual affair&nbsp;the first part of June, with research activities continuing through August.</p>]]></summary>  <dateline>2020-06-09T00:00:00-04:00</dateline>  <iso_dateline>2020-06-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-06-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu?subject=ICRA">Joshua Preston</a><br />Research Communications Manager<br />College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>636111</item>      </media>  <hg_media>          <item>          <nid>636111</nid>          <type>image</type>          <title><![CDATA[ICRA 2020]]></title>          
<body><![CDATA[]]></body>                      <image_name><![CDATA[viz-icra_authors-by-kw.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/viz-icra_authors-by-kw.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/viz-icra_authors-by-kw.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/viz-icra_authors-by-kw.png?itok=a_wUCyY9]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1591733133</created>          <gmt_created>2020-06-09 20:05:33</gmt_created>          <changed>1591733133</changed>          <gmt_changed>2020-06-09 20:05:33</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636085">  <title><![CDATA[Georgia Tech at ICRA 2020]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591716816</created>  <gmt_created>2020-06-09 15:33:36</gmt_created>  <changed>1591716816</changed>  <gmt_changed>2020-06-09 15:33:36</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Georgia Tech at ICRA 2020]]></publication>  <article_dateline>2020-06-09T00:00:00-04:00</article_dateline>  
<iso_article_dateline>2020-06-09T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-09T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://icra.cc.gatech.edu/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="636082">  <title><![CDATA[Dellaert Awarded IEEE ICRA Milestone Award]]></title>  <uid>34773</uid>  <body><![CDATA[<p><strong>Frank Dellaert</strong>, a professor in the&nbsp;<a href="https://ic.gatech.edu/">School of Interactive Computing</a>, and affiliated with the&nbsp;<a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a>&nbsp;and&nbsp;<a href="https://gvu.gatech.edu/">GVU Center</a>, has been honored with the IEEE ICRA Milestone Award at the&nbsp;<a href="https://www.icra2020.org/">2020 IEEE International Conference on Robotics and Automation (ICRA)</a>.</p><p>The award recognizes the most influential ICRA paper published between 1998 and 2002, and selected&nbsp;<a href="https://www.ri.cmu.edu/pub_files/pub1/dellaert_frank_1999_2/dellaert_frank_1999_2.pdf"><em>Monte Carlo Localization for Mobile Robots</em></a>&nbsp;as this year&rsquo;s recipient. 
Dellaert conducted this work during his Ph.D. studies at Carnegie Mellon University with&nbsp;<strong>Dieter Fox, Wolfram Burgard</strong>, and&nbsp;<strong>Sebastian Thrun</strong>.</p><p>&ldquo;It is a great honor to be recognized, but receiving a &rsquo;20 years on&rsquo; milestone award also makes you feel old!&rdquo; said Dellaert.</p><p>The paper was accepted to ICRA in 1999 and introduced the Monte Carlo Localization (MCL) method, also known as particle filter localization, which represents a robot&rsquo;s belief about its position as a set of samples randomly drawn from the underlying probability density. This method is faster, more accurate, and less memory-intensive than earlier grid-based methods and allows a robot to be localized without knowledge of its starting location.</p><p>MCL is simple to apply to the robotics domain, leading to its popularity. It is now taught in every robotics 101 class around the world. Many mobile robots, including commercial efforts, rely on MCL for localizing.</p><p>&ldquo;Simplicity is key for acceptance and you cannot predict which of your research will have the most impact. This paper was a result of me procrastinating on my Ph.D. thesis which is a paper almost nobody read. It is an enormous honor that MCL has made a lasting impact on our field,&rdquo; said Dellaert.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591715351</created>  <gmt_created>2020-06-09 15:09:11</gmt_created>  <changed>1591715351</changed>  <gmt_changed>2020-06-09 15:09:11</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The award recognizes the most influential ICRA paper published between 1998 and 2002, and selected Monte Carlo Localization for Mobile Robots as this year’s recipient. ]]></teaser>  <type>news</type>  <sentence><![CDATA[The award recognizes the most influential ICRA paper published between 1998 and 2002, and selected Monte Carlo Localization for Mobile Robots as this year’s recipient. 
]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-06-09T00:00:00-04:00</dateline>  <iso_dateline>2020-06-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-06-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>636081</item>      </media>  <hg_media>          <item>          <nid>636081</nid>          <type>image</type>          <title><![CDATA[Frank Dellaert, a professor in the School of Interactive Computing, and affiliated with the Machine Learning Center at Georgia Tech (ML@GT) and GVU Center, has been honored with the IEEE ICRA Milestone Award at the 2020 IEEE International Conference on Ro]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[frank-dellaert2.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/frank-dellaert2.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/frank-dellaert2.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/frank-dellaert2.jpeg?itok=mSPZ9FYL]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1591715211</created>          <gmt_created>2020-06-09 15:06:51</gmt_created>          <changed>1591715211</changed>          <gmt_changed>2020-06-09 15:06:51</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group 
id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636040">  <title><![CDATA[Omar Asensio Receives NSF CAREER Award]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591645790</created>  <gmt_created>2020-06-08 19:49:50</gmt_created>  <changed>1591645790</changed>  <gmt_changed>2020-06-08 19:49:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[building renovations]]></publication>  <article_dateline>2020-06-08T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-08T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-08T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://spp.gatech.edu/news/item/635726/omar-asensio-receives-career-award]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group 
id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="635991">  <title><![CDATA[ML@GT to Present Diverse Research Interests at CVPR 2020]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591378777</created>  <gmt_created>2020-06-05 17:39:37</gmt_created>  <changed>1591378777</changed>  <gmt_changed>2020-06-05 17:39:37</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[ML@GT to Present Diverse Research Interests at CVPR 2020]]></publication>  <article_dateline>2020-06-05T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-05T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-05T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/3de5hyl]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="635990">  <title><![CDATA[A Personal Reflection From Dean Charles Isbell  ]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  
<status>1</status>  <created>1591378566</created>  <gmt_created>2020-06-05 17:36:06</gmt_created>  <changed>1591378566</changed>  <gmt_changed>2020-06-05 17:36:06</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[supply chain issues]]></publication>  <article_dateline>2020-06-05T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-05T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-05T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://b.gatech.edu/3gFlH50]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="635989">  <title><![CDATA[Predicting Hate Crimes Targeting Asian Americans Amid Covid-19 Outbreak]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591378522</created>  <gmt_created>2020-06-05 17:35:22</gmt_created>  <changed>1591378522</changed>  <gmt_changed>2020-06-05 17:35:22</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Predicting Hate Crimes Targeting Asian Americans Amid Covid-19 Outbreak]]></publication>  <article_dateline>2020-06-05T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-05T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-05T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/635858/predicting-hate-crimes-targeting-asian-americans-amid-covid-19-outbreak]]></article_url>  <media>      </media>  <hg_media>      </hg_media> 
 <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="635988">  <title><![CDATA[Meet ML@GT: Samarth Brahmbhatt Studies How Humans Grasp Objects, Enabling Robots to Do the Same]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591378455</created>  <gmt_created>2020-06-05 17:34:15</gmt_created>  <changed>1591378455</changed>  <gmt_changed>2020-06-05 17:34:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Tokyo Smart City studio]]></publication>  <article_dateline>2020-06-05T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-05T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-05T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2z0qsoS]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="635987">  <title><![CDATA[Team Using Deep Learning to Forecast Pandemic in the U.S.]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591378375</created>  <gmt_created>2020-06-05 17:32:55</gmt_created>  
<changed>1591378375</changed>  <gmt_changed>2020-06-05 17:32:55</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Team Using Deep Learning to Forecast Pandemic in the U.S.]]></publication>  <article_dateline>2020-06-05T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-05T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-05T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/635849/forecasting-covid-19-pandemic-united-states]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="635397">  <title><![CDATA[NSF Grant to Fund Georgia Tech Research into Psychological Impact of COVID-19]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Arguably the most visible of all responses to the COVID-19 pandemic this year have been guidelines or imposed restrictions commonly referred to as &ldquo;social distancing.&rdquo; Less physical contact, the thinking goes, means a lowered risk of viral transmission.</p><p>Like the virus itself, however, stress and anxiety stemming from overconsumption of news or other media can spread through social networks. As the mental health fallout becomes clearer, are some similar social media distancing recommendations needed to stem the flow through the online world?</p><p>A multidisciplinary team of researchers at Georgia Tech, Washington University in St. 
Louis, and the University of Wisconsin-Madison argue that these mental health implications of the pandemic are equally important, and <a href="https://www.nsf.gov/awardsearch/showAward?AWD_ID=2027689">a new grant from the National Science Foundation (NSF) has recently funded new research to that effect</a>.</p><p>&ldquo;It&rsquo;s not just the fear and anxiety that I might get infected or I might infect or know someone who is infected,&rdquo; said <strong>Munmun De Choudhury</strong>, an associate professor in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu/">School of Interactive Computing</a> and the co-principal investigator on the project. &ldquo;It&rsquo;s all of these things around it that are furthering the psychological impact. It&rsquo;s very different from other kinds of illnesses or pandemics because of the uncertainty of the crisis. We simply don&rsquo;t know how long we are into it.&rdquo;</p><p>The grant is funded by the NSF&rsquo;s Rapid Response Project program, which is intended for research that addresses an immediate need within society. 
It has provided $200,000 toward the yearlong project.</p><p>The research will combine investigations in two separate environments: the online world, where news, personal posts, videos, and other media are shared rampantly across social networks, and the offline real world, where the epidemiological data about the spread of the virus or economic data about the financial fallout can be measured.</p><p>For the former, they will use social media data from various popular social platforms &ndash; Twitter, Reddit, and YouTube &ndash; to measure the spread of information and how consumers of it express themselves in terms of anxiety or fear, or what they are saying about their own psychological wellbeing.</p><p>&ldquo;How often are people expressing anger or fear or blaming someone through their posts?&rdquo; said <strong>Srijan Kumar</strong>, an assistant professor in Georgia Tech&rsquo;s <a href="http://cse.gatech.edu/">School of Computational Science and Engineering</a> and the other co-principal investigator. &ldquo;We&rsquo;ll develop new classifiers using natural language processing that will help us classify social posts into two categories: either anxiety-inducing or anxiety itself.&rdquo;</p><p>This is new territory, according to De Choudhury. Although there have been other pandemics such as the 1918 influenza epidemic, none of this magnitude have taken place during the digital/social age. And while social media provides an important mechanism for staying informed and remaining in contact with friends and loved ones during the difficult social distancing measures, overexposure could result in negative mental health consequences.</p><p>&ldquo;There is probably a sweet spot,&rdquo; De Choudhury said. 
&ldquo;Just like we need physical distancing in the real world, we probably need to practice distancing from social media or online information to an extent to avoid consuming too much anxiety-inducing media, while also staying informed.</p><p>&ldquo;If I say something, it doesn&rsquo;t just affect me. It affects all the people who read my posts. If they share it or if they post something, then it affects all of their social neighbors. It can be an outward ripple that affects people. We want to measure that, how they spread through social networks.&rdquo;</p><p>They&rsquo;ll compare that data with the other element: the offline world. Currently, people in New York City are likely more stressed and anxious in a different way than people in Georgia. New York has been the epicenter of the viral outbreak in the United States, meaning that much of the anxiety locally stems from the virus itself.</p><p><em>Will I contract the virus? Will someone I know contract the virus? Can I go to the store for groceries? How much disinfecting is required when I return home?</em></p><p>And then, you can tease out that geographical data. How are higher-income individuals stressed in comparison to lower-income? What about differences along racial lines? Data has shown higher mortality rates in African-Americans, for example, which leads to different fears than those in other communities.</p><p>In U.S. cities where there is also sufficient social media data, they will examine this offline data to see rates of infection, fatalities, when shelter-in-place was imposed, and more.</p><p>The final piece will be what they will do with this information. The goal is to create tools for social platforms to provide coping techniques or guidelines for use.</p><p>&ldquo;Maybe that might include encouraging you to limit the amount of time you spend on social media,&rdquo; Kumar said. &ldquo;Or, maybe you step out and do something with family members. Some kind of physical activity. 
Then we can begin to examine how people react to these messages. Do we see that their anxiety levels are coming down, or not?&rdquo;</p><p>&ldquo;In this time, we have a very unique lens to study this pandemic in a whole new light as opposed to other events of a global scale,&rdquo; De Choudhury said. &ldquo;There is no guarantee this won&rsquo;t come back. And even if it doesn&rsquo;t, something else will. Being able to have these tools built and available will better prepare us for the future.&rdquo;</p><p>For more coverage of Georgia Tech&rsquo;s response to the coronavirus pandemic, please visit our <a href="https://helpingstories.gatech.edu/">Responding to COVID-19 page</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1589560810</created>  <gmt_created>2020-05-15 16:40:10</gmt_created>  <changed>1591276187</changed>  <gmt_changed>2020-06-04 13:09:47</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A multidisciplinary team of researchers has received a grant from the NSF to study the mental health outcomes of COVID-19 through examination of social media activity and geographic epidemiological data.]]></teaser>  <type>news</type>  <sentence><![CDATA[A multidisciplinary team of researchers has received a grant from the NSF to study the mental health outcomes of COVID-19 through examination of social media activity and geographic epidemiological data.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-05-15T00:00:00-04:00</dateline>  <iso_dateline>2020-05-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-05-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  
<boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>635396</item>      </media>  <hg_media>          <item>          <nid>635396</nid>          <type>image</type>          <title><![CDATA[Munmun De Choudhury and Srijan Kumar]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[NSF RAPID GRANT - Munmun and Srijan.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/NSF%20RAPID%20GRANT%20-%20Munmun%20and%20Srijan.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/NSF%20RAPID%20GRANT%20-%20Munmun%20and%20Srijan.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/NSF%2520RAPID%2520GRANT%2520-%2520Munmun%2520and%2520Srijan.png?itok=xn0eHXsB]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Munmun De Choudhury and Srijan Kumar]]></image_alt>                    <created>1589560736</created>          <gmt_created>2020-05-15 16:38:56</gmt_created>          <changed>1589560736</changed>          <gmt_changed>2020-05-15 16:38:56</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="184821"><![CDATA[cc-research; ic-hcc; ic-ai-ml; COVID-19]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and 
Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="635847">  <title><![CDATA[Researchers Use Machine Learning to Fight Covid-19 Disinformation]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591018526</created>  <gmt_created>2020-06-01 13:35:26</gmt_created>  <changed>1591106495</changed>  <gmt_changed>2020-06-02 14:01:35</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Researchers Use Machine Learning to Fight Covid-19 Disinformation]]></publication>  <article_dateline>2020-06-01T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-06-01T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-06-01T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/635700/researchers-use-machine-learning-fight-covid-19-disinformation]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="635593">  <title><![CDATA[IC Students Support Innovation in India through 'MakerGhat']]></title>  <uid>33939</uid>  <body><![CDATA[<p><strong>Azra Ismail</strong> was working with health workers in Delhi, India, when she had a realization. 
Many workers in the community, she saw, had an intense desire for societal impact &ndash; and the ideas to go with it &ndash; but lacked the resources necessary to fully realize their innovations.</p><p>&ldquo;The experience that these health workers had in these communities provided unique perspectives and ideas that produced the kinds of ideas that could be relevant,&rdquo; said Ismail, now a Ph.D. student in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu">School of Interactive Computing</a>.</p><p>&ldquo;But because they were the lowest rung on the health infrastructure and were low income or low social class, those ideas weren&rsquo;t recognized and represented.&rdquo;</p><p>Around the same time, <a href="http://cc.gatech.edu">College of Computing</a> alumnus <strong>Aditya Vishwanath</strong>, now a doctoral student at Stanford University, had a similar realization. He was working with Asha Mumbai, a non-profit in a low-resourced slum in India&rsquo;s biggest city, using virtual reality to see how students appropriated and made sense of it. Like Ismail, he recognized a group of students who had unique viewpoints and drive, but too few resources to realize them.</p><p>Knowing how important it is to support innovation from those who understand the specific needs of a community, the two of them founded <a href="https://makerghat.org/space">MakerGhat</a>, a non-profit with the mission to take ideas from concept to creation and application where they are needed most: the communities they serve.</p><p>Situated in an impoverished neighborhood in Mumbai, MakerGhat is a community lab that local students, young and old, can join to receive education and resources to put their ideas into practice. Makers join through subscription or scholarships if they are unable to afford membership. 
In exchange, they receive access to support including an electronics room, a 3D printing and PC workstation, a science lab, a woodworking shop, and a design and workshop studio.</p><p>The space is intentionally unsophisticated. Enter the space, and you may find a mish-mash of supplies and painting on the walls, a far cry from the labs of the nearby Indian Institute of Technology-Bombay, one of the top technological universities in India.</p><p>&ldquo;We want people to be encouraged to try things and not afraid to break it,&rdquo; Ismail said. &ldquo;We don&rsquo;t want something that people are afraid to use.&rdquo;</p><p>If a maker can&rsquo;t find what they are looking for, they can turn to connections within the community to meet the need. Heavier equipment, for example, might require a trip to the local smith for welding.</p><p>&ldquo;Students coming in have family members in these other industries, so it sets up an informal infrastructure where the students know where to go for a specific need,&rdquo; Ismail said.</p><p>The model has resulted in a number of tangible outputs. In Summer 2019, a handful of interns from Georgia Tech, Stanford, and Smith College were able to take advantage of the Denning Global Engagement Seed Fund to fund their travel to India. Interns were there not just to teach or run the lab, but to co-learn with locals. Collaborations between the technical expertise of the interns and the locally-significant knowledge of the makers resulted in a handful of innovations that addressed local needs.</p><p>One collaboration resulted in a system that could compact plastic bottles to assist in a waste management challenge in Mumbai. Workers who collect waste locally and transport it to recycling plants to sell to companies or government institutions face challenges transporting plastic bottles, the most common waste item, which take up a lot of space.</p><p>Another created a community mapping platform to help identify local resources. 
Makers and interns went into the community and conducted surveys to find needs specific to different geographies.</p><p>&ldquo;A big part of this is engaging with the community to identify needs, current status quos, and how to approach the challenge,&rdquo; Ismail said. &ldquo;This happens in the schools too. What are the gaps that need to be addressed, and how can we help address them?&rdquo;</p><p>MakerGhat serves about 300 students weekly, ranging from young to old &ndash; it is open to any age or background. Many come from STEM fields, but others may be interested in math or art or fashion design.</p><p>&ldquo;It&rsquo;s a melting pot,&rdquo; Ismail said.</p><p>The goal is to turn MakerGhat into an incubator. As the first class of students graduates from the program, they will move on to other sources of education or work. Ismail said that she and her collaborators &ndash; who include Vishwanath, a team programmer, local leaders in finances and project resources, and a group of 10 or so volunteers &ndash; want to help build companies from the ideas and innovations that formed at MakerGhat.</p><p>&ldquo;The mission is to actually transform these students and community members into entrepreneurs,&rdquo; Ismail said. &ldquo;We want to take these creations to the next level and help them scale beyond their own community.&rdquo;</p><p>That might mean launching new MakerGhat centers elsewhere. The aim is to open-source the original model so that other communities can replicate it &ndash; in India and beyond. While it may play out differently in each location depending on the community&rsquo;s needs, the organizational structure would be the same.</p><p>&ldquo;There&rsquo;s a misconception that great innovation only comes from these big tech companies or big universities,&rdquo; Ismail said. &ldquo;But we want to challenge that narrative. 
Many of the great ideas that can make significant impacts on society come from the people in these communities of need themselves.&rdquo;</p><p>Other members of the Georgia Tech community have contributed to the project. <strong>Neha Kumar</strong>, an assistant professor jointly appointed in the School of Interactive Computing and the Sam Nunn School of International Affairs, is an advisor. Students involved in a Makers-in-Residence program last summer were <strong>Ritesh Bhatt</strong>, <strong>Solum Onwuchekwa</strong>, and <strong>Josiah Mangiameli</strong>. <strong>Vishal Sharma</strong>, an incoming IC Ph.D. student, was also involved.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1590175035</created>  <gmt_created>2020-05-22 19:17:15</gmt_created>  <changed>1590175035</changed>  <gmt_changed>2020-05-22 19:17:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[MakerGhat is a local makerspace in India designed to cater specifically to low-resourced innovators.]]></teaser>  <type>news</type>  <sentence><![CDATA[MakerGhat is a local makerspace in India designed to cater specifically to low-resourced innovators.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-05-22T00:00:00-04:00</dateline>  <iso_dateline>2020-05-22T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-05-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>635592</item>      </media>  <hg_media>          <item>          <nid>635592</nid>          <type>image</type>          <title><![CDATA[MakerGhat]]></title>          <body><![CDATA[]]></body>                      
<image_name><![CDATA[MakerGhat.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/MakerGhat.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/MakerGhat.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/MakerGhat.jpeg?itok=YMZIXw_e]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Makers paint walls at MakerGhat in India.]]></image_alt>                    <created>1590174252</created>          <gmt_created>2020-05-22 19:04:12</gmt_created>          <changed>1590174252</changed>          <gmt_changed>2020-05-22 19:04:12</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.cc.gatech.edu/content/researchers-work-kids-mumbai-examine-classroom-potential-virtual-reality]]></url>        <title><![CDATA[Researchers Work with Kids in Mumbai to Examine Classroom Potential of Virtual Reality]]></title>      </link>          <link>        <url><![CDATA[https://www.cc.gatech.edu/news/605000/vr-taking-students-where-once-only-ms-frizzle-and-magic-school-bus-could]]></url>        <title><![CDATA[VR Taking Students Where Once Only Ms. 
Frizzle and the Magic School Bus Could]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="184890"><![CDATA[cc-research; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="634312">  <title><![CDATA[Machine Learning Technique Helps Wearable Devices Get Better at Diagnosing Sleep Disorders and Quality]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Getting diagnosed with a sleep disorder or assessing quality of sleep is an often expensive and tricky proposition, involving sleep clinics where patients are hooked up to sensors and wires for monitoring.</p><p>Wearable devices, such as the Fitbit and Apple Watch, offer less intrusive and more cost-effective sleep monitoring, but the tradeoff can be inaccurate or imprecise sleep data.</p><p>Researchers at the Georgia Institute of Technology are working to combine the accuracy of sleep clinics with the convenience of wearable computing by developing machine learning models, or smart algorithms, that provide better sleep measurement data as well as considerably faster, more energy-efficient software.&nbsp;</p><p>The team is focusing on ambient electrical noise&nbsp;that is emitted by devices, is often inaudible, and can interfere with sleep sensors on a wearable gadget. 
Leave the TV on at night, and the electrical signal &ndash; not the infomercial in the background &ndash; might mess with your sleep tracker.</p><p><a href="https://cse.gatech.edu/news/616715/new-deep-learning-approach-improves-access-sleep-diagnostic-testing">[Related News:&nbsp;New Deep Learning Approach Improves Access to Sleep Diagnostic Testing]</a></p><p>These additional electrical signals are problematic for wearable devices that typically have only one sensor to measure a single biometric data point, normally heart rate. A device picking up signals from ambient electrical noise skews the data and leads to potentially misleading results.&nbsp;</p><p>&ldquo;We are building a new process to help train [machine learning] models to be used for the home environment and help address this and other issues around sleep,&rdquo; said&nbsp;<strong>Scott Freitas</strong>, a second-year machine learning Ph.D. student and co-lead author of a newly published&nbsp;<a href="https://arxiv.org/pdf/2001.11363.pdf" target="_blank">paper</a>.</p><p>The team employed adversarial training in tandem with spectral regularization, a technique that makes neural networks more robust to electrical signals in the input data. This means that the system can accurately assess sleep stages even when an EEG signal is corrupted by additional signals, like those from a TV or washing machine.</p><p>Using machine-learning methods such as sparsity regularization, the new model can also cut the time it takes to gather and analyze data, as well as increase the energy efficiency of the wearable device.</p><p>The researchers are testing with a product worn on the head but hope to also integrate the model into smartwatches and bracelets. Results would then be transmitted to a person&rsquo;s doctor to analyze and provide a diagnosis. 
This could result in fewer visits to the doctor, reducing the cost, time, and stress involved with receiving a sleep disorder diagnosis.</p><p>Another issue that the researchers are looking at is reducing the&nbsp;number&nbsp;of sensors needed to accurately track sleep.&nbsp;</p><p>&ldquo;When someone visits a sleep clinic, they are hooked up to all kinds of monitors and wires to gather data ranging from brain activity on EEG&rsquo;s, heart rate, and more. Wearable tech only monitors heart rate with one sensor. The one sensor is more ideal and comfortable, so we are looking for a way to get more data without adding more wires or sensors,&rdquo; said&nbsp;<strong>Rahul Duggal</strong>, a second-year computer science Ph.D. student and co-lead author.</p><p>The team&rsquo;s work is published in the paper&nbsp;<em>REST: Robust and Efficient Neural Networks for Sleep Monitoring in the Wild</em>,&nbsp;accepted to the&nbsp;<a href="https://www2020.thewebconf.org/" target="_blank">International World Wide Web Conference&nbsp;(WWW)</a>, scheduled to take place April 20 through 24 in Taipei, Taiwan.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1586800028</created>  <gmt_created>2020-04-13 17:47:08</gmt_created>  <changed>1590067825</changed>  <gmt_changed>2020-05-21 13:30:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[ML@GT researchers are improving the accuracy and efficiency of devices used to track sleep using machine learning techniques.]]></teaser>  <type>news</type>  <sentence><![CDATA[ML@GT researchers are improving the accuracy and efficiency of devices used to track sleep using machine learning techniques.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-15T00:00:00-04:00</dateline>  <iso_dateline>2020-04-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email> 
 <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>634311</item>      </media>  <hg_media>          <item>          <nid>634311</nid>          <type>image</type>          <title><![CDATA[ML@GT researchers are improving the accuracy and efficiency of devices used to track sleeping using machine learning techniques.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[kinga-cichewicz-5NzOfwXoH88-unsplash.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/kinga-cichewicz-5NzOfwXoH88-unsplash.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/kinga-cichewicz-5NzOfwXoH88-unsplash.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/kinga-cichewicz-5NzOfwXoH88-unsplash.jpg?itok=en4Z4sps]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[woman sleeping]]></image_alt>                    <created>1586799743</created>          <gmt_created>2020-04-13 17:42:23</gmt_created>          <changed>1586799743</changed>          <gmt_changed>2020-04-13 17:42:23</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="184463"><![CDATA[sleep tracking]]></keyword>          <keyword tid="9167"><![CDATA[machine 
learning]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="365"><![CDATA[Research]]></keyword>      </keywords>  <core_research_areas>          <term tid="39451"><![CDATA[Electronics and Nanotechnology]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="635528">  <title><![CDATA[Working towards explainable and data-efficient machine learning models via symbolic reasoning]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1589989289</created>  <gmt_created>2020-05-20 15:41:29</gmt_created>  <changed>1589989289</changed>  <gmt_changed>2020-05-20 15:41:29</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[anti-black]]></publication>  <article_dateline>2020-05-20T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-05-20T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-05-20T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://aihub.org/2020/05/20/working-towards-explainable-and-data-efficient-machine-learning-models-via-symbolic-reasoning/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634175">  <title><![CDATA[Four Machine Learning Faculty Members Earn Promotions and Tenure]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Four faculty members at the <a 
href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech</a> have received promotions or been granted tenure.</p><p><strong>Jake Abernethy</strong> has been promoted to associate professor in the <a href="https://scs.gatech.edu/">School of Computer Science</a> and granted tenure. Abernethy&rsquo;s research focus is machine learning, where he enjoys discovering connections between optimization, statistics, and economics.</p><p>In 2011, he completed his Ph.D. at the University of California, Berkeley, before becoming a Simons postdoctoral fellow for the following two years. After the water crisis in Flint, Mich., Abernethy worked on <a href="https://www.cc.gatech.edu/~jabernethy9/flint/">detecting lead contamination and infrastructure remediation</a>. Prior to studying and teaching machine learning, Abernethy performed comedy and juggling shows, opening for Sinbad and Dave Chappelle.</p><p><strong>Munmun De Choudhury</strong> has been promoted to associate professor in the <a href="https://ic.gatech.edu/">School of Interactive Computing</a> and granted tenure. De Choudhury is also affiliated with the <a href="http://gvu.gatech.edu/">GVU</a> Center and <a href="http://ipat.gatech.edu/">Institute for People and Technology (IPaT)</a> and leads the <a href="http://socweb.cc.gatech.edu/">Social Dynamics and Wellbeing Lab (SocWeb Lab)</a>. De Choudhury studies problems at the intersection of computer science and social media, building computational methods and artifacts to help understand human behaviors and psychological states and how they manifest online.</p><p>Prior to joining Georgia Tech in 2014, De Choudhury was a postdoctoral researcher in the nexus group at Microsoft Research, Redmond. In 2011, she received her Ph.D. from Arizona State University, Tempe. 
After graduate school, De Choudhury spent time at Rutgers University and was a faculty associate with the Berkman Center for Internet and Society at Harvard University.</p><p><strong>Yajun Mei</strong> has been promoted to professor in the <a href="https://www.isye.gatech.edu/">H. Milton Stewart School of Industrial and Systems Engineering</a>. Mei&#39;s research interests include change-point problems and sequential analysis in mathematical statistics, as well as sensor networks and information theory in engineering. Mei also examines longitudinal data analysis, random effects models, and clinical trials in biostatistics.</p><p>Mei received his Ph.D. in mathematics from the California Institute of Technology in 2003. He has also worked as a postdoc in biostatistics at the Fred Hutchinson Cancer Research Center. He received a Best Paper Award at FUSION in 2008, the prestigious Abraham Wald Prize in Sequential Analysis in 2009, and the National Science Foundation (NSF) CAREER Award in 2010.</p><p><strong>Alex Endert </strong>has been promoted to associate professor and granted tenure in the School of Interactive Computing. Endert directs the <a href="https://gtvalab.github.io/">Visual Analytics Lab</a>, where he and his students apply fundamental research to&nbsp;domains including text analysis, intelligence analysis, cybersecurity, and decision-making, and explore novel user interaction techniques for visual analytics.</p><p>Endert earned his Ph.D. from Virginia Tech in 2012, and in 2013 his work on Semantic Interaction was awarded the IEEE VGTC VPG Pioneers Group Doctoral Dissertation Award and the Virginia Tech Computer Science Best Dissertation Award. In 2018, Endert received the NSF CAREER Award.</p><p>Editor&rsquo;s Note:&nbsp;<strong>Molei Tao</strong>&nbsp;has been promoted to associate professor with tenure in the School of Mathematics. 
Tao is an applied and computational mathematician, designing algorithms for faster and more accurate computations and developing mathematical tools to analyze and design engineering systems or answer scientific questions.</p><p>He earned his Ph.D. in control and dynamical systems with a minor in physics from the California Institute of Technology, where he also worked as a postdoctoral researcher. He received the W.P. Carey Ph.D. Prize in Applied Mathematics in 2011 and an NSF CAREER Award in 2019.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1586367437</created>  <gmt_created>2020-04-08 17:37:17</gmt_created>  <changed>1589223454</changed>  <gmt_changed>2020-05-11 18:57:34</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Four faculty members at the Machine Learning Center at Georgia Tech have received promotions or been granted tenure.]]></teaser>  <type>news</type>  <sentence><![CDATA[Four faculty members at the Machine Learning Center at Georgia Tech have received promotions or been granted tenure.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-08T00:00:00-04:00</dateline>  <iso_dateline>2020-04-08T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-08 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>634173</item>      </media>  <hg_media>          <item>          <nid>634173</nid>          <type>image</type>          <title><![CDATA[Four ML@GT faculty members earn promotions and tenure]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Spring 2020 ML Promotions.png]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/Spring%202020%20ML%20Promotions.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Spring%202020%20ML%20Promotions.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Spring%25202020%2520ML%2520Promotions.png?itok=G89xMzOo]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Congratulations Alex, Jake, Munmun, and Yajun]]></image_alt>                    <created>1586367276</created>          <gmt_created>2020-04-08 17:34:36</gmt_created>          <changed>1586367276</changed>          <gmt_changed>2020-04-08 17:34:36</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="635210">  <title><![CDATA[Machine Learning Method Amplifies &#039;Voice of the People&#039; to Model Workplace Culture]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1588943271</created>  <gmt_created>2020-05-08 13:07:51</gmt_created>  <changed>1588943271</changed>  <gmt_changed>2020-05-08 13:07:51</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  
<type>hgTechInTheNews</type>  <publication><![CDATA[Machine Learning Method Amplifies &#039;Voice of the People&#039; to Model Workplace Culture]]></publication>  <article_dateline>2020-05-08T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-05-08T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-05-08T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/634899/machine-learning-method-amplifies-voice-people-model-workplace-culture]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="589608"><![CDATA[Machine Learning]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="635209">  <title><![CDATA[The Student Center Celebrates 50 Years]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1588943226</created>  <gmt_created>2020-05-08 13:07:06</gmt_created>  <changed>1588943226</changed>  <gmt_changed>2020-05-08 13:07:06</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Professional Master&#039;s in public safety and occupational health]]></publication>  <article_dateline>2020-05-08T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-05-08T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-05-08T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.ece.gatech.edu/news/634992/student-center-celebrates-50-years]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="589608"><![CDATA[Machine Learning]]></group>      </groups>  <categories>      </categories>  <keywords>   
   </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="635208">  <title><![CDATA[Social Media and Wellbeing: Does Bias in Self-Reported Data Impact Research?]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Along with the development of each new technological platform comes a series of questions designed to understand its ultimate impact on users&rsquo; wellbeing or performance. It&rsquo;s like clockwork.</p><p><em>Does watching too much television rot your child&rsquo;s brain? How much is too much when it comes to video games? Is our time spent on social media impacting our mental health?</em></p><p>These are all important questions, but how they are asked matters to the ultimate conclusions we can draw. It is well-established that the most commonly used methods in this area of research &ndash; user self-reports and survey questions &ndash; are prone to error. Now, new research from collaborators at Georgia Tech, Facebook, and the University of Michigan has shed light on the nature of that error &ndash; that is to say, whether users over- or underestimate their use, which users and which questions are more prone to error, and more.</p><p>Error in the data, said <a href="http://ic.gatech.edu">School of Interactive Computing</a> Ph.D. student&nbsp;<strong>Sindhu Ernala</strong>, can impact the inferences drawn from the data itself.</p><p>&ldquo;We know survey questions have several well-documented biases,&rdquo; Ernala said. &ldquo;People may not remember correctly. They can&rsquo;t keep up with their time. They remember recent things more accurately than those further in the past. All of this matters because error in measurement might impact the downstream inferences we make. 
Accurate assessments of social media use is critical because of the everyday impact it has on people&rsquo;s lives.&rdquo;</p><p>Indeed, Ernala and her collaborators found that these biases held up in many surveys. In a paper accepted to the <a href="http://chi.gatech.edu">2020 ACM Conference on Human Factors in Computing Systems</a> (CHI), they picked 10 of the most common survey questions in prior literature that investigate time spent on Facebook. The questions were asked in a variety of ways: open-ended or multiple choice, the frequency of visits or the total time spent. They then surveyed 50,000 random users in 15 countries around the world with these 10 questions.</p><p>With the self-reported data in hand, they compared it to the actual server logs at Facebook to see how it stacked up. Interestingly, people most often overestimated the time they spent on the platform and underestimated the number of times they visited. Specifically, in the 18-24 demographic, a common age range for research done at universities, there was even more error in self-reports.</p><p>&ldquo;This is important, because a lot of our research is done with these age samples,&rdquo; Ernala said.</p><p>With this information in mind, the researchers made a handful of recommendations to improve the data and, thus, the research built on it:</p><ol><li>As a researcher, if you are investigating time spent, consider using time tracking applications as an alternative to self-report time spent measures. These applications include things like Apple&rsquo;s screen time feature or Facebook&rsquo;s &ldquo;Your Time on Facebook.&rdquo;<br />&nbsp;</li><li>If researchers want to use surveys, which often makes sense, consider using the phrasing with the lowest error or multiple-choice questions.</li></ol><p>The researchers caution against using self-reported time spent directly, recommending instead that such reports be interpreted as noisy estimates of where someone falls on a distribution. 
More important when determining wellbeing outcomes is&nbsp;<em>how</em>&nbsp;users actually spend their time on the platform.</p><p>&ldquo;Social platforms change and user habits change over time,&rdquo; Ernala said. &ldquo;The questions now might not be the best questions five or 10 years from now. This is fluid, and we need to continue to look at this to make sure our past and future research is well-informed.&rdquo;</p><p>She and her collaborators hope to contribute positively to this ongoing process by providing some validated measures that can be used across studies, while understanding that these methods may change over time as user habits transform.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1588926987</created>  <gmt_created>2020-05-08 08:36:27</gmt_created>  <changed>1588926987</changed>  <gmt_changed>2020-05-08 08:36:27</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Error in the data, said School of Interactive Computing Ph.D. student Sindhu Ernala, can impact the inferences drawn from the data itself.]]></teaser>  <type>news</type>  <sentence><![CDATA[Error in the data, said School of Interactive Computing Ph.D. 
student Sindhu Ernala, can impact the inferences drawn from the data itself.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-05-08T00:00:00-04:00</dateline>  <iso_dateline>2020-05-08T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-05-08 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>624519</item>      </media>  <hg_media>          <item>          <nid>624519</nid>          <type>image</type>          <title><![CDATA[Social Media Logos]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Social Media logos.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Social%20Media%20logos.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Social%20Media%20logos.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Social%2520Media%2520logos.jpg?itok=9S-0Fb6s]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A keyboard featuring different social media logos]]></image_alt>                    <created>1565805908</created>          <gmt_created>2019-08-14 18:05:08</gmt_created>          <changed>1565805908</changed>          <gmt_changed>2019-08-14 18:05:08</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://chi.gatech.edu]]></url>        <title><![CDATA[CHI 2020 at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of 
Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182508"><![CDATA[cc-research; ic-hcc; ic-social-computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="634980">  <title><![CDATA[Interactive Tool Helps People See Why Staying Home Matters During a Pandemic]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1588598438</created>  <gmt_created>2020-05-04 13:20:38</gmt_created>  <changed>1588598438</changed>  <gmt_changed>2020-05-04 13:20:38</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Professional Master&#039;s in public safety and occupational health]]></publication>  <article_dateline>2020-05-04T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-05-04T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-05-04T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.scs.gatech.edu/news/634615/interactive-tool-helps-people-see-why-staying-home-matters-during-pandemic]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      
</news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634979">  <title><![CDATA[Three Faculty Members Promoted in School of Computer Science]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1588598381</created>  <gmt_created>2020-05-04 13:19:41</gmt_created>  <changed>1588598381</changed>  <gmt_changed>2020-05-04 13:19:41</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Three Faculty Members Promoted in School of Computer Science]]></publication>  <article_dateline>2020-05-04T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-05-04T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-05-04T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.scs.gatech.edu/news/634918/three-faculty-members-promoted-school-computer-science]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634978">  <title><![CDATA[SCS Ph.D. Student is at the Forefront of Emerging Memory Systems Research]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1588598300</created>  <gmt_created>2020-05-04 13:18:20</gmt_created>  <changed>1588598300</changed>  <gmt_changed>2020-05-04 13:18:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[SCS Ph.D. 
Student is at the Forefront of Emerging Memory Systems Research]]></publication>  <article_dateline>2020-05-04T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-05-04T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-05-04T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.scs.gatech.edu/news/634916/scs-phd-student-forefront-emerging-memory-systems-research]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634889">  <title><![CDATA[Topliff, Truong Tapped for 2020 NDSEG Fellowships]]></title>  <uid>34773</uid>  <body><![CDATA[<p><strong>Charles Topliff</strong> and <strong>Joanne Truong</strong> have been named as recipients of the 2020 <a href="https://ndseg.asee.org/ndseg_fellows">National Defense Science and Engineering Graduate (NDSEG) Fellowships</a>. They are both Ph.D. students in the Georgia Tech <a href="https://www.ece.gatech.edu/">School of Electrical and Computer Engineering (ECE)</a>.&nbsp;</p><p>The NDSEG Fellowships are funded by the U.S. Department of Defense (DoD), through the Office of the Under Secretary for Research and Engineering and the military services, to promote education in science and engineering disciplines relevant to the DoD mission.</p><p>Topliff is a second year ECE Ph.D. student in the <a href="http://ml.gatech.edu/">machine learning program</a>. He is co-advised by ECE Associate Professors <strong>Morris Cohen</strong> and <strong>Mark&nbsp;Davenport</strong>. Topliff&rsquo;s research is focused on&nbsp;time series forecasting using modern machine learning techniques. 
He is also interested in explainability in machine learning.&nbsp;</p><p>For many applications in time series forecasting, it isn&rsquo;t enough to just improve predictive capabilities. As an example, Topliff wants to know when solar flares can be expected to influence the power grid and what can be done to prepare. However, it would be difficult to convince anyone to make decisions without explainability. Having a strong understanding of why Topliff&rsquo;s models produce forecasts that necessitate action will provide guidance for authorities in their decision making.</p><p>Truong is a first year ECE Ph.D. student in the robotics program. She is co-advised by <strong>Sonia Chernova</strong> and <strong>Dhruv Batra</strong>, who are associate professors in the <a href="https://ic.gatech.edu/">School of Interactive Computing</a> and the <a href="http://ml.gatech.edu/">Machine Learning Center</a>.&nbsp;Truong&rsquo;s long-term research goal is to allow robots to intelligently interact with humans in complex environments to improve quality of life. She&nbsp;hopes to develop techniques to enable the transfer of robot skills learned in simulation to a real robotic system.&nbsp;This is important because working in simulation presents many benefits: it&rsquo;s often cheaper, faster, and safer than working on a real robot. 
However, it&rsquo;s non-trivial to transfer this learned knowledge onto a real robot, as many inconsistencies and differences exist between the simulation platform and the real world.&nbsp;</p><p>Truong hopes to develop generalization techniques to bridge the gap between simulation and reality so that more artificial intelligence researchers and students &ndash; even those without access to physical hardware &ndash; can contribute to robotics development, which will help accelerate robotics research.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1588259260</created>  <gmt_created>2020-04-30 15:07:40</gmt_created>  <changed>1588259810</changed>  <gmt_changed>2020-04-30 15:16:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[ECE Ph.D. students Charles Topliff and Joanne Truong have been named as recipients of the 2020 National Defense Science and Engineering Graduate (NDSEG) Fellowships. ]]></teaser>  <type>news</type>  <sentence><![CDATA[ECE Ph.D. students Charles Topliff and Joanne Truong have been named as recipients of the 2020 National Defense Science and Engineering Graduate (NDSEG) Fellowships. ]]></sentence>  <summary><![CDATA[<p>ECE Ph.D. 
students&nbsp;Charles Topliff and Joanne Truong have been named as recipients of the 2020 National Defense Science and Engineering Graduate (NDSEG) Fellowships.</p>]]></summary>  <dateline>2020-04-29T00:00:00-04:00</dateline>  <iso_dateline>2020-04-29T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-29 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jackie.nemeth@ece.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Jackie Nemeth</p><p>School of Electrical and Computer Engineering</p><p>404-894-2906</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>634873</item>      </media>  <hg_media>          <item>          <nid>634873</nid>          <type>image</type>          <title><![CDATA[Charles Topliff and Joanne Truong]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Topliff and Truong - 2020 NDSEG Fellows.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Topliff%20and%20Truong%20-%202020%20NDSEG%20Fellows.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Topliff%20and%20Truong%20-%202020%20NDSEG%20Fellows.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Topliff%2520and%2520Truong%2520-%25202020%2520NDSEG%2520Fellows.png?itok=S3cwvABu]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[photograph of Charles Topliff and Joanne Truong, 2020 NDSEG Fellows]]></image_alt>                    <created>1588197191</created>          <gmt_created>2020-04-29 21:53:11</gmt_created>          <changed>1588197191</changed>          <gmt_changed>2020-04-29 21:53:11</gmt_changed>      </item>      </hg_media>  <related>          <link>        
<url><![CDATA[http://www.ece.gatech.edu]]></url>        <title><![CDATA[School of Electrical and Computer Engineering]]></title>      </link>          <link>        <url><![CDATA[http://www.ic.gatech.edu]]></url>        <title><![CDATA[School of Interactive Computing]]></title>      </link>          <link>        <url><![CDATA[http://www.gatech.edu]]></url>        <title><![CDATA[Georgia Tech]]></title>      </link>          <link>        <url><![CDATA[https://ndseg.sysplus.com]]></url>        <title><![CDATA[National Defense Science and Engineering Graduate Fellowships]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="8862"><![CDATA[Student Research]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="144"><![CDATA[Energy]]></category>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="8862"><![CDATA[Student Research]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="144"><![CDATA[Energy]]></term>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="184694"><![CDATA[Charles Topliff]]></keyword>          <keyword tid="184511"><![CDATA[Joanne Truong]]></keyword>          <keyword tid="1808"><![CDATA[graduate students]]></keyword>          <keyword tid="184695"><![CDATA[National Defense Science and 
Engineering Graduate (NDSEG) Fellowships]]></keyword>          <keyword tid="166855"><![CDATA[School of Electrical and Computer Engineering]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="109"><![CDATA[Georgia Tech]]></keyword>          <keyword tid="180043"><![CDATA[U.S. Department of Defense]]></keyword>          <keyword tid="9167"><![CDATA[machine learning]]></keyword>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="83321"><![CDATA[Mark Davenport]]></keyword>          <keyword tid="171619"><![CDATA[Morris Cohen]]></keyword>          <keyword tid="169047"><![CDATA[Sonia Chernova]]></keyword>          <keyword tid="173615"><![CDATA[dhruv batra]]></keyword>          <keyword tid="184696"><![CDATA[time series forecasting]]></keyword>          <keyword tid="184697"><![CDATA[explainability]]></keyword>          <keyword tid="175381"><![CDATA[power grid]]></keyword>          <keyword tid="184698"><![CDATA[solar flares]]></keyword>          <keyword tid="169956"><![CDATA[robot-human interaction]]></keyword>          <keyword tid="184699"><![CDATA[robotic simulations]]></keyword>          <keyword tid="184700"><![CDATA[generalization techniques]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39451"><![CDATA[Electronics and Nanotechnology]]></term>          <term tid="39531"><![CDATA[Energy and Sustainable Infrastructure]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="634879">  <title><![CDATA[Howard Receives Georgia Tech&#039;s Outstanding Achievement in Research Innovation Award]]></title>  <uid>34773</uid>  
<summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1588251923</created>  <gmt_created>2020-04-30 13:05:23</gmt_created>  <changed>1588251923</changed>  <gmt_changed>2020-04-30 13:05:23</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Howard Receives Georgia Tech&#039;s Outstanding Achievement in Research Innovation Award]]></publication>  <article_dateline>2020-04-30T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-04-30T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-04-30T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/634833/school-chair-among-2020-institute-research-award-winners]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634592">  <title><![CDATA[Machine learning technique helps wearable devices get better at diagnosing sleep disorders and quality]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1587475386</created>  <gmt_created>2020-04-21 13:23:06</gmt_created>  <changed>1587475386</changed>  <gmt_changed>2020-04-21 13:23:06</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Machine learning technique helps wearable devices get better at diagnosing sleep disorders and quality]]></publication>  <article_dateline>2020-04-21T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-04-21T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-04-21T00:00:00-04:00</gmt_article_dateline>  
<article_url><![CDATA[https://aihub.org/2020/04/21/machine-learning-technique-helps-wearable-devices-get-better-at-diagnosing-sleep-disorders-and-quality/]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634486">  <title><![CDATA[Ph.D. Students Named 2020 Members of NSF Graduate Research Fellowship Program]]></title>  <uid>34773</uid>  <body><![CDATA[<p>A pair of&nbsp;<a href="http://ic.gatech.edu/">School of Interactive Computing</a>&nbsp;and <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a> students were selected as 2020 members of the <a href="https://www.nsfgrfp.org/">National Science Foundation Graduate Research Fellowship Program</a>&nbsp;(NSF GRFP).</p><p>First-year Ph.D. students&nbsp;<strong>Daniel Bolya</strong>&nbsp;and&nbsp;<strong>Joanne Truong</strong>&nbsp;were recognized by the program, which supports graduate students pursuing research-based Master&rsquo;s and doctoral degrees at United States institutions.</p><p>The NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.</p><p>Advised by assistant professor&nbsp;<strong>Judy Hoffman</strong>, Bolya&nbsp;works with machine learning and computer vision. Recent work at Georgia Tech has focused on error profiling in instance segmentation and object detection models. His method, building upon previous work at MIT, is unique in that it captures all possible sources of error in a model, while properly weighing the importance of each. 
He plans to continue the pursuit of faster methods of instance segmentation that he can make accessible. Current methods are not practical for many applications due to limits in speed, accuracy, and data efficiency. His research addresses this challenge.</p><p>&ldquo;This is not just about computer vision,&rdquo; he said in his research statement. &ldquo;Improving instance segmentation would impact the tech we use every day.&rdquo;</p><p>As with his work at MIT, called YOLACT, he plans to fully release the project as open source once it is ready.</p><p>Truong&rsquo;s long-term research goal is to develop robots that can see, talk, reason, and act in complex human environments. Specifically, she will focus on a method called &ldquo;sim2robot transfer,&rdquo; which develops efficient domain adaptation techniques to enable pre-training of AI agents in simulators.</p><p>&ldquo;The overall goals of my research plan are to, one, break down the possible errors in simulation-to-reality transfer that result in a reality gap, and, two, close the loop between simulation and reality by using data collected on a real robot to finetune and optimize parameters in simulation,&rdquo; said Truong, who is co-advised&nbsp;by associate professors&nbsp;<strong>Dhruv Batra</strong>&nbsp;and&nbsp;<strong>Sonia Chernova</strong>.</p><p>She worked on the first goal last fall, achieving optimization in simulator settings for sim2real predictivity. 
Currently, she is working on the second goal, developing domain adaptation techniques to enable low-shot adaptation between simulation and reality.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1587145360</created>  <gmt_created>2020-04-17 17:42:40</gmt_created>  <changed>1587145370</changed>  <gmt_changed>2020-04-17 17:42:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.]]></teaser>  <type>news</type>  <sentence><![CDATA[The NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-17T00:00:00-04:00</dateline>  <iso_dateline>2020-04-17T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-17 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>634467</item>      </media>  <hg_media>          <item>          <nid>634467</nid>          <type>image</type>          <title><![CDATA[Daniel Bolya and Joanne Truong]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Joanne and Daniel.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Joanne%20and%20Daniel.png]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Joanne%20and%20Daniel.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Joanne%2520and%2520Daniel.png?itok=VWxfQLjH]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Daniel Bolya and Joanne Truong]]></image_alt>                    <created>1587067147</created>          <gmt_created>2020-04-16 19:59:07</gmt_created>          <changed>1587067147</changed>          <gmt_changed>2020-04-16 19:59:07</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="634469">  <title><![CDATA[IC Ph.D. Students Named 2020 Members of NSF Graduate Research Fellowship Program]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A pair of <a href="http://ic.gatech.edu">School of Interactive Computing</a> students was selected as 2020 members of the <a href="https://www.nsfgrfp.org/">National Science Foundation Graduate Research Fellowship Program</a> (NSF GRFP).</p><p>First-year Ph.D. 
students <strong>Daniel Bolya</strong> (advised by <strong>Judy Hoffman</strong>) and <strong>Joanne Truong</strong> (advised by <strong>Dhruv Batra</strong> and <strong>Sonia Chernova</strong>) were recognized by the program, which supports graduate students pursuing research-based Master&rsquo;s and doctoral degrees at United States institutions.</p><p>The NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.</p><p>Bolya&rsquo;s work is in machine learning and computer vision. Recent work at Georgia Tech has focused on error profiling in instance segmentation and object detection models. His method, building upon previous work at MIT, is unique in that it captures all possible sources of error in a model, while properly weighing the importance of each. He plans to continue the pursuit of faster methods of instance segmentation that he can make accessible. Current methods are not practical for many applications due to limits in speed, accuracy, and data efficiency. His research addresses this challenge.</p><p>&ldquo;This is not just about computer vision,&rdquo; he said in his research statement. &ldquo;Improving instance segmentation would impact the tech we use every day.&rdquo;</p><p>As with his work at MIT, called YOLACT, he plans to fully release the project as open source once it is ready.</p><p>Truong&rsquo;s long-term research goal is to develop robots that can see, talk, reason, and act in complex human environments. 
Specifically, she will focus on a method called &ldquo;sim2robot transfer,&rdquo; which develops efficient domain adaptation techniques to enable pre-training of AI agents in simulators while ensuring that the learned skills generalize to a real robotic platform.</p><p>&ldquo;The overall goals of my research plan are to, one, break down the possible errors in simulation-to-reality transfer that result in a reality gap, and, two, close the loop between simulation and reality by using data collected on a real robot to finetune and optimize parameters in simulation,&rdquo; she said.</p><p>She worked on the first goal last fall, achieving optimization in simulator settings for sim2real predictivity. Currently, she is working on the second goal, developing domain adaptation techniques to enable low-shot adaptation between simulation and reality.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1587067366</created>  <gmt_created>2020-04-16 20:02:46</gmt_created>  <changed>1587067366</changed>  <gmt_changed>2020-04-16 20:02:46</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.]]></teaser>  <type>news</type>  <sentence><![CDATA[The NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-15T00:00:00-04:00</dateline>  <iso_dateline>2020-04-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  
<location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>634467</item>      </media>  <hg_media>          <item>          <nid>634467</nid>          <type>image</type>          <title><![CDATA[Daniel Bolya and Joanne Truong]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Joanne and Daniel.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Joanne%20and%20Daniel.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Joanne%20and%20Daniel.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Joanne%2520and%2520Daniel.png?itok=VWxfQLjH]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Daniel Bolya and Joanne Truong]]></image_alt>                    <created>1587067147</created>          <gmt_created>2020-04-16 19:59:07</gmt_created>          <changed>1587067147</changed>          <gmt_changed>2020-04-16 19:59:07</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181639"><![CDATA[cc-research; ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and 
Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="634460">  <title><![CDATA[Chao Zhang Wins Google Faculty Research Award]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1587064556</created>  <gmt_created>2020-04-16 19:15:56</gmt_created>  <changed>1587064556</changed>  <gmt_changed>2020-04-16 19:15:56</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Chao Zhang Wins Google Faculty Research Award]]></publication>  <article_dateline>2020-04-16T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-04-16T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-04-16T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://www.cc.gatech.edu/news/634457/chao-zhang-wins-google-faculty-research-award]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634438">  <title><![CDATA[Meet ML@GT: Lara J. 
Martin Trains AI Agents to Become Storytellers]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1587044801</created>  <gmt_created>2020-04-16 13:46:41</gmt_created>  <changed>1587044801</changed>  <gmt_changed>2020-04-16 13:46:41</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Meet ML@GT: Lara J. Martin Trains AI Agents to Become Storytellers]]></publication>  <article_dateline>2020-04-16T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-04-16T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-04-16T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2Ve23ob]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634296">  <title><![CDATA[Georgia Tech Researchers Presenting Work Virtually at Top AI Conference Due to COVID-19]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1586783990</created>  <gmt_created>2020-04-13 13:19:50</gmt_created>  <changed>1586783990</changed>  <gmt_changed>2020-04-13 13:19:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Georgia Tech Researchers Presenting Work Virtually at Top AI Conference Due to COVID-19]]></publication>  
<article_dateline>2020-04-13T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-04-13T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-04-13T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[http://bit.ly/ICLR2020]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634254">  <title><![CDATA[Georgia Tech and Intel Awarded Multimillion-Dollar Program to Defend Against Attacks on AI]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1586526191</created>  <gmt_created>2020-04-10 13:43:11</gmt_created>  <changed>1586526191</changed>  <gmt_changed>2020-04-10 13:43:11</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Georgia Tech and Intel Awarded Multimillion-Dollar Program to Defend Against Attacks on AI]]></publication>  <article_dateline>2020-04-10T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-04-10T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-04-10T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://b.gatech.edu/34sQzzO]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>     
 </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634130">  <title><![CDATA[Working Towards Explainable and Data-efficient Machine Learning Models via Symbolic Reasoning]]></title>  <uid>34773</uid>  <summary><![CDATA[]]></summary>  <body><![CDATA[]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1586264691</created>  <gmt_created>2020-04-07 13:04:51</gmt_created>  <changed>1586264691</changed>  <gmt_changed>2020-04-07 13:04:51</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[]]></teaser>  <type>hgTechInTheNews</type>  <publication><![CDATA[Tokyo Smart City studio]]></publication>  <article_dateline>2020-04-07T00:00:00-04:00</article_dateline>  <iso_article_dateline>2020-04-07T00:00:00-04:00</iso_article_dateline>  <gmt_article_dateline>2020-04-07T00:00:00-04:00</gmt_article_dateline>  <article_url><![CDATA[https://bit.ly/2JKxL5S]]></article_url>  <media>      </media>  <hg_media>      </hg_media>  <files>      </files>  <groups>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>      </groups>  <categories>      </categories>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>    <userdata><![CDATA[]]></userdata></node><node id="634055">  <title><![CDATA[Looking for Activities at Home? Try These Interactive Tools from IC Researchers]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The world is on lockdown right now, and we&rsquo;re all searching for new ways to occupy our time inside. 
With only so many times you can re-watch The Office (oh, who are we kidding &ndash; maybe just one more time through&hellip;), we thought it would be fun to share some of the interactive tools from our own researchers&rsquo; workshops.</p><p>Below, you&rsquo;ll find just a couple of the tools you can interact with online, giving you opportunities ranging from learning how to code to creating art. But this is only a start &ndash; we&rsquo;d love to hear from you.</p><p>If you&rsquo;re a Georgia Tech student or faculty member, submit your interactive tools to communications officer David Mitchell at <a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a>. We&rsquo;ll add to the list, share with our audience, and help everyone find some enjoyment during a difficult time.</p><p><strong>Create Your Own Generative Art Pieces &ndash; </strong>submitted by Devi Parikh</p><p>Looking for a new piece of art for your wall? With this tool, you can flex your creative muscles. Choose a style, adjust the values, colors, and properties, and generate a piece that would fit nicely in your home.</p><p>This work demonstrates a broader area of research into machine learning and creativity. The first piece of AI-generated art to go to auction sold for $432,500 in 2018.</p><p><strong>LINK: </strong><a href="https://cc.gatech.edu/~parikh/art.html">https://cc.gatech.edu/~parikh/art.html</a></p><p><strong>Interact with Visual Chatbot </strong>&ndash; submitted by Devi Parikh</p><p>Parikh&rsquo;s lab is doing research in an area called visual question answering. Developed in 2017, this demo allows you to upload an image and have a conversation with a chatbot about it. Pick out an image you&rsquo;ve taken or just grab one from the web and ask questions to see just how quickly and accurately this AI can perform the task. 
This research is key to developing agents that can reason about specific tasks in the real world.</p><p><strong>LINK: </strong><a href="http://demo-visualdialog.cloudcv.org/">http://demo-visualdialog.cloudcv.org/</a></p><p><strong>Learn to Code Using EarSketch and TunePad</strong> &ndash; submitted by Brian Magerko</p><p>Have you been dying to learn how to code? There&rsquo;s no time like the present. Without the benefit of a classroom setting to learn all the ins and outs, you may find a guided tool like EarSketch helpful. EarSketch uses music to guide the learner. With sounds from the EarSketch library or your own uploads, along with Python or JavaScript to code, you can produce quality music online.</p><p>Like EarSketch, TunePad &ndash; developed in collaboration with Northwestern University &ndash; is a tool for creating music using the Python programming language. No knowledge of music or coding is required to get started. Get those creative juices flowing, and start creating.</p><p><strong>LINK: </strong><a href="http://earsketch.gatech.edu/">earsketch.gatech.edu</a></p><p><strong>Learn About Grasping Tasks Using this Online Tool </strong>&ndash; submitted by Samarth Brahmbhatt</p><p>This tool allows people to interactively explore how we grasp household objects. So, why is this important? Grasping is a key capability in the development of household robotics. To train robots to grasp and use items around the house, we need to identify the most efficient approach. 
Explore this tool, which includes items from an apple to a doorknob to a video game controller.</p><p><strong>LINK: </strong><a href="https://contactdb.cc.gatech.edu/contactdb_explorer.html">https://contactdb.cc.gatech.edu/contactdb_explorer.html</a></p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1585958447</created>  <gmt_created>2020-04-04 00:00:47</gmt_created>  <changed>1585958447</changed>  <gmt_changed>2020-04-04 00:00:47</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[These are just a couple of the tools you can interact with online, giving you opportunities from learning how to code to creating art.]]></teaser>  <type>news</type>  <sentence><![CDATA[These are just a couple of the tools you can interact with online, giving you opportunities from learning how to code to creating art.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-03T00:00:00-04:00</dateline>  <iso_dateline>2020-04-03T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-03 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>444971</item>      </media>  <hg_media>          <item>          <nid>444971</nid>          <type>image</type>          <title><![CDATA[EarSketch]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[static1.squarespace.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/static1.squarespace_0.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/static1.squarespace_0.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/static1.squarespace_0.png?itok=i5HineHX]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[EarSketch]]></image_alt>                    <created>1449256205</created>          <gmt_created>2015-12-04 19:10:05</gmt_created>          <changed>1475895184</changed>          <gmt_changed>2016-10-08 02:53:04</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182940"><![CDATA[cc-research; ic-ai-ml; ic-robotics; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="633985">  <title><![CDATA[Pitch Perfect: GT Computing Undergrads Provide Automated Training Upgrade for Softball Team]]></title>  <uid>33939</uid>  <body><![CDATA[<p>There&rsquo;s a classic story that former Atlanta Braves pitching coach Leo Mazzone used to share about Hall-of-Famer Greg Maddux, one of the smartest hurlers of all time. 
Although the exact details have changed in retelling over time, it goes something like this:</p><p>Maddux, a meticulous documenter of pitch sequences and batter results throughout his career, once explained to Mazzone in between innings that the leadoff batter in the following frame would pop out to third base on the fourth pitch of the at-bat. He&rsquo;d start him with a fastball, change speeds for strike two, waste a pitch outside, and then induce the popup on a one-ball, two-strike count. Sure enough, a few minutes later, Maddux did exactly as he&rsquo;d said.</p><p>There are a couple of lessons here: One, Maddux was a wizard. Many pitchers over time have tried to replicate his impeccable approach to the game, but few have ever succeeded at that level; two, pitch sequence matters &ndash; perhaps more than how overpowering your fastball is or how sharp the break is on your curve.</p><p>Capitalizing on this intuition, a group of undergraduate students at <a href="http://gatech.edu">Georgia Tech</a> is working with the softball team to provide an automated upgrade to players&rsquo; training. Using the wealth of statistics kept by the team &ndash; pitch-by-pitch data for balls, strikes, types of pitches thrown, and results &ndash; they have trained an algorithm that can select the best pitch to throw in any given situation.</p><p>The tool is used by the coaches and pitchers for game planning purposes, generating daily reports after every game and practice to help inform coaches of trends in sequences and results.</p><p>&ldquo;In baseball and softball nowadays, data analytics has become such an incredibly important part of the game,&rdquo; said <strong>Jack Bennett</strong>, a third-year <a href="http://isye.gatech.edu">Industrial Engineering</a> student. &ldquo;Anything that can get them data to go into games more prepared. Technology is at the forefront of this.&rdquo;</p><p>They began using the approach during the 2019 season. 
Bennett and partners <strong>Zach Panzarino</strong> (third-year <a href="http://cc.gatech.edu">Computer Science</a>) and Ron Kushkuley (third-year <a href="http://coe.gatech.edu">Computer Engineering</a>) had demonstrated a similar capability at last year&rsquo;s Sports Innovation Hackathon using data for Atlanta Braves pitcher Mike Foltynewicz, finishing in third place. <strong>Doug Allvine</strong>, assistant athletics director for innovation at Georgia Tech, put the team in touch with softball coach <strong>Aileen Morales</strong>. Morales was interested, and the students were able to begin testing the approach.</p><p>It works like this: The softball team keeps track of its own data &ndash; not just player statistics, but pitch selections and results for every pitcher in every game throughout the season. That&rsquo;s a lot of data and can offer a lot of information. What happened when Pitcher X threw a 3-2 changeup to a left-handed batter?</p><p>But it goes a little deeper than that. Panzarino, Bennett, and Kushkuley found that the pitch sequence is what matters most. That follows the standard strategic thinking &ndash; a slider away can be more effective if set up by an inside fastball on the previous pitch, for example. What the algorithm does, however, is consider the order in which each pitch is thrown in the at-bat and provide a score for which pitch will be most effective based on past data.</p><p>&ldquo;We leverage sequences, the count, outs, everything,&rdquo; Panzarino said. &ldquo;Looking at the current state and the previous pitches, it will score all the potential future routes a pitcher can choose. We give them reports before each game so that they can prepare, and then we look at success or failure after the game.&rdquo;</p><p>After a test run a year ago, the students have honed the technology and are working with the team again this year. 
Qualitatively speaking, they said they noticed results throughout the year.</p><p>&ldquo;When we first gave them our analysis, it would recommend certain stuff in certain situations,&rdquo; Bennett said. &ldquo;Maybe it would say a changeup should be thrown more in this situation. Then, when we&rsquo;d get postgame data later, we&rsquo;d see that more changeups were being thrown and were continuing to be effective.&rdquo;</p><p>&ldquo;When I first saw what they were developing, I was beyond impressed,&rdquo;&nbsp;Morales said. &ldquo;We are very meticulous with collecting data in our program and trying to find ways to learn more about what is and what is not working for our athletes. It&rsquo;s remarkable to see how they can take the data we had and leverage it in a way that allowed us to fine-tune our training.&rdquo;</p><p>Recently, at the 2020 Sports Innovation Hackathon, the group developed a similar solution for baseball. They finished as runners-up in the competition and hope to connect further with the Georgia Tech baseball team in the future.</p><p>&ldquo;Tons of theory has been written on how pitchers should approach sequencing in games, but this is a model that can show you the data about how well that works,&rdquo; Panzarino said.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1585763841</created>  <gmt_created>2020-04-01 17:57:21</gmt_created>  <changed>1585763841</changed>  <gmt_changed>2020-04-01 17:57:21</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A group of undergraduate students at Georgia Tech is working with the softball team to provide an automated upgrade to players’ training.]]></teaser>  <type>news</type>  <sentence><![CDATA[A group of undergraduate students at Georgia Tech is working with the softball team to provide an automated upgrade to players’ training.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-01T00:00:00-04:00</dateline>  
<iso_dateline>2020-04-01T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>520851</item>      </media>  <hg_media>          <item>          <nid>520851</nid>          <type>image</type>          <title><![CDATA[Softball]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[softball.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/softball_0.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/softball_0.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/softball_0.png?itok=TVooho3o]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Softball]]></image_alt>                    <created>1459789200</created>          <gmt_created>2016-04-04 17:00:00</gmt_created>          <changed>1475895289</changed>          <gmt_changed>2016-10-08 02:54:49</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181639"><![CDATA[cc-research; 
ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node></nodes>