<nodes> <node id="649636">  <title><![CDATA[Associate Professor Elected SIGCHI President]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing joint Associate Professor <strong>Neha Kumar</strong> was elected president of the <a href="https://sigchi.org/">Special Interest Group on Computer-Human Interaction</a> (SIGCHI) for 2021-22. She will serve a three-year term for the group, which is the premier international society for professionals and academics interested in human-computer interaction.</p><p>SIGCHI sponsors numerous conferences, publications, websites, and other services that advance HCI through workshops and outreach. <a href="https://medium.com/sigchi/thank-you-sigchi-dae601d883bb">In a blog post for SIGCHI</a>, Kumar said that she and the other incoming executive committee members aim to continue the long history of advancing the group&rsquo;s key missions.</p><p>&ldquo;We hope to continue to expand the excellent work that our many colleagues in this (executive committee) have done, with their commitment (among other things) to accessibility, equity and inclusion, to the safety of our community, global community building, and a #SIGCHI4ALL,&rdquo; she wrote. &ldquo;Together the six of us represent a wide range of perspectives; our hope is that this representation will ensure that we remain answerable to our entire global membership as we work towards supporting and fostering participation and growth locally and globally.&rdquo;</p><p>Kumar&rsquo;s research at Georgia Tech lies at the intersection of human-centered computing and global development. She has produced research that improves technology design for historically underserved communities.
Her <a href="http://www.tandem.gatech.edu/">TanDEm Lab</a> &ndash; short for Technology and Design towards &lsquo;Empowerment&rsquo; &ndash; has focused on health and wellbeing on the margins, centering topics such as gender, stigma, and knowledge production.</p><p>Kumar has received other honors, such as the National Science Foundation&rsquo;s CAREER Award, and also chairs the <a href="https://www.acm.org/fca#:~:text=The%20ACM%20Future%20of%20Computing,next%20generation%20of%20computing%20professionals.&amp;text=The%20ACM%20FCA%20aspires%20to,of%20computing%20into%20the%20future.">Association for Computing Machinery&rsquo;s Future of Computing Academy</a>.</p><p>Georgia Tech Ph.D. graduate <strong>Tamara Clegg</strong> is also on the SIGCHI executive committee, serving as the vice president of membership and communication.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1628787031</created>  <gmt_created>2021-08-12 16:50:31</gmt_created>  <changed>1628787031</changed>  <gmt_changed>2021-08-12 16:50:31</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Neha Kumar will serve a three-year term for the group, which is the premier international society for professionals and academics interested in human-computer interaction.]]></teaser>  <type>news</type>  <sentence><![CDATA[Neha Kumar will serve a three-year term for the group, which is the premier international society for professionals and academics interested in human-computer interaction.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-08-12T00:00:00-04:00</dateline>  <iso_dateline>2021-08-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-08-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a
href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>507851</item>      </media>  <hg_media>          <item>          <nid>507851</nid>          <type>image</type>          <title><![CDATA[Neha Kumar]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[neha.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/neha_0.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/neha_0.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/neha_0.jpeg?itok=7IV4SSE7]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Neha Kumar]]></image_alt>                    <created>1457114400</created>          <gmt_created>2016-03-04 18:00:00</gmt_created>          <changed>1475895270</changed>          <gmt_changed>2016-10-08 02:54:30</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="649635">  <title><![CDATA[Assistant Professor Named 2021 Microsoft Research Faculty Fellow]]></title> 
 <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Assistant Professor <strong>Diyi Yang</strong> was named one of five <a href="https://www.microsoft.com/en-us/research/academic-program/faculty-fellowship/#!fellows">2021 Microsoft Research Faculty Fellows</a> earlier this summer. The two-year fellowship recognizes innovative and promising early-career professors in the Americas who are exploring breakthrough research in computer science or a related field.</p><p>Yang was recognized for her work leading the <a href="https://www.cc.gatech.edu/~dyang888/group.html">Social and Language Technologies Lab</a>, concentrating on research across fields of natural language processing, machine learning, and computational social science. Yang&rsquo;s research works to understand social aspects of language and build responsible NLP systems with social intelligence.</p><p>&ldquo;We live in an era where many aspects of our daily activities are recorded as textual data,&rdquo; Yang said in her proposal to Microsoft Research. &ldquo;Over the last few decades, NLP has dramatically improved performance and produced industrial applications like personal assistants. Despite being sufficient to enable these applications, current NLP systems largely ignore the social part of language.&rdquo;</p><p>Ignoring this social dimension limits what such systems can do, Yang said. Her research examines what is said, who says it, in what context, and for what goals, in hopes of developing systems to facilitate human-human and human-machine communication. So far, her team has produced projects on mitigating bias in text, detecting mental health issues, improving support in online support groups, and more.</p><p>According to Microsoft Research&rsquo;s website, Yang is the first Georgia Tech faculty member to be named a Microsoft Research Faculty Fellow since 2011 and only the third overall.
Yang has earned a number of other awards and recognitions, such as Forbes 30 Under 30 in Science and IEEE AI&rsquo;s 10 to Watch.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1628786655</created>  <gmt_created>2021-08-12 16:44:15</gmt_created>  <changed>1628786655</changed>  <gmt_changed>2021-08-12 16:44:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The two-year fellowship recognizes innovative and promising early-career professors in the Americas who are exploring breakthrough research in computer science or a related field.]]></teaser>  <type>news</type>  <sentence><![CDATA[The two-year fellowship recognizes innovative and promising early-career professors in the Americas who are exploring breakthrough research in computer science or a related field.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-08-12T00:00:00-04:00</dateline>  <iso_dateline>2021-08-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-08-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>630588</item>      </media>  <hg_media>          <item>          <nid>630588</nid>          <type>image</type>          <title><![CDATA[Diyi Yang 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Diyi_Yang.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Diyi_Yang.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Diyi_Yang.jpg]]></image_full_path>
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Diyi_Yang.jpg?itok=6jzP0Yjh]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1578338255</created>          <gmt_created>2020-01-06 19:17:35</gmt_created>          <changed>1578338255</changed>          <gmt_changed>2020-01-06 19:17:35</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="649137">  <title><![CDATA[Georgia Tech Will Help Bring Critical Advancements to Online Learning as Part of Multimillion Dollar NSF Grant]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech is a major partner in a new <a href="https://www.nsf.gov/">National Science Foundation</a> (NSF) <a href="https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=505686">Artificial Intelligence Research Institute</a> focused on adult learning in online education, it was announced today. 
Led by the Georgia Research Alliance, the National AI Institute for Adult Learning in Online Education (ALOE) is one of 11 new NSF institutes created as part of an investment totaling $220 million.</p><p>The ALOE Institute will develop new AI theories and techniques for enhancing the quality of online education for lifelong learning and workforce development. According to some projections, about 100 million American workers will need to be reskilled or upskilled over the next decade. With the increase of AI and automation, said Co-Principal Investigator and Georgia Tech lead Professor <strong>Ashok Goel</strong>, many jobs will be redefined.</p><p>&ldquo;There will be some loss of jobs, but mostly we will see individuals needing to learn a new skill to get a new job or to advance their career,&rdquo; said Goel, a professor of computer science and human-centered computing in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu">School of Interactive Computing</a> (IC) and the chief scientist with the <a href="https://c21u.gatech.edu/">Center for 21<sup>st</sup> Century Universities</a> (C21U). &ldquo;So, how do you help 100 million workers reskill or upskill in 10 years? Because AI is in part responsible for this need, it is our belief it should also be responsible for finding a solution.&rdquo;</p><p>That is the goal of this project, which will be led by principal investigator <strong>Myk Garn</strong>, assistant vice chancellor for New Models of Learning at the University System of Georgia and senior advisor to the <a href="https://gra.org/">Georgia Research Alliance</a> (GRA).</p><p>&ldquo;Online education for adults has enormous implications for tomorrow&rsquo;s workforce,&rdquo; Garn said. &ldquo;Yet, serious questions remain about the quality of online learning and how best to teach adults online. 
Artificial intelligence offers a powerful technology for dramatically improving the quality of online learning and adult education.&rdquo;</p><p>To do that successfully, the education must be personalized and scaled to unprecedented levels. Educating 100 million people in online environments will, of course, require far more time and energy than in-person educators can offer their students. That is where AI comes into play.</p><p>Researchers will build new AI techniques that can adequately and efficiently train <em>other</em> AI agents to interact with humans in a classroom setting, similar to the virtual teaching assistant Jill Watson that Goel has used in his online computer science classes for the past five years. This will help satisfy the scalability requirement.</p><p>&ldquo;That&rsquo;s the fundamental advancement in AI,&rdquo; Goel said. &ldquo;A human can train an AI agent in just a few hours how to teach other AI agents on how to interact with humans on various subjects.&rdquo;</p><p>To satisfy the need for personalized AI, researchers will train machines to have a mutual theory of mind with their human counterparts. In other words, machine and human will have a greater understanding of each other&rsquo;s needs, knowledge, and expectations.</p><p>&ldquo;Our vision is to develop AI agents that achieve a mutual understanding of learning expectations, outcomes, and methods between students and teachers,&rdquo; said Alex Endert, an assistant professor in Georgia Tech&rsquo;s <a href="http://cc.gatech.edu">College of Computing</a> who will help the team analyze and understand data from the project. &ldquo;Along with my students, I look forward to developing visual analytic interfaces that serve that purpose to foster trust and interpretability of AI for this domain.&rdquo;</p><p>Ultimately, the hope is that education becomes more available, affordable, achievable, and, thereby, equitable.
Such an expansive project, understandably, requires many kinds of expertise from many people. In addition to Endert and Goel, who will be executive director of the ALOE Institute, a host of Georgia Tech faculty will participate.</p><p>Senior Georgia Tech members of the ALOE team include <strong>Stephen Harmon</strong> (Industrial Design and C21U), <strong>Michael Hoffmann</strong> (Public Policy), <strong>David Joyner</strong> (Online Master of Science in Computer Science), <strong>Ruth Kanfer</strong> (Psychology), <strong>Brian Magerko</strong> (Language, Media, and Culture), <strong>Keith McGreggor</strong> (IC and VentureLab), <strong>Chaohua Ou</strong> (Center for Teaching and Learning), and <strong>Spencer Rugaber</strong> (Computer Science).</p><p>Other partners in the ALOE Institute include Arizona State University, Drexel University, Georgia State University, Harvard University, the Technical College System of Georgia, the University of North Carolina at Greensboro, IMS Global, Boeing, IBM, and Wiley.</p><p><a href="https://research.gatech.edu/georgia-tech-joins-us-national-science-foundation-advance-ai-research-and-education">Georgia Tech is a key partner in two additional institutes</a> in partnership with the U.S. Department of Agriculture, the National Institute of Food and Agriculture, the U.S. Department of Homeland Security Science &amp; Technology Directorate, and the U.S. Department of Transportation Federal Highway Administration.
Georgia Tech will lead the AI Institute for Advances in Optimization (AI4Opt) and the AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), the latter of which is led by College of Computing Associate Professor Sonia Chernova to support aging-related issues.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1627572498</created>  <gmt_created>2021-07-29 15:28:18</gmt_created>  <changed>1627572498</changed>  <gmt_changed>2021-07-29 15:28:18</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Led by the Georgia Research Alliance, the National AI Institute for Adult Learning in Online Education (ALOE) is one of 11 new NSF institutes created as part of an investment totaling $220 million.]]></teaser>  <type>news</type>  <sentence><![CDATA[Led by the Georgia Research Alliance, the National AI Institute for Adult Learning in Online Education (ALOE) is one of 11 new NSF institutes created as part of an investment totaling $220 million.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-07-29T00:00:00-04:00</dateline>  <iso_dateline>2021-07-29T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-07-29 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>611004</item>      </media>  <hg_media>          <item>          <nid>611004</nid>          <type>image</type>          <title><![CDATA[Online learning stock]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[online learning.jpg]]></image_name>
<image_path><![CDATA[/sites/default/files/images/online%20learning.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/online%20learning.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/online%2520learning.jpg?itok=iJjWc-lh]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Fingers typing on a laptop keyboard]]></image_alt>                    <created>1536259875</created>          <gmt_created>2018-09-06 18:51:15</gmt_created>          <changed>1536259875</changed>          <gmt_changed>2018-09-06 18:51:15</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://research.gatech.edu/georgia-tech-joins-us-national-science-foundation-advance-ai-research-and-education]]></url>        <title><![CDATA[Georgia Tech Joins the U.S. National Science Foundation to Advance AI Research and Education]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="649087">  <title><![CDATA[New Browser-Based Chart Builder Gives Line Graphs, Scatterplots Their Very Own Audio Track]]></title>  <uid>32045</uid>  <body><![CDATA[<p>A new multimodal data visualization tool for the 
web produces charts with a twist &ndash; these charts also represent information using carefully designed sounds for a richer, more powerful, and accessible way to experience data.</p><p>Released by the Georgia Institute of Technology and open-source web application Highcharts, <a href="https://sonification.highcharts.com/#/">Highcharts Sonification Studio (HSS)</a>&nbsp;enables users to enter data into a spreadsheet to create traditional visual charts such as line graphs, scatterplots, and bar charts. At the same time, the tool creates non-speech audio tracks based on the data, a process known as sonification.</p><p>&ldquo;The goal of this tool is to provide a simple, intuitive, and accessible way for users to import, edit, visualize, and sonify their data, and then export the results to a useful format,&rdquo; said Professor <strong>Bruce Walker</strong>, director of <a href="http://sonify.psych.gatech.edu/">Georgia Tech&rsquo;s Sonification Lab</a>. &ldquo;We want users to be able to use the tool without having to download software or write code, and without prior sonification expertise.&rdquo;</p><p>The data visualization+sonification approach lets users explore data with visual, auditory, or both modalities. 
This can lead to novel discoveries in its own right, and can also support users who may have limited ability to see or hear a given display.&nbsp;</p><p>&ldquo;Visually impaired readers find sonification and auditory graphs to be very useful for getting an overview of the data, as well as identifying patterns, outliers, and points of interest,&rdquo; said Walker.</p><p><strong>Brandon Biggs</strong>, a researcher&nbsp;and entrepreneur who is blind, highlighted the software&rsquo;s ability to allow users such as himself to create a graph that he can trust will be visually appealing.</p><p>&ldquo;I love how accessible all the components are with a screen-reader and how easy it is to create a sonification,&rdquo; Biggs said.</p><p>And for all users&mdash;even those who can see&mdash;sound can communicate information without requiring visual attention. For instance, instead of looking at a weather forecast or a chart of a stock price on a screen, imagine being able to hear the ups and downs played like a melody, with additional sounds highlighting points of interest in the data.</p><p>HSS is the culmination of a multi-year collaboration between Highsoft&mdash;the makers of Highcharts&mdash;and the Georgia Tech Sonification Lab. The goal of the collaboration is to develop an extensible, accessible, online spreadsheet and multimodal graphing platform for the auditory display, assistive technology, and STEM education community.</p><p>Walker said that HSS is a systematic re-implementation of his lab&rsquo;s Sonification Sandbox to integrate Highsoft&rsquo;s industry-leading web-based Highcharts technology with Georgia Tech&rsquo;s expertise in sonification and interactive auditory displays.</p><p>The tool is open-sourced under the MIT License to allow for extensions and forks in development from the community&nbsp;and to ensure the tool is available to all. 
A Highcharts license is required for commercial use of the tool, but otherwise, usage is completely free.</p><p>&ldquo;This system will complement other tools and libraries actively used by the auditory display research community and help bring sonification to an even wider audience, especially in the visualization community and in situations of limited resources,&rdquo; said <strong>&Oslash;ystein Moseng</strong>, the Highcharts developer leading the implementation of the HSS.</p><p>A paper describing the research and development of the open-source tool is part of the 26<sup>th</sup> annual International Conference on Auditory Displays (ICAD.org), which took place June 25-28, 2021. The paper <em>Highcharts Sonification Studio: An Online, Open-Source, Extensible, And Accessible Data Sonification Tool</em> is co-authored by Stanley Cantrell, Walker, and Moseng.</p><p>The Highcharts Sonification Studio web app, source code, and developer community are available at <a href="https://sonification.highcharts.com">https://sonification.highcharts.com</a>.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1627418690</created>  <gmt_created>2021-07-27 20:44:50</gmt_created>  <changed>1627485625</changed>  <gmt_changed>2021-07-28 15:20:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers have created a data visualization plus sonification approach that lets users explore data with visual, auditory, or both modalities.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers have created a data visualization plus sonification approach that lets users explore data with visual, auditory, or both modalities.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-07-27T00:00:00-04:00</dateline>  <iso_dateline>2021-07-27T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-07-27 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>
<email><![CDATA[Jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Josh Preston, Research Communications Mgr.<br /><a href="mailto:Jpreston@cc.gatech.edu?subject=Sonification">Jpreston@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>649088</item>      </media>  <hg_media>          <item>          <nid>649088</nid>          <type>image</type>          <title><![CDATA[Data vis sonification tool]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[sonify-2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/sonify-2.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/sonify-2.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/sonify-2.jpg?itok=Bzr1J6Rz]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A user working with accessible browser-based Highcharts Sonification Studio software.]]></image_alt>                    <created>1627422780</created>          <gmt_created>2021-07-27 21:53:00</gmt_created>          <changed>1627498800</changed>          <gmt_changed>2021-07-28 19:00:00</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://youtu.be/VdKcyGXLyvg]]></url>        <title><![CDATA[Hearing the Data]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="135"><![CDATA[Research]]></category>      </categories>  <news_terms>          <term tid="135"><![CDATA[Research]]></term>     
 </news_terms>  <keywords>          <keyword tid="170772"><![CDATA[Sonification]]></keyword>          <keyword tid="438"><![CDATA[data]]></keyword>          <keyword tid="7257"><![CDATA[visualization]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="648905">  <title><![CDATA[Georgia Tech Top Contributor to Research at International Conference on Machine Learning]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech researchers in the College of Engineering and College of Computing are presenting their work at the International Conference on Machine Learning (ICML), which runs through Saturday.</p><p>ICML is the leading international academic conference in machine learning. Along with NeurIPS and ICLR, it is one of the three primary conferences of high impact in machine learning and artificial intelligence research. It is supported by the International Machine Learning Society (IMLS).</p><p>Explore Georgia Tech people, research abstracts, and when authors will present (Tues-Thurs) in an interactive data graphic of <a href="https://public.tableau.com/views/GeorgiaTechatICML2021/Dashboard1?:language=en-US&amp;:display_count=n&amp;:origin=viz_share_link"><strong>Georgia Tech at ICML 2021</strong></a>.
Also explore the whole program in a second data graphic: <a href="https://public.tableau.com/views/ICML2021/Dashboard12?:showVizHome=no"><strong>Who&rsquo;s Who at ICML 2021</strong></a>.</p><p>Georgia Tech&rsquo;s work is represented in 2% of the program with 22 papers in a range of topics including (asterisk denotes a single paper):</p><ul><li>Applications (CV and NLP)*</li><li>Applications (NLP)*</li><li>Deep Learning Algorithms*</li><li>Deep Learning Theory*</li><li>Deep Reinforcement Learning*</li><li>Learning Theory &ndash; 2 papers</li><li>Optimal Transport &ndash; 2 papers</li><li>Optimization (Convex)*</li><li>Optimization and Algorithms &ndash; 2 papers</li><li>Privacy*</li><li>Reinforcement Learning &ndash; 2 papers</li><li>Reinforcement Learning and Optimization*</li><li>Reinforcement Learning and Planning*</li><li>Reinforcement Learning Theory*</li><li>Time Series &ndash; 4 papers</li></ul>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1626787202</created>  <gmt_created>2021-07-20 13:20:02</gmt_created>  <changed>1626843640</changed>  <gmt_changed>2021-07-21 05:00:40</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers in the College of Engineering and College of Computing are presenting their work at the International Conference on Machine Learning (ICML), which runs through Saturday.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers in the College of Engineering and College of Computing are presenting their work at the International Conference on Machine Learning (ICML), which runs through Saturday.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-07-20T00:00:00-04:00</dateline>  <iso_dateline>2021-07-20T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-07-20 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Josh
Preston</p><p><a href="mailto:jpreston@cc.gatech.edu">jpreston@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>648904</item>      </media>  <hg_media>          <item>          <nid>648904</nid>          <type>image</type>          <title><![CDATA[ICML 2021]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ICML2021.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ICML2021.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ICML2021.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ICML2021.jpeg?itok=yYBmKmCm]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1626787175</created>          <gmt_created>2021-07-20 13:19:35</gmt_created>          <changed>1626787175</changed>          <gmt_changed>2021-07-20 13:19:35</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="648864">  <title><![CDATA[Georgia Tech Faculty Hold Workshop to Improve Integration of Ethics 
into Courses]]></title>  <uid>33939</uid>  <body><![CDATA[<p>As computer science becomes more ingrained in various areas of study and, indeed, our daily lives, an eye on the implications of innovation is needed, experts at Georgia Tech say.</p><p>To help students begin thinking about ethics with regard to research, faculty at Georgia Tech &ndash; in conjunction with Mozilla &ndash; held the first workshop on integrating ethics and responsible computing into courses this summer.</p><p>The workshop was a collaboration between faculty researchers at Georgia Tech in both the Ethics, Technology, and Human Interaction Center (ETHICx) and Computing and Society, as well as Mozilla. The workshop received a strong response, which organizers say indicates a growing desire to put ethics at the center of computer science courses.</p><p>Members of the College of Computing&rsquo;s Division of Computing Instruction, the Schools of Interactive Computing, Computational Science and Engineering, Computer Science, and Electrical and Computer Engineering, along with attendees from Georgia State, all participated in the online workshop.</p><p>&ldquo;It&rsquo;s really gratifying to have broad representation because it demonstrates the desire for people from so many different areas to think more deeply about the role of ethics in our education,&rdquo; said <strong>Ellen Zegura</strong>, professor in the School of Computer Science and Fleming Chair in Telecommunications.</p><p>The goal of the workshop was to help instructors consider ways in which to implement ethics as a central piece in courses not just later in a student&rsquo;s study, but from the very beginning. There&rsquo;s an issue of urgency, Zegura said, that needs to be considered.</p><p>&ldquo;Computing has reached a point where it is being used for critical decision making that really affects people&rsquo;s lives,&rdquo; she said. &ldquo;The need to use computing responsibly has moved up incredibly. 
And if we don&rsquo;t talk about ethics early in the curriculum, we&rsquo;re sending a message that it&rsquo;s not important. If you only hear about it in one course and it&rsquo;s later in your career, then what does that say about the importance? Students see that.&rdquo;</p><p>While official plans aren&rsquo;t currently in place to continue the program, Zegura said the idea is to continue it as a series of activities responsive to the needs of those who want to do a better job of embedding ethics into their computer science curriculum.</p><p>Georgia Tech graduate <strong>Kathy Pham (CS &rsquo;07, MS CS &rsquo;09)</strong>, now at Mozilla, has been instrumental in engaging the computer science community across 15-20 universities in focusing on ethics, Zegura said.</p><p><a href="https://www.youtube.com/playlist?list=PLF0CYxpffvKx5W-y_xJ9xhrGapmeF70Og">Portions of the workshop can be viewed on YouTube here.</a></p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1626700580</created>  <gmt_created>2021-07-19 13:16:20</gmt_created>  <changed>1626700580</changed>  <gmt_changed>2021-07-19 13:16:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[To help students begin thinking about ethics with regard to research, faculty at Georgia Tech – in conjunction with Mozilla – held the first workshop on integrating ethics and responsible computing into courses this summer.]]></teaser>  <type>news</type>  <sentence><![CDATA[To help students begin thinking about ethics with regard to research, faculty at Georgia Tech – in conjunction with Mozilla – held the first workshop on integrating ethics and responsible computing into courses this summer.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-07-19T00:00:00-04:00</dateline>  <iso_dateline>2021-07-19T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-07-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  
<sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>644759</item>      </media>  <hg_media>          <item>          <nid>644759</nid>          <type>image</type>          <title><![CDATA[Ethics stock image]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[AdobeStock_117212757.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/AdobeStock_117212757.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/AdobeStock_117212757.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/AdobeStock_117212757.jpeg?itok=N567OjVZ]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1614365518</created>          <gmt_created>2021-02-26 18:51:58</gmt_created>          <changed>1614365518</changed>          <gmt_changed>2021-02-26 18:51:58</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms> 
 <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="645832">  <title><![CDATA[Assistant Professor Earns 2020 Salesforce AI Research Grant]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Assistant Professor <strong>Diyi Yang</strong> was named a <a href="https://blog.einstein.ai/celebrating-the-winners-of-the-third-annual-salesforce-ai-research-grant/">Salesforce AI Research Grant Winner for 2020</a>. One of seven winners of the award, she will receive a $50,000 grant to advance her work. It is the third year the grant has been provided by Salesforce.</p><p>Yang&rsquo;s research, which is being led by her Ph.D. student <strong>Jiaao Chen</strong>, aims to alleviate dependence of supervised models on labeled data via data augmentation approaches. Supervised learning is the machine learning task of learning a function that maps an input to an output based on example pairs, inferring the function from training data that has been tagged with identifying properties or characteristics (labeled data).</p><p>The hope is that they may improve upon the ability to transfer models from one setting to another despite the relative lack of intensive training examples.</p><p>&ldquo;In the era of deep learning, natural language processing (NLP) has achieved extremely good performances in most data-intensive settings,&rdquo; Yang said. &ldquo;However, when there are only one or a few training examples, supervised deep learning models often fail. 
This strong dependence on labeled data largely prevents neural network models from being applied to new settings or real-world situations.&rdquo;</p><p>Yang&rsquo;s group has already published a couple of papers in this field, and she said the Salesforce grant will further support efforts to extend the work to broader contexts, especially when NLP tasks involve complicated outputs.</p><p>&ldquo;These examples might include performing named entity recognition that finds the important information in a text, or semantic parsing that converts a natural language sentence into a structured command,&rdquo; she said.</p><p>You can read previous papers on the subject at the links below:</p><ul><li><a href="https://www.cc.gatech.edu/~dyang888/docs/mixtext_acl_2020.pdf"><em>MixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification (Jiaao Chen, Zichao Yang, Diyi Yang)</em></a></li><li><a href="https://arxiv.org/pdf/2010.01677.pdf"><em>Local Additivity Based Data Augmentation for Semi-supervised NER (Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, Diyi Yang)</em></a></li></ul><p>Yang&rsquo;s proposal was chosen from more than 180 submissions from over 30 countries.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1617028943</created>  <gmt_created>2021-03-29 14:42:23</gmt_created>  <changed>1617028943</changed>  <gmt_changed>2021-03-29 14:42:23</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Yang’s research, which is being led by her Ph.D. student Jiaao Chen, aims to alleviate dependence of supervised models on labeled data via data augmentation approaches.]]></teaser>  <type>news</type>  <sentence><![CDATA[Yang’s research, which is being led by her Ph.D. 
student Jiaao Chen, aims to alleviate dependence of supervised models on labeled data via data augmentation approaches.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-03-29T00:00:00-04:00</dateline>  <iso_dateline>2021-03-29T00:00:00-04:00</iso_dateline>  <gmt_dateline>2021-03-29 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>630588</item>      </media>  <hg_media>          <item>          <nid>630588</nid>          <type>image</type>          <title><![CDATA[Diyi Yang 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Diyi_Yang.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Diyi_Yang.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Diyi_Yang.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Diyi_Yang.jpg?itok=6jzP0Yjh]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1578338255</created>          <gmt_created>2020-01-06 19:17:35</gmt_created>          <changed>1578338255</changed>          <gmt_changed>2020-01-06 19:17:35</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group 
id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="644380">  <title><![CDATA[Ph.D. Student Earns 2021 Focus Fellowship from Georgia Tech's Office of Minority Educational Development]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing (IC) Ph.D. student <strong>Kantwon Rogers</strong> was awarded a 2021 Focus Fellowship by Georgia Tech&rsquo;s <a href="https://omed.gatech.edu/">Office of Minority Educational Development</a> (OMED).</p><p>The award recognizes participants in the <a href="https://focus.gatech.edu/">Focus Program</a> who have demonstrated academic excellence and community leadership and have been granted admittance to a graduate program. The Focus Program aims to introduce minority students to graduate school in hopes of increasing the number who pursue higher degrees.</p><p>Rogers attended the Focus Program five years ago as an undergraduate student at Georgia Tech.</p><p>&ldquo;It helped me learn about grad school and set me up for success,&rdquo; Rogers said of the program.</p><p>The award, which carries a prize of up to $2,500 per student based on available funds and the number of awardees, is not based on specific research but recognizes overall accomplishments. In an application essay, Rogers shared how OMED was pivotal to his success at Georgia Tech.</p><p>As an undergraduate, he participated in the <a href="https://omed.gatech.edu/programs/challenge">Challenge Program</a>, a five-week academic residential program for incoming first-year students. 
Later, he became a counselor in the same program, an OMED tutor, a Focus participant, a Focus panelist, and last summer a computer science (CS) instructor in the Challenge program.</p><p>&ldquo;It was really spooky because I was teaching the new Challenge students in the exact same room that I sat in when I was learning CS for the first time in Challenge a decade ago,&rdquo; Rogers said. &ldquo;Truly full circle. OMED has truly been a foundation for me here at Georgia Tech, and I am eternally grateful.&rdquo;</p><p>Rogers&rsquo; research focuses on human-robot interaction, investigating the effects that verbal deception by intelligent agents has on human interaction.</p><p>&ldquo;Animals deceive. Humans deceive. Should robots and AI deceive?&rdquo; Rogers poses in his research tagline.</p><p>Additionally, the work aims to provide AI systems with the ability to autonomously produce contextually meaningful and successfully deceptive utterances while determining when it is appropriate to verbally deceive humans.</p><p>He is advised by IC Chair <strong>Ayanna Howard</strong>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1613581028</created>  <gmt_created>2021-02-17 16:57:08</gmt_created>  <changed>1613581728</changed>  <gmt_changed>2021-02-17 17:08:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The award recognizes participants in the Focus Program who have demonstrated academic excellence and community leadership and have been granted admittance to a graduate program.]]></teaser>  <type>news</type>  <sentence><![CDATA[The award recognizes participants in the Focus Program who have demonstrated academic excellence and community leadership and have been granted admittance to a graduate program.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-02-17T00:00:00-05:00</dateline>  <iso_dateline>2021-02-17T00:00:00-05:00</iso_dateline>  <gmt_dateline>2021-02-17 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle> 
 <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>585962</item>      </media>  <hg_media>          <item>          <nid>585962</nid>          <type>image</type>          <title><![CDATA[Kantwon Rogers 2]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[_MG_4285.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/_MG_4285.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/_MG_4285.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/_MG_4285.jpg?itok=rz5dBUez]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1484253211</created>          <gmt_created>2017-01-12 20:33:31</gmt_created>          <changed>1484253211</changed>          <gmt_changed>2017-01-12 20:33:31</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  
<related></related>  <userdata><![CDATA[]]></userdata></node><node id="643612">  <title><![CDATA[Georgia Tech Research Highlights Premier Artificial Intelligence Conference]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech faculty and student researchers will figure prominently in the proceedings of the <a href="https://aaai.org/Conferences/AAAI-21/">35<sup>th</sup> AAAI Conference on Artificial Intelligence</a>, being held virtually from Feb. 2-9.</p><p>Twenty-three members of the Georgia Tech community contributed to 11 papers that will be presented at the conference, while two longtime contributors will join the ranks of the prestigious AAAI Fellows program.</p><p><a href="http://ic.gatech.edu/">School of Interactive Computing</a> Chair <strong>Ayanna Howard</strong> and Professor <strong>Ashok Goel</strong>, 2021 inductees to the fellowship, join <a href="http://cc.gatech.edu/">College of Computing</a> Dean <strong>Charles Isbell</strong> (elected in 2019) and Regents&rsquo; Professor Emerita <strong>Janet Kolodner</strong> (elected in 1992), giving the Institute four members. The program recognizes individuals who have made significant, sustained contributions to the field of artificial intelligence.</p><p>[<strong>Related news:</strong> <a href="https://www.cc.gatech.edu/news/643355/ic-professors-howard-goel-named-2021-aaai-fellows">IC Professors Howard, Goel Named 2021 AAAI Fellows</a>]</p><p>Notable research among the 11 papers accepted to AAAI 2021 includes work from a multi-institution team working to understand and improve forecasting models of influenza-like illnesses like Covid-19. 
Effective forecasting is even more challenging amidst the current pandemic, when counts are affected by various factors such as symptomatic similarities.</p><p>The approach in this paper steers historical forecasting models to new scenarios where the flu and Covid-19 co-exist, demonstrating success in adaptation without sacrificing overall performance.</p><p>Georgia Tech&rsquo;s <strong>Alexander Rodr&iacute;guez</strong> and <strong>B. Aditya Prakash</strong> are co-authors on the paper, along with <strong>Nikhil Muralidhar</strong>, <strong>Anika Tabassum</strong>, and <strong>Naren Ramakrishnan</strong> of Virginia Tech, and <strong>Bijaya Adhikari</strong> of the University of Iowa.</p><p>[<strong>Related news:</strong> <a href="https://www.cc.gatech.edu/news/642638/research-team-wins-two-covid-19-challenges-one-week">Research Team Wins Two Covid-19 Challenges in One Week</a>]</p><p>Explore Georgia Tech&rsquo;s presence in this visualization and view a list of papers below.</p><p><a href="https://public.tableau.com/views/AAAI2021-GeorgiaTechAIresearch/Dashboard1?:language=en&amp;:display_count=y&amp;:origin=viz_share_link:showVizHome=no">INTERACTIVE VISUALIZATION: Georgia Tech @ AAAI 2021</a></p><ul><li><a href="https://www.medrxiv.org/content/10.1101/2020.09.28.20203109v2">DeepCOVID: An Operational Deep Learning-driven Framework for Explainable Real-time COVID-19 Forecasting</a> (Alexander Rodr&iacute;guez, Anika Tabassum, Jiaming Cui, Jiajia Xie, Javen Ho, Pulak Agarwal, Bijaya Adhikari, B. 
Aditya Prakash)<br />&nbsp;</li><li><a href="https://www.medrxiv.org/content/10.1101/2020.09.28.20203109v2">Semantic MapNet: Building Allocentric Semantic Maps and Representations from Egocentric Views</a> (Vincent Cartillier, Zhile Ren, Neha Jain, Stefan Lee, Irfan Essa, Dhruv Batra)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/2009.11407.pdf">Steering a Historical Disease Forecasting Model Under a Pandemic: Case of Flu and COVID-19</a> (Alexander Rodr&iacute;guez, Nikhil Muralidhar, Bijaya Adhikari, Anika Tabassum, Naren Ramakrishnan, B. Aditya Prakash)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/2009.11407.pdf">Bias and Variance of Post-processing in Differential Privacy</a> (Keyu Zhu, Pascal Van Hentenryck, Ferdinando Fioretto)<br />&nbsp;</li><li>Branch and Price for Bus Driver Scheduling with Complex Break Constraints (Lucas Kletzander, Nysret Musliu, Pascal Van Hentenryck)<br />&nbsp;</li><li>Detecting and Adapting to Novelty in Games (Xiangyu Peng, Jonathan Balloch, Mark Riedl)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/2009.12562.pdf">Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach</a> (Cuong Tran, Ferdinando Fioretto, Pascal Van Hentenryck)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/2010.00685.pdf">How to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and Act in Fantasy Worlds</a>&nbsp;(Prithviraj Ammanabrolu, Jack Urbanek, Margaret Li, Arthur Szlam, Tim Rocktaschel, Jason Weston)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/2009.00829.pdf">Automated Storytelling via Causal, Commonsense Plot Ordering</a>&nbsp;(Prithviraj Ammanabrolu, Wesley Cheung, William Broniec, Mark Riedl)<br />&nbsp;</li><li><a href="https://arxiv.org/pdf/1902.06007.pdf">Encoding Human Domain Knowledge to Warm Start Reinforcement Learning</a>&nbsp;(Andrew Silva, Matthew Gombolay)</li><li><a href="https://arxiv.org/abs/2101.06351">Weakly-Supervised Hierarchical Models for Predicting Persuasive 
Strategies in Good-faith Textual Requests</a> (Jiaao Chen, Diyi Yang)</li></ul>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1611926692</created>  <gmt_created>2021-01-29 13:24:52</gmt_created>  <changed>1612194510</changed>  <gmt_changed>2021-02-01 15:48:30</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Twenty-three members of the Georgia Tech community contributed to 11 papers that will be presented virtually at AAAI 2021, while two longtime contributors will join the ranks of the prestigious AAAI Fellows program.]]></teaser>  <type>news</type>  <sentence><![CDATA[Twenty-three members of the Georgia Tech community contributed to 11 papers that will be presented virtually at AAAI 2021, while two longtime contributors will join the ranks of the prestigious AAAI Fellows program.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-01-29T00:00:00-05:00</dateline>  <iso_dateline>2021-01-29T00:00:00-05:00</iso_dateline>  <gmt_dateline>2021-01-29 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>643611</item>          <item>643694</item>      </media>  <hg_media>          <item>          <nid>643611</nid>          <type>image</type>          <title><![CDATA[Artificial Intelligence]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[artificial-intelligence-4469138_1280.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/artificial-intelligence-4469138_1280.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/artificial-intelligence-4469138_1280.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/artificial-intelligence-4469138_1280.jpg?itok=wYW4x4S2]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Artificial Intelligence]]></image_alt>                    <created>1611926616</created>          <gmt_created>2021-01-29 13:23:36</gmt_created>          <changed>1611926616</changed>          <gmt_changed>2021-01-29 13:23:36</gmt_changed>      </item>          <item>          <nid>643694</nid>          <type>image</type>          <title><![CDATA[AAAI 2021 Visualization]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[aaai_viz.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/aaai_viz.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/aaai_viz.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/aaai_viz.jpg?itok=7AzuYYWQ]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech at AAAI 2021]]></image_alt>                    <created>1612194422</created>          <gmt_created>2021-02-01 15:47:02</gmt_created>          <changed>1612194422</changed>          <gmt_changed>2021-02-01 15:47:02</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and 
Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="643355">  <title><![CDATA[IC Professors Howard, Goel Named 2021 AAAI Fellows]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Chair <strong>Ayanna Howard</strong> and Professor <strong>Ashok Goel</strong> were both named <a href="https://www.aaai.org/Awards/fellows.php">2021 Fellows by the Association for the Advancement of Artificial Intelligence</a> (AAAI).</p><p>The AAAI Fellows program recognizes individuals who have made significant, sustained contributions &ndash; usually over at least a 10-year period &ndash; to the field of artificial intelligence (AI).</p><p>Goel&rsquo;s research, which spans about 35 years, has connected fields of AI, cognitive science, and human cognition. Increasingly, it has merged the fields of AI and education, culminating in his lab&rsquo;s groundbreaking work on <a href="https://emprize.gatech.edu/">Jill Watson</a>, a virtual teaching assistant that can answer student questions in discussion forums for online classes. This trailblazing work has been recognized by numerous media outlets across the globe and has enormous long-term implications for the future of education.</p><p>&ldquo;This is an exciting time for AI research into cognitive systems,&rdquo; Goel said. &ldquo;In one direction, my research uses the needs of human learning to ground and inspire novel AI techniques and tools. 
In the other, it uses AI theories and methods to provide new insights into human cognition and behavior.&rdquo;</p><p>The team responsible for the advancement of Jill Watson and additional AI techniques for education, called emPrize, <a href="https://www.cc.gatech.edu/news/631981/team-makes-semifinals-global-ai-competition">advanced to the semifinals of the international XPrize AI competition in 2020</a>.</p><p>Howard, <a href="https://www.cc.gatech.edu/news/641685/renowned-roboticist-departing-georgia-tech-new-position">who was recently named the next Dean of Engineering at The Ohio State University</a>, has performed similarly impactful research over her time in the field. As the director of the Human-Automation Systems Lab (HumAnS) at Georgia Tech, she has led research in conceptualizing humanized intelligence, the process of embedding human cognitive capability into the control path of autonomous systems.</p><p>Specifically, the lab studies how human-inspired techniques, such as soft computing methodologies, sensing, and learning can be used to enhance the autonomous capabilities of intelligent systems. This has impact in both virtual AI and robotics, and has led to enterprises like <a href="http://zyrobotics.com/">Zyrobotics</a>, the company Howard co-founded that produces mobile therapy and educational products for children with differing needs.</p><p>Additionally, she has been a spokesperson for the importance of ethical research in the field.</p><p>&ldquo;We&rsquo;re at such a critical moment in the development of artificial intelligence,&rdquo; Howard said. &ldquo;There is incredible possibility, but equally daunting challenges. It&rsquo;s an honor to be recognized for the work we are doing in this field, but it&rsquo;s far from over. 
My hope is that I can inspire future researchers to pursue impactful and ethical advancements in the field.&rdquo;</p><p>Eight others aside from Goel and Howard were also selected to the fellowship program for 2021 and will be recognized at the <a href="https://aaai.org/Conferences/AAAI-21/">2021 AAAI conference</a>, being held virtually Feb. 2-9.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1611340903</created>  <gmt_created>2021-01-22 18:41:43</gmt_created>  <changed>1611340903</changed>  <gmt_changed>2021-01-22 18:41:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The AAAI Fellows program recognizes individuals who have made significant, sustained contributions to the field of artificial intelligence (AI).]]></teaser>  <type>news</type>  <sentence><![CDATA[The AAAI Fellows program recognizes individuals who have made significant, sustained contributions to the field of artificial intelligence (AI).]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-01-22T00:00:00-05:00</dateline>  <iso_dateline>2021-01-22T00:00:00-05:00</iso_dateline>  <gmt_dateline>2021-01-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>643352</item>      </media>  <hg_media>          <item>          <nid>643352</nid>          <type>image</type>          <title><![CDATA[Ashok Goel and Ayanna Howard]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Ashok Goel and Ayanna Howard.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Ashok%20Goel%20and%20Ayanna%20Howard.png]]></image_path>       
     <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Ashok%20Goel%20and%20Ayanna%20Howard.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Ashok%2520Goel%2520and%2520Ayanna%2520Howard.png?itok=UtUtQ-xW]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Ashok Goel and Ayanna Howard]]></image_alt>                    <created>1611340547</created>          <gmt_created>2021-01-22 18:35:47</gmt_created>          <changed>1611340547</changed>          <gmt_changed>2021-01-22 18:35:47</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181920"><![CDATA[cc-research; ic-ai-ml; ic-robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="643307">  <title><![CDATA[IC Associate Professor Wins 2021 ACM-W Rising Star Award]]></title>  <uid>33939</uid>  <body><![CDATA[<p><a href="http://ic.gatech.edu/">School of Interactive Computing</a> Associate Professor <strong>Munmun De Choudhury</strong> was named a winner of the <a href="https://women.acm.org/awards/rising-star-award/">2021 ACM-W Rising Star Award</a>.</p><p>The award, bestowed by the Association for 
Computing Machinery, recognizes a woman whose early-career research has had a significant impact on the computing discipline, as measured by factors like societal impact, frequent citation of work, or creation of a new research area.</p><p>De Choudhury will receive a framed certificate and a $1,000 stipend for the recognition, which is in its first year of existence and will be given out annually. She will be recognized for the award at a research conference to be named later.</p><p>&ldquo;I feel deeply honored for this recognition and owe my successes to my wonderful students and collaborators, as well as the intellectual freedom provided by Georgia Tech&rsquo;s College of Computing that has helped trailblaze interdisciplinary research in computing, like mine, for years,&rdquo; she said.</p><p>De Choudhury&rsquo;s work leverages large-scale online social data and advances in machine learning to help answer fundamental questions relating to our social lives. Chief among them are questions within the field of mental health care &ndash; understanding mental health, improving access to care, and more. Her work has been recognized with a number of other awards, including 13 best paper and honorable mention paper awards from the ACM and AAAI, and has been covered by publications such as The New York Times, BBC, and NPR.</p><p>In addition to the personal appreciation, De Choudhury stressed the importance of recognizing the work of under-represented researchers in the computing field.</p><p>&ldquo;I&rsquo;d like to commend the efforts of ACM-W for creating this new opportunity to celebrate the research of a group under-represented in the computing field,&rdquo; she said. &ldquo;There is a long way to go when it comes to computing making a significant positive impact on a pervasive societal problem like mental health. 
Still, this award serves as a valuable encouragement for the next frontier of my research program.&rdquo;</p><p>De Choudhury leads the <a href="http://socweb.cc.gatech.edu/">Social Dynamics and Wellbeing Lab</a>. Research from the lab, both past and current, can be explored in more detail on its website.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1611259070</created>  <gmt_created>2021-01-21 19:57:50</gmt_created>  <changed>1611259070</changed>  <gmt_changed>2021-01-21 19:57:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The award recognizes a woman whose early-career research has had a significant impact on the computing discipline.]]></teaser>  <type>news</type>  <sentence><![CDATA[The award recognizes a woman whose early-career research has had a significant impact on the computing discipline.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2021-01-21T00:00:00-05:00</dateline>  <iso_dateline>2021-01-21T00:00:00-05:00</iso_dateline>  <gmt_dateline>2021-01-21 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>587685</item>      </media>  <hg_media>          <item>          <nid>587685</nid>          <type>image</type>          <title><![CDATA[Munmun De Choudhury]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[munmun portrait_horz.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/munmun%20portrait_horz.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/munmun%20portrait_horz.jpg]]></image_full_path>        
    <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/munmun%2520portrait_horz.jpg?itok=wrtogdb-]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech Assistant Professor Munmun De Choudhury]]></image_alt>                    <created>1487686001</created>          <gmt_created>2017-02-21 14:06:41</gmt_created>          <changed>1487783642</changed>          <gmt_changed>2017-02-22 17:14:02</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182015"><![CDATA[cc-research; ic-ai-ml; ic-hcc; ic-social-computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="642143">  <title><![CDATA[Q&A: De'Aira Bryant Discusses Her Experience Programming a Robot for the Movie Superintelligence]]></title>  <uid>33939</uid>  <body><![CDATA[<p><strong>De&rsquo;Aira Bryant</strong> didn&rsquo;t come to Georgia Tech to work in the movie industry. Her interests lie within the field of robotics, where she works on projects that will increase the quality of human life.</p><p>Being in the heart of Atlanta, however, the burgeoning heart of the film industry, comes with a few perks. Last year, Bryant was able to take advantage of one when she was contacted by representatives from the production crew of <em>Superintelligence</em>. 
The movie, recently released on HBO Max, stars Melissa McCarthy as a woman who must prove to an artificial intelligence that humanity is worth saving.</p><p>For the movie, Bryant was asked to program a Nao, a humanoid robot she uses in the <a href="https://humanslab.ece.gatech.edu/">Human-Automation Systems (HumAnS) Lab</a> run by her advisor, School of Interactive Computing Chair Ayanna Howard. Read about Bryant&rsquo;s experience programming the biggest star on the set.</p><p><strong>How did this opportunity to work with <em>Superintelligence</em> come about, and what was the experience like?</strong></p><p>The production team reached out to the College of Computing. They were interested in having a robot for a scene and needed someone who could program the Nao to match the scene they had written. They reached out to Dr. Howard because they knew she had that type of robot, and she reached out to me because I&rsquo;m the person who does most of the customized programming for this particular robot. If there&rsquo;s a script or movements or whatever, I&rsquo;m the choreographer.</p><p>It was exciting. I was like, &ldquo;Oh my goodness, this is for a movie.&rdquo; I had no idea what it was about, but I was just excited to be a part of it. They asked if their ideas were possible. The production team was like, &ldquo;We don&rsquo;t know what it can do, but we think it looks cool. Can you make it do this?&rdquo; We talked on the phone, and then I went to work.</p><p><strong>How long did you have to program it?</strong></p><p>I had about a week to get it ready. I had this idea of what they wanted, and I just tried to program it as best as I could.</p><p><strong>So, tell me about the day of. What was it like being on set?</strong></p><p>I took the robot to the Klaus Advanced Computing Building. They were filming in there. It was so exciting to see everything. I had to tell the robot to go on their cue, so I was sitting right behind the camera. 
I got to meet Melissa McCarthy and some of the other stars, and I got a few pictures with them that I&rsquo;m excited to finally be able to share with everyone. Everyone was so welcoming and understanding that the robot needed some time. I like to say that the robot was the biggest superstar on the set. It had its moments where it was like, &ldquo;I&rsquo;m not ready yet. My joint isn&rsquo;t quite ready to do this movement.&rdquo; They were understanding and eager to learn. They wanted their own pictures with the robot and everything, and had their own questions that I was excited to answer.</p><p><strong>A lot of non-roboticists&rsquo; or AI researchers&rsquo; first experiences with robots are in mass media like movies or TV shows, and normally it&rsquo;s some dystopian or disaster scenario. How seriously did you take that responsibility or opportunity to portray the lighter, more realistic side?</strong></p><p>I think for a lot of people, robots &ndash; especially these humanoid ones &ndash; have been largely portrayed negatively. They focus on disaster cases that may never happen in the next 100 years, if ever. There hasn&rsquo;t been a lot of mass media attention that focuses on more positive use cases. I take that very seriously in our work, just knowing that we focus on people, on children who can benefit from the technology and have it improve their quality of life. It&rsquo;s important to show those cases to affect the narrative. But we also want to highlight the concerns that are justified &ndash; things like bias and the ethics of using robotics in certain domains. Those are real things that people are working to mitigate now, so we can bring people closer to what the field actually looks like by highlighting both.</p><p>Every time I teach kids or teach a class, I start out by showing what robots can actually do. I show videos of them falling over or something like that to illustrate that those terminators or killer robots aren&rsquo;t happening right now. 
But there are some other issues that are real and current and pressing, and here&rsquo;s how we address them.</p><p><strong>Being at Georgia Tech with movies filmed nearby has offered these kinds of neat opportunities. How neat is it to have this platform?</strong></p><p>My friends think it&rsquo;s so much cooler that I helped work on a movie that is going to be on HBO Max than for me to have some paper published at this really prestigious conference. The movie resonates with them more, so it&rsquo;s an opportunity to have a connection. They can relate to the technology in a way that is natural to them and ask questions, and I can share more about robotics and my work. That&rsquo;s how we get people interested in the field.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1608073783</created>  <gmt_created>2020-12-15 23:09:43</gmt_created>  <changed>1608073783</changed>  <gmt_changed>2020-12-15 23:09:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[De'Aira Bryant programmed a robot for a scene in the movie Superintelligence. She discusses her experience in this Q&A.]]></teaser>  <type>news</type>  <sentence><![CDATA[De'Aira Bryant programmed a robot for a scene in the movie Superintelligence. 
She discusses her experience in this Q&A.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-12-15T00:00:00-05:00</dateline>  <iso_dateline>2020-12-15T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-12-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>642140</item>      </media>  <hg_media>          <item>          <nid>642140</nid>          <type>image</type>          <title><![CDATA[De'Aira Bryant Superintelligence]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[BryantSuperintelligence2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/BryantSuperintelligence2.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/BryantSuperintelligence2.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/BryantSuperintelligence2.jpg?itok=ioVEhMje]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[De'Aira Bryant works on the set of the movie Superintelligence]]></image_alt>                    <created>1608072918</created>          <gmt_created>2020-12-15 22:55:18</gmt_created>          <changed>1608072918</changed>          <gmt_changed>2020-12-15 22:55:18</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>         
 <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182940"><![CDATA[cc-research; ic-ai-ml; ic-robotics; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="642142">  <title><![CDATA[Sehoon Ha Part of $500k Grant to Make Safer, More Deployable Robots]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Safety is arguably the biggest barrier to large-scale deployability of humanoid assistive robots.</p><p>Their large size, heavy weight, and potential to suddenly fall over mean that the risk to humans has remained too high to place this technology in homes, hospitals, retail spaces, or care facilities.</p><p>In 2016, however, researchers at UCLA posed a solution: What if we made robots that just couldn&rsquo;t fall down? Now, researchers at Georgia Tech, in collaboration with UCLA and the University of Southern California, are working to develop a new class of locomotion systems that could enable this technology to become a larger part of our daily lives.</p><p>&ldquo;We have lots of robots,&rdquo; said Sehoon Ha, an assistant professor in Georgia Tech&rsquo;s School of Interactive Computing and a co-principal investigator on the project. &ldquo;But they aren&rsquo;t in our house or in our stores. It&rsquo;s mainly because of safety. I have a young daughter. 
I wouldn&rsquo;t be comfortable with a full-sized humanoid robot in my house.&rdquo;</p><p>Previously, UCLA developed a new class of robots called &ldquo;buoyancy-assisted robots.&rdquo; Instead of human-like hardware that was bulky, heavy, and subject to the pitfalls of gravity, these legged robots remained erect thanks to a body made of helium balloons.</p><p>&ldquo;Even though there is some mechanical or motor error, it never falls,&rdquo; Ha said. &ldquo;It never breaks. It&rsquo;s super light. Even if it might collide with you, it doesn&rsquo;t fall and it can&rsquo;t hurt you.&rdquo;</p><p>Creating a new class of locomotion systems poses a couple of challenges: designing new hardware that is cheap and safe, and developing an algorithm that supports locomotion and collaboration. This grant will support development of novel frameworks that address a fundamentally new family of legged robots and empower them with reliable locomotion skills.</p><p>&ldquo;The main philosophy is to deploy the reinforcement learning on real hardware,&rdquo; Ha said. &ldquo;This buoyancy-assisted robot is subject to a relatively larger magnitude of drag forces. It&rsquo;s hard to simulate it. There&rsquo;s a discrepancy between simulation and the real world. We want to collect real-world experience and limit the reality gap.&rdquo;</p><p>The technology could help carry out a search and rescue in a disaster relief zone or answer a question in a retail space. 
The new project, funded by a $500,000 grant from the National Science Foundation&rsquo;s National Robotics Initiative, will help create new locomotion control systems using reinforcement learning to improve the state of this technology.</p><p>Already cheaper than their bulkier counterparts, these robots could cost as little as a couple hundred dollars each when produced at scale, Ha said.</p><p>&ldquo;Now you might imagine a scenario where you could drop 1,000 of these into a disaster area to carry out search and rescue missions,&rdquo; he said.</p><p>The grant runs for four years, and research from the project will be open source to encourage additional collaboration. The grant will also support a competition for middle and high school students using the low-cost platforms to foster student interest in the field.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1608073378</created>  <gmt_created>2020-12-15 23:02:58</gmt_created>  <changed>1608073378</changed>  <gmt_changed>2020-12-15 23:02:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Researchers at Georgia Tech, in collaboration with UCLA and the University of Southern California, are working to develop a new class of locomotion systems that could enable buoyancy-assisted robots to become a larger part of our daily lives.]]></teaser>  <type>news</type>  <sentence><![CDATA[Researchers at Georgia Tech, in collaboration with UCLA and the University of Southern California, are working to develop a new class of locomotion systems that could enable buoyancy-assisted robots to become a larger part of our daily lives.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-12-15T00:00:00-05:00</dateline>  <iso_dateline>2020-12-15T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-12-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David 
Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>642141</item>      </media>  <hg_media>          <item>          <nid>642141</nid>          <type>image</type>          <title><![CDATA[Sehoon Ha]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[sehoon.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/sehoon.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/sehoon.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/sehoon.jpg?itok=vnZXPkeU]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Sehoon Ha]]></image_alt>                    <created>1608073322</created>          <gmt_created>2020-12-15 23:02:02</gmt_created>          <changed>1608073322</changed>          <gmt_changed>2020-12-15 23:02:02</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181920"><![CDATA[cc-research; ic-ai-ml; ic-robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  
<files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="641437">  <title><![CDATA[New Grant Helps Researchers Bring Cybersecurity into the Physical World]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Imagine if you could physically feel a threat to your digital security &ndash; perhaps a vibration on your wrist to alert you to nearby danger. What kinds of precautions would you take if you felt these digital threats the same way you felt those of the physical world?</p><p>Like carrying a can of pepper spray when walking down a dark alleyway &ndash; or avoiding the alleyway altogether &ndash; a new project out of Georgia Tech&rsquo;s <a href="http://ic.gatech.edu">School of Interactive Computing</a> (IC) aims to connect this abstract world of cybersecurity and privacy with concrete physical environments to promote better security behavior.</p><p>&ldquo;In the real world, we have these corporeal sensations that give us cues on how to act,&rdquo; said IC Assistant Professor <strong>Sauvik Das</strong>, the principal investigator on the project. &ldquo;If you feel a cold breeze on your cheek, you may decide to wear a scarf. If you are walking down a dark alleyway, you may become more alert and aware of your surroundings. It&rsquo;s a different story in the present state of cybersecurity and privacy.&rdquo;</p><p>That current state is mostly limited to a warning when you&rsquo;re leaving a secure network on your computer or a pop-up box that might caution against proceeding to a specific website. But what about the digital threats we face daily when proceeding throughout our daily routines, perusing the internet on our phones or walking through a crowded airport?</p><p>There are no corporeal sensory perception cues that indicate what is threatening or worthy of our attention. 
Similarly, we don&rsquo;t have affordances that allow us to manipulate digital interfaces in ways that will better protect us against these threats that we find salient.</p><p>&ldquo;That&rsquo;s the idea here,&rdquo; Das said. &ldquo;We want to solve this abstraction problem by physically alerting people to threats and giving them means to defend against them.&rdquo;</p><p>The project presents three solutions to the digital abstraction problem &ndash; Spidey Sense, Bit Whisperer, and Horcrux. Each aims to solve a specific branch of the problem: alerting you to threats, giving you more effective means to defend against threats, and providing means to better govern shared resources.</p><h3><strong>Spidey Sense</strong></h3><p>Spidey Sense uses a wristband that integrates with modern Apple Watches and can squeeze the wrist in programmable patterns to notify the wearer of perceived digital threats.</p><p>The idea is that people might not feel a threat communicated through visual design the same way they might when walking down a dark alleyway at night.</p><p>&ldquo;How can we similarly communicate that threat?&rdquo; Das poses. &ldquo;This field of affective haptics was a good bridge.&rdquo;</p><h3><strong>Bit Whisperer</strong></h3><p>So, what do you do when you know threats exist? In the real world, one might intuit that to block entry into a room they could place a heavy object in front of a door, or that to communicate secure information they might need to whisper. This project aims to present similar options for digital information.</p><p>&ldquo;It&rsquo;s like whispering through the digital world,&rdquo; Das said.</p><p>To transfer data from one smart device to another, one might use Bluetooth. But one can&rsquo;t see the bits traveling through the air as they are communicated. 
Bit Whisperer uses physical objects, like a table, to communicate.</p><p>Using inaudible sound frequencies that can be generated through smartphones, data is transmitted through the physical surface from one device to other devices on the same surface. Anyone off the surface can&rsquo;t receive the data without physically placing their device on it, making it much more challenging for would-be attackers.</p><h3><strong>Horcrux</strong></h3><p>Horcrux is a more abstract project at present. It aims to assist individuals in governing shared digital resources. The current state of the art provides point-and-click interfaces, but those make it impossible to multitask and challenging to specify access controls.</p><p>This project, like the others, aims to provide physical tools that can be manipulated by hand to make it easier to specify access.</p><p>The idea now is a mat where play pieces like figurines can represent people or resources that people own.</p><p>&ldquo;Think of a castle where you can move figurines through different accesses,&rdquo; Das said. &ldquo;These tangible interfaces allow for more interaction, more multitasking, and visible physical representations for what everyone has access to.&rdquo;</p><p>These projects are being funded by a $500,000 grant from the National Science Foundation. IC Professor <strong>Gregory Abowd</strong> is a co-principal investigator on the grant, and Ph.D. 
student <strong>Youngwook Do</strong> is a key contributor.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1605800311</created>  <gmt_created>2020-11-19 15:38:31</gmt_created>  <changed>1605800311</changed>  <gmt_changed>2020-11-19 15:38:31</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new project out of Georgia Tech’s School of Interactive Computing (IC) aims to connect the abstract world of cybersecurity and privacy with concrete physical environments to promote better security behavior.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new project out of Georgia Tech’s School of Interactive Computing (IC) aims to connect the abstract world of cybersecurity and privacy with concrete physical environments to promote better security behavior.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-11-19T00:00:00-05:00</dateline>  <iso_dateline>2020-11-19T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-11-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>626044</item>      </media>  <hg_media>          <item>          <nid>626044</nid>          <type>image</type>          <title><![CDATA[Cybersecurity stock image]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Cybersecurity_stock_image.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Cybersecurity_stock_image.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Cybersecurity_stock_image.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Cybersecurity_stock_image.jpg?itok=wkg46t70]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Stock photo of stylized padlock icons surrounded by a word cloud of information security terms.]]></image_alt>                    <created>1568223064</created>          <gmt_created>2019-09-11 17:31:04</gmt_created>          <changed>1568223064</changed>          <gmt_changed>2019-09-11 17:31:04</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182941"><![CDATA[cc-research; ic-cybersecurity; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="145171"><![CDATA[Cybersecurity]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="641381">  <title><![CDATA[Need a Note Taker? This AI Can Help.]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A new tool that uses artificial intelligence is bringing notetaking up to speed and may help future digital assistants ease fears of ever missing a meeting again.</p><p>It&rsquo;s an age-old problem: We are inundated with informal forms of communication like phone calls, remote video conferences, text conversations on group messaging platforms like Slack or Microsoft Teams. 
Remembering key points of each discussion can at times be overwhelming, not to mention the stress caused by missing a meeting or seeing a couple hundred messages stack up while you were out for lunch.</p><p>This digital solution, developed by Georgia Tech researchers and being presented in a paper this week at the <a href="https://2020.emnlp.org/">2020 Conference on Empirical Methods in Natural Language Processing</a>, can assuage those concerns by generating summaries of informal conversations. Using machine-learning techniques for natural language processing, the method identifies conversational structure using particular keywords.</p><p>&ldquo;Think about informal conversational structure: It has an opening, problem statements, discussions, a conclusion,&rdquo; said <strong>Diyi Yang</strong>, an assistant professor in the School of Interactive Computing and a co-author on the paper. &ldquo;We want to mine those structures to teach the model what may be informative within the conversation for generating better summaries.&rdquo;</p><p>Words like any variation of &ldquo;hello&rdquo; or &ldquo;good,&rdquo; for example, might indicate a greeting. Other action words likely indicate some kind of intention, while dates or times likely signal discussion or conclusion of plans. Knowing this, the model can better represent the unstructured conversation to craft an accurate summary.</p><p>These types of summaries are more important now than ever. More individuals all over the world are working or attending school remotely. More discussions are being handled over the phone or video conferencing, with plans being made through applications like Microsoft Teams. Previous research on the subject has focused on formal content like books, papers, or news articles, but the existing body of work on informal language is relatively sparse.</p><p>&ldquo;This is applicable now more than ever because of where we are,&rdquo; Yang said. 
&ldquo;There&rsquo;s so much online and text conversation, and we have way too much information. We need help storing it in a shorter and more structured way. If you&rsquo;re away from your laptop for 30 minutes, it&rsquo;s important to be able to get a quick summary of what you missed.&rdquo;</p><p>Challenges still exist. There are problems with referral in conversation, such as calling back to a previous discussion point later in a meeting. There are also typos, slang, repetition, interruptions, and changes in role or language that can interfere with the model&rsquo;s ability to determine structure. These are items Yang and her collaborator are continuing to address moving forward.</p><p>&ldquo;This is a great starting point,&rdquo; Yang said.</p><p>The work is presented in the paper <a href="https://arxiv.org/pdf/2010.01672.pdf"><em>Multi-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization</em></a>. The paper is co-authored by Yang and <strong>Jiaao Chen</strong>, a second-year Ph.D. 
student in the School of Interactive Computing.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1605632628</created>  <gmt_created>2020-11-17 17:03:48</gmt_created>  <changed>1605632628</changed>  <gmt_changed>2020-11-17 17:03:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new AI tool that summarizes unstructured conversational language could help future digital assistants ease fears of ever missing a meeting again.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new AI tool that summarizes unstructured conversational language could help future digital assistants ease fears of ever missing a meeting again.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-11-17T00:00:00-05:00</dateline>  <iso_dateline>2020-11-17T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-11-17 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>641380</item>      </media>  <hg_media>          <item>          <nid>641380</nid>          <type>image</type>          <title><![CDATA[Taking Notes]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Note taking photo.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Note%20taking%20photo.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Note%20taking%20photo.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Note%2520taking%2520photo.jpg?itok=uPR0lqn4]]></image_740>            
<image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A stack of notes on a table]]></image_alt>                    <created>1605631344</created>          <gmt_created>2020-11-17 16:42:24</gmt_created>          <changed>1605631344</changed>          <gmt_changed>2020-11-17 16:42:24</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181639"><![CDATA[cc-research; ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="640793">  <title><![CDATA[Georgia Tech Researchers Contribute 13 Papers to Premier Visualization Conference]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech contributed to 13 papers and two workshops this week at <a href="http://ieeevis.org/year/2020/welcome">IEEE VIS 2020</a>, the premier forum for advances in theory, methods, and applications of visualization and visual analytics.</p><p>The conference highlights research from universities, government, and industry around the world. It is comprised of three separate events: IEEE Visual Analytics Science and Technology (VAST), IEEE Information Visualization (InfoVis), and IEEE Scientific Visualization (SciVis). 
Like other conferences throughout the Covid-19 pandemic, VIS was held virtually.</p><p>Georgia Tech&rsquo;s research was highlighted by one Best Paper Honorable Mention titled <em>Mapping Researchers with PeopleMap</em>. The paper &ndash; authored by <strong>Jon Saad-Falcon</strong>, <strong>Omar Shaikh</strong>, <strong>Zijie J. Wang</strong>, <strong>Austin P. Wright</strong>, <strong>Sasha Richardson</strong>, and <strong>Polo Chau</strong> &ndash; presents an open-source interactive tool that uses natural language processing to create visual maps for researchers based on their research interests and publications.</p><p>&ldquo;Discovering research expertise at universities can be a difficult task,&rdquo; the paper contends. &ldquo;Directories routinely become outdated, and few help in visually summarizing researchers&rsquo; work or supporting the exploration of shared interests among researchers. This results in lost opportunities for both internal and external entities to discover new connections, nurture research collaboration, and explore the diversity of research.&rdquo;</p><p>The paper also received a VAST Poster Research Award.</p><p>Also of note, new School of Computational Science &amp; Engineering Chair <strong>Haesun Park</strong> received recognition for a 2010 IEEE VAST Paper. The paper received a Test of Time Award, recognizing it for continued contributions to the visual analytics and visualization community. The paper is titled <em>iVisClassifier: An Interactive Visual Analytics System for Classification Based on Supervised Dimension Reduction</em> and co-authored by <strong>Jaegul Choo</strong>, <strong>Hanseung Lee</strong>, and <strong>Jaeyeon Kihm</strong>.</p><p>School of Interactive Computing Ph.D. 
student <strong>Emily Wall</strong>, who is advised by Associate Professor <strong>Alex Endert</strong>, was also recognized with the VGTC Outstanding Dissertation Honorable Mention for her work <em>Detecting and Mitigating Human Bias in Visual Analytics</em>.</p><p>&ldquo;People are susceptible to a multitude of biases, including perceptual biases and illusions; cognitive biases like confirmation bias or anchoring bias; and social biases like racial or gender bias that are borne of cultural experiences and stereotypes,&rdquo; Wall contends. &ldquo;As humans are an integral part of data analysis and decision making in many domains, their biases can be injected into and even amplified by models and algorithms.&rdquo;</p><p>Her work aims to develop a better understanding of the role human bias plays in visual data analysis by defining bias, detecting bias, and mitigating bias.</p><p>Explore more about Georgia Tech&rsquo;s contributions to IEEE VIS at the links below, or visit the <a href="http://vis.gatech.edu/">Georgia Tech Visualization Lab</a>. 
You can follow the lab on Twitter at <a href="https://twitter.com/GT_Vis">@GT_Vis</a>.</p><p><strong>Georgia Tech at IEEE VIS 2020</strong></p><p><strong>Papers</strong></p><ul><li><a href="https://arxiv.org/abs/2007.15832">SafetyLens: Visual Data Analysis of Functional Safety of Vehicles (Arpit Narechania, Ahsan Qamar, and Alex Endert)</a></li><li><a href="https://nl4dv.github.io/nl4dv/">NL4DV: A Toolkit for Generating Analytic Specifications for Data Visualization from Natural Language Queries (Arpit Narechania, Arjun Srinivasan, and John Stasko)</a></li><li><a href="https://arjun010.github.io/individual-projects/databreeze.html">Interweaving Multimodal Interaction with Flexible Unit Visualizations for Data Exploration (Arjun Srinivasan, Bongshin Lee, and John Stasko)</a></li><li><a href="https://terrancelaw.github.io/publications/data_insight_interviews_vis20.pdf">What are Data Insights to Professional Visualization Users? (Po-Ming Law, Alex Endert, and John Stasko)</a></li><li><a href="https://terrancelaw.github.io/publications/auto_insights_vis20.pdf">Characterizing Automated Data Insights (Po-Ming Law, Alex Endert, and John Stasko)</a></li><li><a href="https://arxiv.org/abs/2004.15004">CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization (Zijie J. Wang, Robert Turko, Omar Shaikh, Haekyu Park, Nilaksh Das, Fred Hohman, Minsuk Kahng, Duen Horng (Polo) Chau)</a></li><li><a href="https://arxiv.org/abs/2009.02608">Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks (Nilaksh Das, Haekyu Park, Zijie J. Wang, Fred Hohman, Robert Firstman, Emily Rogers, Duen Horng (Polo) Chau)</a></li><li><a href="https://poloclub.github.io/papers/20-vis-ganlabeval.pdf">How Does Visualization Help People Learn Deep Learning? 
Evaluating GAN Lab with Observational Study and Log Analysis (Minsuk Kahng, Duen Horng (Polo) Chau)</a></li><li><a href="https://arxiv.org/abs/2009.00091">Mapping Researchers with PeopleMap (Jon Saad-Falcon, Omar Shaikh, Zijie J. Wang, Austin P. Wright, Sasha Richardson, Duen Horng (Polo) Chau)</a></li><li><a href="https://gtvalab.github.io/files/legion.pdf">LEGION: Visually compare modeling techniques for regression (Subhajit Das, Alex Endert)</a></li><li><a href="https://gtvalab.github.io/files/cava_dataaug.pdf">CAVA: A Visual Analytics System for Exploratory Columnar Data Augmentation Using Knowledge Graphs (Dylan Cashman, Shenyu Xu, Subhajit Das, Florian Heimerl, Cong Liu, Shah Rukh Humayoun, Michael Gleicher, Alex Endert, Remco Chang)</a></li><li>A Comparative Analysis of Industry Human-AI Interaction Guidelines (Austin P. Wright, Zijie J. Wang, Haekyu Park, Grace Guo, Fabian Sperrle, Mennatallah El-Assady, Alex Endert, Daniel Keim, Duen Horng (Polo) Chau)</li><li><a href="https://trexvis.github.io/Workshop2020/papers/Coscia.pdf">Toward A Bias-Aware Future for Mixed Initiative Visual Analytics (Adam Coscia, Duen Horng (Polo) Chau, Alex Endert)</a></li></ul><p><strong>Recognitions</strong></p><ul><li><a href="https://www.cc.gatech.edu/~hpark/papers/choo_vast10_v1.pdf">iVisClassifier: an Interactive Visual Analytics System for Classification Based on Supervised Dimension Reduction (Jaegul Choo, Hanseung Lee, Jaeyeon Kihm and Haesun Park)</a></li><li><a href="https://smartech.gatech.edu/handle/1853/63597">Detecting and Mitigating Human Bias in Visual Analytics (Emily Wall (Advisor: Alex Endert))</a></li></ul><p><strong>Workshops</strong></p><ul><li>MoVIS &#39;20 (Organizers: Clio Andris, Somayeh Dodge, Alan MacEachren)</li><li>VISxAI &#39;20 (Organizers: Adam Perer, Duen Horng (Polo) Chau, Fred Hohman, Hendrik Strobelt, Mennatallah El-Assady)</li></ul>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1604032917</created>  
<gmt_created>2020-10-30 04:41:57</gmt_created>  <changed>1604032917</changed>  <gmt_changed>2020-10-30 04:41:57</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[IEEE VIS highlights research from universities, government, and industry around the world.]]></teaser>  <type>news</type>  <sentence><![CDATA[IEEE VIS highlights research from universities, government, and industry around the world.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-10-30T00:00:00-04:00</dateline>  <iso_dateline>2020-10-30T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-10-30 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>640792</item>      </media>  <hg_media>          <item>          <nid>640792</nid>          <type>image</type>          <title><![CDATA[Georgia Tech at IEEE VIS 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2020-10-30 at 12.34.13 AM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202020-10-30%20at%2012.34.13%20AM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202020-10-30%20at%2012.34.13%20AM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202020-10-30%2520at%252012.34.13%2520AM.png?itok=cP3BBnmU]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Georgia Tech at IEEE VIS 2020]]></image_alt>                    
<created>1604032582</created>          <gmt_created>2020-10-30 04:36:22</gmt_created>          <changed>1604032582</changed>          <gmt_changed>2020-10-30 04:36:22</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="186124"><![CDATA[cc-research; ic-ai-ml; ic-hcc; ic-social-computing; ic-visualization]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="640199">  <title><![CDATA[Ivan Allen College of Liberal Arts and the College of Computing Launch New Ethics Center]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Building on years of experience in research and education in ethics and technology, the College of Computing and the Ivan Allen College of Liberal Arts have launched the Ethics, Technology, and Human Interaction Center (ETHIC<sup>x</sup>).</p><p>The new Center &mdash; pronounced &ldquo;ethics&rdquo; &mdash; will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology in collaboration with communities, government, non-governmental organizations, and industry. 
The office of the Executive Vice President for Research provided significant funds over a three-year period to seed the Center.</p><p>&ldquo;We must foster Georgia Tech&rsquo;s strengths in ethics, responsible research, and the development of emerging technologies in collaborative ways,&rdquo; said Raheem Beyah, Georgia Tech&rsquo;s vice president for interdisciplinary research. &ldquo;ETHIC<sup>x </sup>&nbsp;will provide the necessary environment to support this work and Georgia Tech&rsquo;s mission to advance technology and improve the human condition.&rdquo;</p><p>The Colleges already have in-depth research and education experience addressing technology-related ethics questions. For instance, the School of Public Policy founded the Center for Ethics and Technology more than 12 years ago to foster a culture of critical inquiry and deliberation about technology-related ethical issues. Faculty in that Center research ethical issues in the design of emerging contact-tracing technologies; design ethics; social justice theory and criticism and their relationship to emerging technologies such as smart cities, self-driving cars, and smart assistants; and platforms for fostering reflection and self-correcting reasoning in teaching and deliberation. 
The College of Computing also has created thriving research and educational initiatives such as the Ethical AI professional development course and the Law, Policy, and Ethics Initiative for Machine Learning @ GATECH.</p><p>The new Center will build on those strengths and position the Georgia Institute of Technology to become the leader in framing ethical concerns in technology, including fairness, accountability, transparency, social justice, and technological change.</p><h2>Anticipating New Ethical Challenges</h2><p>&ldquo;ETHIC<sup>x</sup> will be a place for robust, multidisciplinary research and a place to engage in systematic ethical analyses,&rdquo; said Kaye Husbands Fealing, dean of the Ivan Allen College of Liberal Arts and co-director of the new Center. &ldquo;It also will be a place for communities, corporations, governments, technologists, educators, and others to discuss and find solutions to complex ethical issues in science and technology.&rdquo;</p><p>The Center will conduct research on ethics and emerging technologies, framing ethical questions, developing solutions in ethics and technology, and addressing social justice and equity. Interdisciplinary and community-based research also will be emphasized.</p><p>Educational initiatives will include investigating and designing curricula for ethics training that can be woven throughout students&rsquo; educational journeys and for employees at affiliated companies.</p><p>&ldquo;Responsibility is a core value of everything we do in the College of Computing at Georgia Tech. That means focusing on our communities and examining the impacts, both positive and negative, of our research and curricula,&rdquo; said Charles Isbell, dean and John P. Imlay, Jr. chair of the College of Computing. &ldquo;It means reaching across disciplines to collaborate with experts in other fields&nbsp;who&nbsp;can inform our own technological developments. 
We find solutions for tomorrow&rsquo;s problems, which means we have to anticipate the new ethical challenges we will face. This Center will help us do that.&rdquo;</p><h2>New Center Builds on Deep Experience</h2><p>Ayanna Howard, chair in the School of Interactive Computing, joins Husbands Fealing as co-director of the new Center.</p><p>&ldquo;In the School of Interactive Computing, we encourage all of our faculty and student researchers to think critically about the new challenges their research presents and offer strategies to mitigate any potential negative impact on society,&rdquo; Howard said. &ldquo;Good innovation isn&rsquo;t just about developing new technologies; it&rsquo;s about developing solutions to problems that can make the world a better, more equitable, and more inclusive place.&rdquo;</p><p>Georgia Tech launched the School of Interactive Computing in anticipation of the need for interdisciplinary research in computer science, liberal arts, and more. Faculty members examine diverse ethical challenges, including misinformation, content moderation, free speech on social platforms, data privacy and security, virtual reality, wearable computing devices, and robo-ethics.</p><p>Faculty and students throughout the Ivan Allen College of Liberal Arts engage in interdisciplinary research collaborations on ethics and emerging technologies, including in areas such as engineering, the environment, bioethics, responsible innovation, research ethics, the <em>ethical</em>&nbsp;and political dimensions of design and technology, and more.</p><p>&ldquo;In the Ivan Allen College, careful consideration of the impacts of technology on people, and of people on technology, is a central part of our curriculum and values,&rdquo; said Justin Biddle, an associate professor in the School of Public Policy, director of the Center for Ethics and Technology, and a member of the new Center&rsquo;s leadership team. 
&ldquo;With innovation today often outpacing our ability to understand its consequences, and widespread questions regarding the relations between technology, equity, and social justice, this kind of thinking is more important than ever.&rdquo;</p><p>Faculty in both Colleges also have initiated discussions on the social and ethical implications of emerging technologies&nbsp;across campus and beyond. These include the <a href="https://ethics.gatech.edu/techdebates"><em>TechDebates on Emerging Technologies</em></a><em>, </em>the <a href="https://ethics.gatech.edu/sparks-forum">Sparks Forum on Ethics and Engineering</a>, the Machine Learning@GT Seminar Series, and the <a href="http://techfutures.lmc.gatech.edu/">Ethics and Technological Futures</a> series developed by Nassim Parvin and Susana Morris in the <a href="https://lmc.gatech.edu">School of Literature, Media, and Communication</a>. Ellen Zegura, a professor in the School of Computer Science, also leads a Mozilla grant aimed at embedding ethics in computer science classes through role play.</p><h2>&#39;Where the Best of Sciences and Humanities Meet&#39;</h2><p>Deven Desai, associate professor and area coordinator for Law and Ethics at Scheller College of Business, also will assume a key leadership role at ETHIC<sup>x</sup>. He said the new Center will &ldquo;build and deepen technology-related ethics scholarship and research across Georgia Tech.</p><p>&ldquo;Scheller College&rsquo;s focus on law and ethics is part of how we train future business leaders, the people who take innovation and bring it to market,&rdquo; said Desai, who is also associate director for Law, Policy, and Ethics for Machine Learning at GA Tech (ML@GATECH), an interdisciplinary research center.</p><p>&ldquo;ETHICx will be a place where the best of science and humanities meet to challenge and push to find the unasked, important questions. 
In that friction and fun, the best questions about the problems we face and the best answer about how to solve them so that everyone can benefit will come out,&rdquo; he said.</p><p>Other members of the new Center&rsquo;s key leadership team include Jason Borenstein, director of graduate research ethics programs in the School of Public Policy; Betsy DiSalvo, director of the human-centered computing Ph.D. program and associate professor in the School of Interactive Computing; Michael Hoffmann, a professor in the School of Public Policy; and Nassim Parvin, an associate professor in the School of Literature, Media, and Communication.</p><p>A launch event is planned for November, during Ethics Awareness Week, with a forum to identify key challenges in technology ethics. The Center will soon announce details.</p><p>For more information about ETHIC<sup>x</sup>, contact Husbands Fealing at <a href="mailto:dean@gatech.edu">dean@gatech.edu</a> or Howard at <a href="mailto:ah260@gatech.edu">ah260@gatech.edu</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1602687660</created>  <gmt_created>2020-10-14 15:01:00</gmt_created>  <changed>1602695752</changed>  <gmt_changed>2020-10-14 17:15:52</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The new Center will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology.]]></teaser>  <type>news</type>  <sentence><![CDATA[The new Center will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology.]]></sentence>  <summary><![CDATA[<p>The new Center will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology in collaboration with communities, government, non-governmental organizations.</p>]]></summary>  <dateline>2020-10-13T00:00:00-04:00</dateline>  <iso_dateline>2020-10-13T00:00:00-04:00</iso_dateline>  
<gmt_dateline>2020-10-13 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[michael.pearson@iac.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Michael Pearson<br />michael.pearson@iac.gatech.edu</p><p>David Mitchell<br />david.mitchell@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>640176</item>      </media>  <hg_media>          <item>          <nid>640176</nid>          <type>image</type>          <title><![CDATA[ETHICx Center graphic]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ETHICx graphic.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ETHICx%20graphic.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ETHICx%20graphic.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ETHICx%2520graphic.jpg?itok=KmTJTtUm]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1602623629</created>          <gmt_created>2020-10-13 21:13:49</gmt_created>          <changed>1602623629</changed>          <gmt_changed>2020-10-13 21:13:49</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="186032"><![CDATA[ETHICx]]></keyword>          
<keyword tid="186033"><![CDATA[Ethics Technology and Human Interaction Center]]></keyword>          <keyword tid="1616"><![CDATA[Ivan Allen College of Liberal Arts]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39511"><![CDATA[Public Service, Leadership, and Policy]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71871"><![CDATA[Campus and Community]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="639092">  <title><![CDATA[Georgia Tech Receives Google Grant to Study Impact of Pandemic Information Seeking on Vulnerable Populations]]></title>  <uid>33939</uid>  <body><![CDATA[<p><a href="http://gatech.edu">Georgia Tech</a> will receive $155,000 from <a href="https://ai.google/social-good/">Google&rsquo;s Covid-19 AI for Social Good</a> program to investigate patterns and impact of pandemic information-seeking amongst vulnerable populations, such as older adults, low-income households, and Black and Hispanic adults. These populations have experienced disproportionately high rates of Covid-19-related death, severe sickness, and life disruptions like job loss.</p><p>Factors like higher rates of underlying health problems, reduced access to health care, and structural inequities shape access to critical resources. 
These same populations, however, also often have less access to the types of online information designed to improve health outcomes.</p><p>This project, led by principal investigator <strong>Andrea Grimes Parker</strong>, an associate professor in the <a href="http://ic.gatech.edu">School of Interactive Computing</a>&nbsp;and member of the <a href="http://ipat.gatech.edu">Institute for People and Technology</a>, will investigate how vulnerable and marginalized populations use technology for information seeking during the Covid-19 pandemic, as well as the impact of information exposure on their psychological wellbeing over time.</p><p>&ldquo;The Covid-19 pandemic has brought further attention to systemic disparities in health that have long existed in the United States,&rdquo; Parker said. &ldquo;Within a public health crisis, the information that people are exposed to has huge implications for how attitudes around the pandemic are shaped, how people respond, and thus the course of the pandemic.</p><p>&ldquo;Our work will provide both qualitative and quantitative evidence of the particular ways in which Covid-19 information exposure is tied to outcomes such as mental health in those most vulnerable to Covid-19 mortality and morbidity.&rdquo;</p><p>Researchers will examine this information exposure over time. Their&nbsp;findings will help to shape recommendations for crisis information communication, particularly online, in the future. This work builds upon existing work by Parker and collaborators at Northeastern University.</p><p>Parker and colleagues <strong>Miso Kim</strong> and <strong>Jacqueline Griffin</strong> began their collaboration by investigating how well crisis apps &ndash; mobile apps designed to provide help during emergency situations &ndash; support older adults. 
This work was published at the 2020 ACM Conference on Human Factors in Computing Systems.</p><p>When the pandemic began, they expanded their focus to additional groups vulnerable to poor health, such as low-income and racial and ethnic minority populations. The team, in collaboration with Professor <strong>Stacy Marsella</strong>, also broadened its scope beyond crisis apps, designing a survey to investigate information-seeking practices in vulnerable populations amidst the pandemic.</p><p>This survey has been distributed to over 600 individuals in Massachusetts and Georgia to date. Parker&rsquo;s new Google funding will enable the team to iterate on and expand the dissemination of this survey, conduct longitudinal analyses, and complement the quantitative analysis with a qualitative component that will help unpack the nuances behind information-seeking practices and resulting Covid-19 attitudes, behaviors, and mental health outcomes.</p><p>This funding is part of Google.org&rsquo;s $100 million commitment to Covid-19 relief efforts.&nbsp;Organizations receiving funds were selected through a competitive review. 
Funding focus areas include health equity, disease spread monitoring and forecasting, frontline health worker support, secondary public health effects, and privacy-preserving contact tracing efforts.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1600112797</created>  <gmt_created>2020-09-14 19:46:37</gmt_created>  <changed>1600112797</changed>  <gmt_changed>2020-09-14 19:46:37</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Populations including older adults, low-income households, and Black and Hispanic adults have disproportionately high fatality rates, as well as less access to critical pandemic information.]]></teaser>  <type>news</type>  <sentence><![CDATA[Populations including older adults, low-income households, and Black and Hispanic adults have disproportionately high fatality rates, as well as less access to critical pandemic information.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-09-14T00:00:00-04:00</dateline>  <iso_dateline>2020-09-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-09-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>639090</item>      </media>  <hg_media>          <item>          <nid>639090</nid>          <type>image</type>          <title><![CDATA[Covid-19 Google Grant]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[coronavirus-4981906_1920.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/coronavirus-4981906_1920.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/coronavirus-4981906_1920.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/coronavirus-4981906_1920.jpg?itok=u64mGjcj]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Two women wearing masks during Covid-19 pandemic]]></image_alt>                    <created>1600112099</created>          <gmt_created>2020-09-14 19:34:59</gmt_created>          <changed>1600112099</changed>          <gmt_changed>2020-09-14 19:34:59</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="184821"><![CDATA[cc-research; ic-hcc; ic-ai-ml; COVID-19]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="639077">  <title><![CDATA[Georgia Tech Part of $5 Million Grant to Develop AI Tech Supporting Individuals With Autism Spectrum Disorder in the Workplace]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The <a href="http://nsf.gov">National Science Foundation</a> has awarded a $5 million grant to a multi-university team of researchers that includes <a href="http://gatech.edu">Georgia Tech</a> to create novel artificial intelligence technology that trains and supports individuals with Autism 
Spectrum Disorder (ASD) in the workplace.</p><p>The investment follows a successful $1 million, nine-month pilot grant to the same team, which also includes Yale University, Cornell University, Vanderbilt University, and the Vanderbilt University Medical Center. Georgia Tech&rsquo;s portion of the grant is $500,000.</p><p>Led by co-principal investigator Professor <strong>Jim Rehg</strong> of the <a href="http://ic.gatech.edu">School of Interactive Computing</a>, Georgia Tech will develop methods for assessing nonverbal communication behaviors during face-to-face social interactions such as job interviews.</p><p>&ldquo;Our innovative approach uses an unobtrusive wearable camera to record social behaviors, which are then analyzed using computer vision and deep learning models,&rdquo; Rehg said. &ldquo;Our automated analysis will allow job seekers to get feedback on their communication skills as part of our team&rsquo;s integrated approach to job interview coaching.&rdquo;</p><p>The project, which is part of the NSF&rsquo;s <a href="https://www.nsf.gov/od/oia/convergence-accelerator/">Convergence Accelerator</a> program, addresses an underutilized U.S. 
talent pool that poses a &ldquo;critical but overlooked public health and economic challenge: how to include individuals with ASD&rdquo; in the workforce, according to Vanderbilt Professor <strong>Nilanjan Sarkar</strong>, who is leading the project team.</p><p>Consider:</p><ul><li>One in 54 people in the United States has ASD;</li><li>Each year 70,000 young adults with ASD leave high school and face grim employment prospects;</li><li>More than 8 in 10 adults with ASD are either unemployed or underemployed, a significantly higher rate than adults with other developmental disabilities;</li><li>The estimated lifetime cost of supporting an individual with ASD and limited employment prospects is $3.2 million;</li><li>The total estimated cost of caring for Americans with ASD was $268 billion in 2015 and is projected to grow to $461 billion in 2025;</li><li>An estimated $50,000 per person per year could be contributed back into society when individuals with ASD are employed.</li></ul><p>&ldquo;We want to harness the power of AI, stakeholder engagement and convergent research to include neurodiverse individuals in the 21<sup>st</sup> century workforce,&rdquo; Sarkar said. &ldquo;We feel that there is a big opportunity to turn great societal cost into great societal value.&rdquo;</p><p>For this project, organizational, clinical and implementation experts are integrated with engineering teams to pave the way for real-world impact. 
The multi-university, multi-disciplinary team already has commitments from major employers to license some of the technology and tools developed.</p><p>Researchers will address three themes:</p><ul><li>Individualized assessment of unique abilities and appropriate job-matching</li><li>Tailored understanding and ongoing support related to social communication and interaction challenges</li><li>Tools to support job candidates, employees, and employers</li></ul><p>Already, notable private-sector companies that employ people with ASD have committed to using at least one of the technologies developed under this program: Auticon, The Precisionists, Ernst &amp; Young and SAP among them.</p><p>Two other companies, Floreo and Tipping Point Media, will make their existing VR modules available for adaptation to the program. Microsoft, which has a long-standing interest in hiring people with ASD, is involved as well and provided seed funding and access to cloud services for technology integration.</p><p>The five technologies can be used separately or as an integrated system, and the work has broader potential beyond ASD to expand employment access. In the U.S. 
alone, an estimated 50 million people have ASD, attention-deficit/hyperactivity disorder, a learning disability, or other neurodiverse conditions.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1600105921</created>  <gmt_created>2020-09-14 17:52:01</gmt_created>  <changed>1600105921</changed>  <gmt_changed>2020-09-14 17:52:01</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech will develop methods for assessing nonverbal communication behaviors during face-to-face social interactions such as job interviews.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech will develop methods for assessing nonverbal communication behaviors during face-to-face social interactions such as job interviews.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-09-14T00:00:00-04:00</dateline>  <iso_dateline>2020-09-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-09-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>590844</item>      </media>  <hg_media>          <item>          <nid>590844</nid>          <type>image</type>          <title><![CDATA[Child Study Lab Autism Research]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Autism5.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Autism5.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Autism5.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Autism5.jpg?itok=JYiOtuat]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Lab coordinator Audrey Southerland, along with undergraduate assistants, leads data collection at the Child Study Lab.]]></image_alt>                    <created>1493061979</created>          <gmt_created>2017-04-24 19:26:19</gmt_created>          <changed>1493061979</changed>          <gmt_changed>2017-04-24 19:26:19</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182941"><![CDATA[cc-research; ic-cybersecurity; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="638703">  <title><![CDATA[Welcome New IC Faculty: Seven Join School from Variety of Research Areas]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Each year, the School of Interactive Computing conducts a rigorous search for the brightest minds to carry forward its academic and research initiatives. This year, IC welcomes seven new faculty members to that mission. Take a quick glance at the new research&nbsp;coming to the School in 2020.</p><p><strong>Sehoon Ha</strong></p><p>Ph.D. 
in Computer Science, Georgia Tech 2015</p><p>Research interests: Robotics, Artificial Intelligence, Character Animation</p><p>Ha&rsquo;s research lies at the intersection of computer graphics and robotics, including physics-based animation, deep reinforcement learning, and computational robot design. Specifically, he has published work that addresses the need for more intelligent control software in robotics to improve agility, robustness, efficiency, and safety. In the long term, he aims to develop robotic companions for the home, search-and-rescue robots for disaster recovery scenes, and custom medical surgery robots that are tailored to individual patients.</p><p><strong>Jennifer Kim</strong></p><p>Ph.D. in Computer Science, University of Illinois, Urbana-Champaign 2019</p><p>Research interests: Human-Computer Interaction, Interactive Systems, Health Care</p><p>Kim&rsquo;s research investigates and develops interactive systems as communication artifacts to address various health-related challenges such as financial burdens of medical costs, difficulties in understanding behaviors of people with neurological disorders, and online health misinformation.</p><p><strong>Chris Le Dantec</strong></p><p>Ph.D. in Human-Centered Computing, Georgia Tech 2011</p><p>Research interests: Digital Media, Science and Technology Studies</p><p>Le Dantec is interested in developing community-based design practices that support new forms of collective action through production and use of civic data. Specifically, his research has direct impact on how policy makers and citizens work together to address issues of community engagement, social justice, urban transportation, and development.</p><p><strong>Andrea Grimes Parker</strong></p><p>Ph.D. 
in Human-Centered Computing, Georgia Tech 2011</p><p>Research interests: Human-Computer Interaction, Computer Supported Cooperative Work, Health Informatics</p><p>Grimes Parker designs and evaluates the impact of software tools that help people manage their health and wellness with a particular focus on equity. She studies racial, ethnic and economic health disparities, and the social context of health management. Through technology design, her research examines intrapersonal, social, cultural, and environmental factors that influence a person&rsquo;s ability and desire to make healthy decisions.</p><p><strong>Alan Ritter</strong></p><p>Ph.D. in Computer Science and Engineering, University of Washington 2013</p><p>Research interests: Natural Language Processing, Information Extraction, Machine Learning</p><p>Ritter&rsquo;s research aims to solve challenging technical problems that can help machines learn to read vast quantities of text with minimal supervision. Past work included a system that reads millions of tweets for mentions of new software vulnerabilities. This tool spotted critical security flaws in software. He is also interested in data-driven dialogue agents that can converse with people more naturally.</p><p><strong>Sashank Varma</strong></p><p>Ph.D. in Cognitive Studies, Vanderbilt University 2006</p><p>Research interests: Abstract Mathematical Thinking, Memory Systems Supporting Language Processing, Computational Models of High-Level Cognition</p><p>Varma&rsquo;s research investigates complex forms of cognition that are uniquely human from multiple disciplinary perspectives. Primarily, this involves mathematical cognition, where he investigates how people use symbol systems to understand abstract mathematical concepts, how they develop intuitions about and insights into mathematics, and the mental mechanisms shared between reasoning and algorithmic thinking.</p><p><strong>Wei Xu</strong></p><p>Ph.D. 
in Computer Science, New York University 2014</p><p>Research Interests: Natural Language Processing, Machine Learning, Social Media</p><p>Xu&rsquo;s recent work focuses on methods to understand the varied expressions in human language and to generate paraphrases for applications, such as reading and writing assistive technology. She has also worked on crowdsourcing, summarization, and information extraction for user-generated data, such as Twitter and StackOverflow.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1599066809</created>  <gmt_created>2020-09-02 17:13:29</gmt_created>  <changed>1599066809</changed>  <gmt_changed>2020-09-02 17:13:29</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Take a quick glance at the new research coming to the School of Interactive Computing in 2020.]]></teaser>  <type>news</type>  <sentence><![CDATA[Take a quick glance at the new research coming to the School of Interactive Computing in 2020.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-09-02T00:00:00-04:00</dateline>  <iso_dateline>2020-09-02T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-09-02 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>638702</item>      </media>  <hg_media>          <item>          <nid>638702</nid>          <type>image</type>          <title><![CDATA[New IC faculty 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[New IC Faculty 2020.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/New%20IC%20Faculty%202020.png]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/New%20IC%20Faculty%202020.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/New%2520IC%2520Faculty%25202020.png?itok=S8Iw3lLg]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Sashank Varma, Sehoon Ha, Chris Le Dantec, Wei Xu, Alan Ritter, Andrea Grimes Parker, Jennifer Kim]]></image_alt>                    <created>1599066470</created>          <gmt_created>2020-09-02 17:07:50</gmt_created>          <changed>1599066470</changed>          <gmt_changed>2020-09-02 17:07:50</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181216"><![CDATA[cc-research]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="638689">  <title><![CDATA[IC Student Ceara Byrne Trades Dog Toys for Masks to Chip in on Covid Relief]]></title>  <uid>33939</uid>  <body><![CDATA[<p>What do dog toys have to do with Covid-19 pandemic relief? Leave it to a Georgia Tech student to find a connection.</p><p>School of Interactive Computing Ph.D. 
student <strong>Ceara Byrne</strong>, whose primary research focuses on instrumenting dog toys with various sensors to measure canine behavior, found a way to contribute to the cause when she was approached by a fellow Georgia Tech student for assistance in 3D printing.</p><p><strong>Lee Whitcher</strong>, a Ph.D. student in the <a href="https://www.ae.gatech.edu/">Daniel Guggenheim School of Aerospace Engineering</a>, had already joined colleagues from the <a href="https://gtri.gatech.edu/">Georgia Tech Research Institute</a> and <a href="https://www.me.gatech.edu/">George W. Woodruff School of Mechanical Engineering</a> to design and manufacture personal protective equipment (PPE) like face shields to supplement the available supplies in the Atlanta area.</p><p>The work from GTRI and ME assisted in hospitals, and Whitcher&rsquo;s work &ndash; a non-profit called <a href="http://AtlantaBeatsCOVID.com">Atlanta Beats COVID</a> &ndash; aimed to design and produce masks and ventilators that could be produced by non-engineers wherever they are needed.</p><p>To do that, Whitcher and his partners needed a 3D printer that could cast the negatives for the masks. Georgia Tech&rsquo;s <a href="https://gvu.gatech.edu/">GVU</a> Prototyping Lab in the Technology Square Research Building had just what they needed. So did Byrne.</p><p>Byrne has been using the Prototyping Lab&rsquo;s printer for a while now to develop negatives of the silicone dog toys she uses in her research. Byrne&rsquo;s work involves studying behavior in canines to understand temperament for service animals.</p><p>&ldquo;I was inspired by a friend from high school who grew up on a ranch,&rdquo; Byrne said. &ldquo;She and I got involved in 4-H. When I came back for a master&rsquo;s degree, I started working with <strong>Thad Starner</strong> and <strong>Melody Jackson</strong> on the FIDO project. 
I started noticing these aspects of the data that were reflective of dog temperament like drive and how they tackle activities. It really interested me.&rdquo;</p><p>Part of the research was to find good ways to measure that temperament beyond just visual observation. One solution was to place sensors into toys to take measurements as the dog played with it.</p><p>&ldquo;I&rsquo;ve used the Prototyping Lab to 3D print my negative molds so that I can silicone cast the positives like balls and tug toys,&rdquo; Byrne said. &ldquo;It&rsquo;s a long process of finding the right silicones, materials, hardness.&nbsp; For the toys, I went through three or four different molds to find the right way to actually cast the parts. It was a lot of experimenting.&rdquo;</p><p>That experimentation made her uniquely prepared to chip in with Whitcher&rsquo;s project when Covid-19 hit. Looking for a way to develop the right mold for easy do-it-yourself mask production, Whitcher turned to Byrne for assistance.</p><p>&ldquo;There are a number of aspects to it,&rdquo; Byrne said. &ldquo;How do you de-gas some of the silicone? When you have a mask, you can&rsquo;t have the bubbles in the mold because you need a seal. How do you do it with the vacuum? If there&rsquo;s no vacuum available, what are some easier ways? How do we make these negatives properly, and how many can you cast at once? What are the environmental aspects when you do it from home?&rdquo;</p><p>These are all questions Byrne has had to explore when it comes to her dog toys. The experience proved useful in the mask production, as well.</p><p>Byrne was happy to get involved in pandemic relief assistance. She has brothers and sisters-in-law who are doctors.</p><p>&ldquo;They&rsquo;ve been amazing in helping around the community,&rdquo; she said. &ldquo;My brother is making masks, which I think is fascinating. He&rsquo;s a radiation oncologist and has built respiratory masks with the Pancreatic Cancer Foundation. 
So, I wanted to help out in any way that I could, as well.&rdquo;</p><p>Being at Georgia Tech, she said, made the collaboration a natural occurrence.</p><p>&ldquo;That&rsquo;s what makes Georgia Tech unique, right?&rdquo; she said. &ldquo;We can collaborate across these disciplines that maybe don&rsquo;t connect to each other on the surface.&rdquo;</p><p>Read more about the relief effort, how to request PPE, and how to get involved at <a href="http://AtlantaBeatsCOVID.com">AtlantaBeatsCOVID.com</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1599000797</created>  <gmt_created>2020-09-01 22:53:17</gmt_created>  <changed>1599000797</changed>  <gmt_changed>2020-09-01 22:53:17</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Byrne, whose work uses a 3D printer to make dog toys, is using her expertise to help in mask production.]]></teaser>  <type>news</type>  <sentence><![CDATA[Byrne, whose work uses a 3D printer to make dog toys, is using her expertise to help in mask production.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-09-01T00:00:00-04:00</dateline>  <iso_dateline>2020-09-01T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-09-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>638688</item>      </media>  <hg_media>          <item>          <nid>638688</nid>          <type>image</type>          <title><![CDATA[Ceara Byrne]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[heart-innovation.jpg]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/heart-innovation.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/heart-innovation.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/heart-innovation.jpg?itok=aXFSoV9_]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[ceara byrne]]></image_alt>                    <created>1598997204</created>          <gmt_created>2020-09-01 21:53:24</gmt_created>          <changed>1598997204</changed>          <gmt_changed>2020-09-01 21:53:24</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://ae.gatech.edu/news/2020/04/what-engineers-do-crisis]]></url>        <title><![CDATA[What Engineers Do in a Crisis]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="185769"><![CDATA[cc-research; ic-hcc; COVID-19]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="637711">  <title><![CDATA[Two IC Grads Earn Sigma Xi Best Ph.D. Thesis Awards]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Recent Georgia Tech Ph.D. 
graduates <strong>Caitlyn Seim</strong> and <strong>Aishwarya Agrawal</strong>, both from the School of Interactive Computing, were awarded the 2020 Sigma Xi Best Ph.D. Thesis Award. They were two of just 10 Ph.D. students at Georgia Tech recognized with the honor.</p><p>Seim&rsquo;s thesis, titled <em>Wearable Vibrotactile Stimulation: How Passive Stimulation Can Train and Rehabilitate</em>, presents a technique in which a vibrating wearable device is used to retrain motor function following debilitating occurrences of spinal fracture or stroke. Now a postdoc at Stanford University and a fellow with the National Institutes of Health, Seim is currently working with stroke survivors to develop accessible and functional wearable devices to reduce physical disability in both the upper and lower limbs.</p><p>&ldquo;Lately, I have also developed new mechanical tools to assess hand and arm function when there are no quantitative clinical tests available,&rdquo; Seim said. &ldquo;I plan to continue research on wearable and ubiquitous systems for health, accessibility, and training.&rdquo;</p><p>In Agrawal&rsquo;s thesis, titled <em>Visual Question Answering and Beyond</em>, she explores a multi-modal artificial intelligence task called visual question answering. In this task, given an image and natural language question about it, a machine is programmed to automatically produce an accurate natural language answer. 
The applications of VQA include aiding visually impaired users in understanding their surroundings, aiding analysts in examining large quantities of surveillance data, teaching children through interactive demos, interacting with personal AI assistants, and making visual social media content more accessible.</p><p>Now at DeepMind and soon to be an assistant professor at the University of Montreal and Mila, an AI research institute, Agrawal intends to equip current VQA systems with better skills to move toward artificial general intelligence.</p><p>&ldquo;In the long term, I am excited about science fiction becoming reality, when we all have smart virtual assistants that can see and talk and serve as an aid to visually impaired users,&rdquo; she said.</p><p>The eight other recipients of the Georgia Tech Sigma Xi Best Ph.D. Thesis Award were Mingue Kim (ECE), Ming Zhao (Chemistry), Andres Caballero (BME), Ke (Chris) Liu (CEE), Monica McNerney (ChBE), Chris Sugino (ME), Hamid Reza Seyf (ME), and Eric Tervo (ME).</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1597067162</created>  <gmt_created>2020-08-10 13:46:02</gmt_created>  <changed>1597067162</changed>  <gmt_changed>2020-08-10 13:46:02</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[They were two of just 10 Ph.D. students at Georgia Tech recognized with the honor.]]></teaser>  <type>news</type>  <sentence><![CDATA[They were two of just 10 Ph.D. 
students at Georgia Tech recognized with the honor.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-08-10T00:00:00-04:00</dateline>  <iso_dateline>2020-08-10T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-08-10 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>637710</item>      </media>  <hg_media>          <item>          <nid>637710</nid>          <type>image</type>          <title><![CDATA[Aishwarya Agrawal and Caitlyn Seim]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Personal Vlog YouTube Thumbnail.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Personal%20Vlog%20YouTube%20Thumbnail.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Personal%20Vlog%20YouTube%20Thumbnail.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Personal%2520Vlog%2520YouTube%2520Thumbnail.png?itok=Dq6mvFJQ]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Aishwarya Agrawal and Caitlyn Seim]]></image_alt>                    <created>1597067128</created>          <gmt_created>2020-08-10 13:45:28</gmt_created>          <changed>1597067128</changed>          <gmt_changed>2020-08-10 13:45:28</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group 
id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181639"><![CDATA[cc-research; ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="637122">  <title><![CDATA[Georgia Tech, 6 Collaborators Receive $5.9 Million NIH Grant for a National Center in AI-based mHealth Research]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech researchers will develop more effective and personalized treatment approaches for chronic health conditions under a new grant from the <a href="http://nih.gov">National Institutes of Health</a>.</p><p>The NIH is issuing $5.9 million in funding for a new national biomedical technology&nbsp;resource center (BTRC), called the mHealth Center for Discovery, Optimization &amp; Translation of Temporally-Precise Interventions (mDOT). Georgia Tech, one of seven collaborators on the project, will receive $500,000, and mDOT&nbsp;will be headquartered at the MD2K Center of Excellence at The University of Memphis.</p><p>One of the biggest drivers of the nation&rsquo;s rising healthcare spending is providing care for patients with chronic diseases, many of which are linked to daily behaviors such as dietary choices, sedentary behavior, stress, and addiction. The mDOT Center will be a new national technology resource for improving people&rsquo;s health and wellness. It will conduct cutting-edge AI research to produce easily deployable wearables, apps for wearables and smartphones, and a companion cloud system. 
mDOT&rsquo;s innovative technology will enable patients to initiate and sustain the healthy lifestyle choices necessary to prevent and/or successfully manage the growing burden of multiple chronic conditions.</p><p>Led by <strong>Jim Rehg</strong>, a professor in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu">School of Interactive Computing</a>, Georgia Tech&rsquo;s project will focus on analyzing streams of biomarker data to enable the development of more effective, personalized treatment approaches for health behaviors such as smoking and physical activity. To achieve this, the team will develop machine learning methods that can discover important risk factors from sensor data and identify effective intervention targets.</p><p>&ldquo;Consider developing an intervention to help people who are trying to quit smoking by providing personalized strategies for managing risk factors that are known to precipitate relapse,&rdquo; Rehg said. &ldquo;Researchers and practitioners would use our tools to analyze biomarker data and characterize the patterns that lead to relapse and identify potential intervention targets.&rdquo;</p><p>The collaboration can then use the tools provided by the other teams to develop and tailor an effective personalized stress intervention and deliver it efficiently on a mobile device. <strong>Omer Inan</strong>, a faculty member in Georgia Tech&rsquo;s <a href="http://ece.gatech.edu">School of Electrical and Computer Engineering</a>, will also collaborate with the team, leveraging work on novel non-invasive biosensors that detect cardiovascular changes in heart failure. 
Working alongside the mDOT team will enhance his ability to develop and deploy interventions based on these novel wearable sensors.</p><p>&ldquo;Researchers and industry innovators can leverage mDOT&rsquo;s technological resources to create the next generation of mHealth technology that is highly personalized to each user, transforming people&rsquo;s health and wellness,&rdquo; said <strong>Santosh Kumar</strong>, the lead investigator of mDOT, who is the director of the MD2K Center of Excellence and Lillian &amp; Morrie Moss Chair of Excellence Professor of Computer Science at the University of Memphis.</p><p>To ensure mDOT&rsquo;s innovative technology can be used by scientists to solve real-world problems, mDOT will work closely with more than a dozen other federally funded projects to engage in joint technology development, testing, and large-scale real-life deployment. To ensure that mDOT&rsquo;s technological resources can fuel innovation in the health technology industry, the mDOT Center is establishing a new industry consortium to provide access to mDOT&rsquo;s latest research and seek feedback to inform its ongoing research.</p><p>The mDOT Center will be administered by the National Institute of Biomedical Imaging and Bioengineering (NIBIB).</p><p>&ldquo;The mDOT Center will be the first <a href="https://www.nibib.nih.gov/research-funding/biomedical-technology-resource-centers">BTRC</a>&nbsp;focused on developing innovative mHealth technologies. 
It is positioned to empower scientists to discover, personalize, and deliver temporally-precise mHealth interventions and treatments, ensuring that health and wellness tools are delivered at the right moment, via the right personal device, and optimized to have the most influence,&rdquo; said mDOT&rsquo;s program officer&nbsp;<strong>Tiffani Lash</strong>, director of the NIBIB program in Connected Health.</p><p>The multidisciplinary mDOT team consists of leading researchers in artificial intelligence (AI), mobile computing, wearable sensors, privacy, and precision medicine from Harvard University, Georgia Institute of Technology, The Ohio State University, The University of Massachusetts-Amherst, The University of Memphis (lead), The University of California at Los Angeles, and The University of California at San Francisco.</p><p><strong>About MD2K:</strong>&nbsp;The Center of Excellence for Mobile Sensor Data-to-Knowledge (MD2K), headquartered in the FedEx Institute of Technology at The University of Memphis, was established in 2014 by a grant from the National Institutes of Health (NIH) under its Big-Data-To-Knowledge (BD2K) initiative. It has developed mobile sensor big data technologies to improve health and wellness. 
MD2K&rsquo;s open-source software platforms for smartphones and the cloud are used across the nation to conduct scientific studies.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1595277387</created>  <gmt_created>2020-07-20 20:36:27</gmt_created>  <changed>1595277387</changed>  <gmt_changed>2020-07-20 20:36:27</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The NIH is issuing $5.9 million in funding for a new national biomedical technology resource center (BTRC), called the mHealth Center for Discovery, Optimization & Translation of Temporally-Precise Interventions (mDOT).]]></teaser>  <type>news</type>  <sentence><![CDATA[The NIH is issuing $5.9 million in funding for a new national biomedical technology resource center (BTRC), called the mHealth Center for Discovery, Optimization & Translation of Temporally-Precise Interventions (mDOT).]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-07-20T00:00:00-04:00</dateline>  <iso_dateline>2020-07-20T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-07-20 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>592632</item>      </media>  <hg_media>          <item>          <nid>592632</nid>          <type>image</type>          <title><![CDATA[Rehg-Jim]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Rehg-Jim250.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Rehg-Jim250.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Rehg-Jim250.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Rehg-Jim250.jpg?itok=wYCBihGD]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[James Rehg]]></image_alt>                    <created>1497298524</created>          <gmt_created>2017-06-12 20:15:24</gmt_created>          <changed>1497298713</changed>          <gmt_changed>2017-06-12 20:18:33</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182525"><![CDATA[cc-research; ic-hcc; ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636549">  <title><![CDATA[C4G BLIS Update Improves Usability, Could Prove Useful in Fight Against Disease Outbreaks]]></title>  <uid>33939</uid>  <body><![CDATA[<p>An update to a laboratory information system used in countries across Africa is improving usability and could prove critical in response to future disease outbreaks.</p><p>In 2010, a group of researchers at <a href="http://gatech.edu/">Georgia Tech</a>, the CDC, and Ministries of Health in several African countries launched an open-source laboratory management system as part of the <a href="https://ptc.gatech.edu/computing-for-good-college-of-computing">College of Computing&rsquo;s Computing-for-Good</a> 
(C4G) initiative. Designed to be ultra-configurable to meet the variable needs of labs across developing countries with minimal training for staff, it quickly grew to become one of C4G&rsquo;s biggest success stories.</p><p>More than 10 nations in sub-Saharan Africa adopted the program, called the <a href="http://blis.cc.gatech.edu/">Basic Laboratory Information System</a> (BLIS), giving areas with little or no internet connectivity an easy-to-use system for staff with minimal computing experience. These countries, which had over 1 million patients at the time, were using paper-based systems to manage information on disease spread, local illnesses, and much more. As information and communications technologies have expanded in the area, however, many labs gained a standardized reporting system that could track prevalence rates of infections, slowing their spread.</p><p>But a lot can change in just 10 years. What was once designed for personal computing interfaces is now desired for a wide range of new platforms. Although laptops are still the device of choice for the majority of nurses &ndash; 79.6 percent reported in a study of a Nigerian hospital &ndash; smartphones and tablets have seen a steady increase. The coming years will include many more innovations that render even those obsolete.</p><p>As users in the global south aspire to embrace mobile computing in clinical settings, a flexible interface, adaptable to ever-changing applications, is needed.</p><p>Enter: <strong>Jung Wook Park</strong> and <strong>Aditi Shah</strong>, a Ph.D. student in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu/">School of Interactive Computing</a> (IC) and a former master&rsquo;s student in the <a href="http://scs.gatech.edu/">School of Computer Science</a> (SCS), respectively. 
Along with SCS Professor <strong>Santosh Vempala</strong> and IC Principal Research Scientist <strong>Rosa Arriaga</strong>, Park and Shah published research updating the current interface of C4G BLIS.</p><p>Their updates focused on a handful of key areas, primarily mobile support. A responsive user interface framework supporting various screen sizes and resolutions was developed and evaluated by real users at hospitals in Africa currently using BLIS.</p><p>They compared the user experience of the current interface with that of the proposed interface on both desktops and smartphones and found significant improvements on both platforms.</p><p>&ldquo;When you bring in a new system, they may feel uncomfortable with it,&rdquo; Park said. &ldquo;If we didn&rsquo;t do a great job, you might get the same score or lower at the beginning. Over time, we saw improvements of 32 and 34 percent on desktops and smartphones.&rdquo;</p><p>Shah, now at Microsoft, offered plenty of help in the development of the system, and her experience with a visual impairment allowed her to provide perspective on accessibility, as well.</p><p>The implications of this research extend far beyond ease of use for nurses, however. Park identified a growing problem across the globe in health care: communication. As the current pandemic illustrates, viruses and diseases can spread quickly across many different populations. It isn&rsquo;t sufficient to have just local data to mount an appropriate response; teams around the world must be able to rapidly share information.</p><p>A system like C4G BLIS, with its improved user interface that can be used across multiple platforms depending on the local needs of various communities, can help that communication.</p><p>&ldquo;If you notice something locally and maybe other areas of the country or continent notice something, how do you know if it is a pandemic?&rdquo; Park posed. 
&ldquo;You need to be able to share that information to manage the spread. By turning these local systems into a standardized cloud-based system, we can improve communication.&rdquo;</p><p>Already, Vempala said, he has heard reports from many labs that have adapted the flexible system to keep track of COVID-19 data in their communities.</p><p>The paper is titled <em>Redesigning a Basic Laboratory Information System for the Global South</em>, and was presented at the International Telecommunication Union Kaleidoscope conference, earning a Best Paper award.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1593116530</created>  <gmt_created>2020-06-25 20:22:10</gmt_created>  <changed>1593116530</changed>  <gmt_changed>2020-06-25 20:22:10</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A system that has helped bring digital record keeping to hospitals across Africa has received a needed update for new platforms like smartphones and tablets.]]></teaser>  <type>news</type>  <sentence><![CDATA[A system that has helped bring digital record keeping to hospitals across Africa has received a needed update for new platforms like smartphones and tablets.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-06-25T00:00:00-04:00</dateline>  <iso_dateline>2020-06-25T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-06-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>636548</item>      </media>  <hg_media>          <item>          <nid>636548</nid>          <type>image</type>          <title><![CDATA[Jung Wook Park and Aditi Shah]]></title>          
<body><![CDATA[]]></body>                      <image_name><![CDATA[Shah and Park Image.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Shah%20and%20Park%20Image.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Shah%20and%20Park%20Image.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Shah%2520and%2520Park%2520Image.png?itok=9c6uMWHV]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Jung Wook Park and Aditi Shah]]></image_alt>                    <created>1593116182</created>          <gmt_created>2020-06-25 20:16:22</gmt_created>          <changed>1593116182</changed>          <gmt_changed>2020-06-25 20:16:22</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="184890"><![CDATA[cc-research; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636275">  <title><![CDATA[Robots Gain Ability to Master Object Manipulation with Context-Aware Technique]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Institute of Technology researchers have developed one of the most robust research methods currently available to allow 
robots to correctly pick up common objects based on how they should be used.</p><p>Whereas humans might touch a hot pan on a stove once and never forget the lesson, it&rsquo;s more complex to train robots to apply such universal knowledge across different situations.</p><p>The new technique, called CAGE, or Context-Aware Grasping Engine, takes into consideration a range of factors &ndash; such as the task the object will be used for, whether the object is full or empty, what it&rsquo;s made of, and its shape &ndash; so that a robot can learn the right way to grasp various objects in a given context. For example, it allows a robot to learn not to hold a hot cup of tea by its opening, or to handle a cooking pot differently based on whether it just left a stovetop or a cabinet.</p><p>&ldquo;In order for robots to effectively perform object manipulation, a broad sense of contexts, including object and task constraints, needs to be accounted for,&rdquo; said&nbsp;<strong>Weiyu Liu</strong>, lead researcher on CAGE and Ph.D. student in robotics.</p><p>Using CAGE, a robot is able to apply what it has learned to objects it&rsquo;s never seen.&nbsp; For example, if trained to grasp a spatula by the handle to make a scooping motion, the robot is able to generalize this knowledge and know to grasp a mug by the handle and use it to scoop &mdash; if that was the programmed task &mdash; even if the robot has never encountered a mug before.</p><p>The research team, from the Robot Autonomy and Interactive Learning (RAIL) lab at Georgia Tech, validated their approach against three existing methods for teaching robots to handle objects. The team used a novel dataset consisting of 14,000 grasps for 44 objects, 7 tasks, and 6 different object states (e.g. 
objects that contained solids or liquids, or were empty).</p><p>CAGE outperformed the other methods in a simulation by statistically significant margins, according to the researchers, highlighting the model&rsquo;s ability to collectively reason about contextual information.</p><p>CAGE had an 86 percent success rate when averaged across tests looking at how well it identified context-aware grasps and whether the model could generalize to new objects a robot had not seen previously. Among the existing methods, the highest success rate averaged 69 percent.</p><p>Liu said that the team&rsquo;s model can rank grasp &ldquo;candidates&rdquo; for various contexts, ensuring that more suitable candidates are ranked higher than less suitable ones given a context. So a robot might, for example, learn to hand a sharp metal knife to a person handle-first, but hand over a plastic knife in any orientation due to its relative safety.</p><p>A final experiment evaluated the effectiveness of CAGE using a Fetch robot equipped with a camera, moving arm, and a parallel-jaw gripper. It performed almost perfectly in making a judgment on how to grasp objects for several distinct tasks, including scooping, pouring, lifting, and handing over an object, among others. In every case where there was no suitable grasp for the given situation, the robot correctly made no attempt.</p><p>The work, developed by Liu,&nbsp;<strong>Angel Daruna</strong>, and&nbsp;<strong>Sonia Chernova</strong>, was accepted into the&nbsp;International Conference on Robotics and Automation, taking place virtually this June. 
The paper is titled&nbsp;<a href="http://rail.gatech.edu/assets/files/Liu_ICRA20.pdf" rel="noopener noreferrer" target="_blank">CAGE: Context-Aware Grasping Engine</a>&nbsp;and the research data is publicly available at&nbsp;<a href="https://github.com/wliu88/rail_semantic_grasping">https://github.com/wliu88/rail_semantic_grasping</a>.</p><p><em>This work is supported in part by NSF IIS 1564080, NSF GRFP DGE-1650044, and ONR N000141612835. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.</em></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1592341318</created>  <gmt_created>2020-06-16 21:01:58</gmt_created>  <changed>1592342110</changed>  <gmt_changed>2020-06-16 21:15:10</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Institute of Technology researchers have developed one of the most robust research methods currently available to allow robots to correctly pick up common objects based on how they should be used.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Institute of Technology researchers have developed one of the most robust research methods currently available to allow robots to correctly pick up common objects based on how they should be used.]]></sentence>  <summary><![CDATA[<p>Georgia Institute of Technology researchers have developed one of the most robust research methods currently available to allow robots to correctly pick up common objects based on how they should be used.</p>]]></summary>  <dateline>2020-06-16T00:00:00-04:00</dateline>  <iso_dateline>2020-06-16T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-06-16 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a 
href="mailto:jpreston@cc.gatech.edu?subject=CAGE%20algorithm%3B%20ICRA%202020">Joshua Preston</a><br />Research Communications Manager<br />GVU Center and College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>636277</item>          <item>636276</item>      </media>  <hg_media>          <item>          <nid>636277</nid>          <type>image</type>          <title><![CDATA[One Step Closer to Domestic Robots | ICRA 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[robot coffee graphic_mercury.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/robot%20coffee%20graphic_mercury.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/robot%20coffee%20graphic_mercury.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/robot%2520coffee%2520graphic_mercury.png?itok=UbtlmJiF]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1592341649</created>          <gmt_created>2020-06-16 21:07:29</gmt_created>          <changed>1592341649</changed>          <gmt_changed>2020-06-16 21:07:29</gmt_changed>      </item>          <item>          <nid>636276</nid>          <type>image</type>          <title><![CDATA[Sonia Chernova with robot arm]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[sonia chernova.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/sonia%20chernova.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/sonia%20chernova.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/sonia%2520chernova.jpg?itok=nCEmH-ut]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1592341598</created>          <gmt_created>2020-06-16 21:06:38</gmt_created>          <changed>1592341598</changed>          <gmt_changed>2020-06-16 21:06:38</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.youtube.com/watch?v=EnHUHQv8hr0&amp;feature=emb_logo]]></url>        <title><![CDATA[CAGE: Context-Aware Grasping Engine]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636196">  <title><![CDATA[ML@GT Faculty Members Will Discuss Projects Related to Covid-19 Relief During Virtual Panel]]></title>  <uid>34773</uid>  <body><![CDATA[<p>The coronavirus (Covid-19) pandemic has wreaked havoc on the world, spurring researchers across disciplines into action to help humankind. Four researchers affiliated with the <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a> and one <a href="https://omscs.gatech.edu/">Online Master of Science in Computer Science (OMSCS)</a> student examined different aspects of the virus&rsquo; impact. 
From creating forecasting models to studying the psychological impact of the disease, these researchers are helping people understand the virus.</p><p>On June 24, ML@GT faculty members <strong>Srijan Kumar</strong> (School of Computational Science and Engineering), <strong>Aditya Prakash</strong> (School of Computational Science and Engineering), <strong>Munmun De Choudhury</strong> (School of Interactive Computing), <strong>Nicoleta Serban</strong> (H. Milton Stewart School of Industrial and Systems Engineering), and OMSCS student <strong>Kenneth Miller</strong> will participate in a virtual panel discussing their work. The panel will be moderated by ML@GT executive director <strong>Irfan Essa</strong>.</p><p>Panelists will give individual presentations before participating in a general question-and-answer segment with audience members.</p><ul><li>Kumar and De Choudhury will share details of their work regarding the <a href="http://ml.gatech.edu/hg/item/635397">psychological impact of Covid-19</a>. Kumar will also discuss his work examining <a href="https://www.cc.gatech.edu/news/635858/predicting-hate-crimes-targeting-asian-americans-amid-covid-19-outbreak">hate and counter-hate messages on Twitter against Asian Americans</a> during the pandemic.</li><li>Prakash is a member of the Centers for Disease Control and Prevention&rsquo;s (CDC) forecasting team and will share their <a href="https://www.cc.gatech.edu/news/635849/forecasting-covid-19-pandemic-united-states">new data-driven approach to disease forecasting</a>.</li><li>Serban&rsquo;s presentation will focus on her work creating an <a href="https://www.georgiahealthnews.com/2020/05/georgia-tech-model-predicts-spike-covid-cases-deaths/">agent-based simulation&nbsp;forecasting model</a>. 
This model captures the progression of the disease in an individual and in households, schools, communities, and workplaces.</li><li>A lawyer by day and OMSCS student by night, Miller participated in a Kaggle challenge using natural language processing and machine learning to <a href="https://www.cc.gatech.edu/news/635081/omscs-student-uses-machine-learning-help-understand-covid-19">help doctors and scientists read the most important studies</a> related to Covid-19.</li></ul><p>The panel will take place virtually via a Bluejeans Event at 11 a.m. on June 24 and is open to the public. <a href="https://primetime.bluejeans.com/a2m/register/sfpbpsgg">Registration is required</a>.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591969253</created>  <gmt_created>2020-06-12 13:40:53</gmt_created>  <changed>1592250730</changed>  <gmt_changed>2020-06-15 19:52:10</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.]]></teaser>  <type>news</type>  <sentence><![CDATA[Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-06-12T00:00:00-04:00</dateline>  <iso_dateline>2020-06-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-06-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>636195</item>      </media>  <hg_media>          <item>          <nid>636195</nid>          <type>image</type>          <title><![CDATA[Members of the ML@GT community will discuss their 
Covid-19 related research efforts in a panel discussion on June 24, 2020.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Using Machine Learning to Respond to Covid-19.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Using%20Machine%20Learning%20to%20Respond%20to%20Covid-19.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Using%20Machine%20Learning%20to%20Respond%20to%20Covid-19.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Using%2520Machine%2520Learning%2520to%2520Respond%2520to%2520Covid-19.png?itok=oemqhAhB]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.]]></image_alt>                    <created>1591969094</created>          <gmt_created>2020-06-12 13:38:14</gmt_created>          <changed>1591969094</changed>          <gmt_changed>2020-06-12 13:38:14</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="431631"><![CDATA[OMS]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>    
  </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636173">  <title><![CDATA[Research Conference Shows Social Challenges are Manifested, Magnified, and Mitigated Online at Pivotal Time for Nation]]></title>  <uid>27592</uid>  <body><![CDATA[<p>The value of online mental health communities, how crisis events are described differently over time on social media, and refining how cyberbullying is detected and classified are major topics of research by Georgia Institute of Technology researchers at this week&rsquo;s International Conference on Web and Social Media (ICWSM), taking place virtually. It was originally scheduled to be held in Atlanta near the Georgia Tech campus.</p><p>Over 220 academics at the 14<sup>th</sup> annual event are convening and discussing work that is especially relevant during a time of an ongoing global health crisis and social unrest that has taken root across the United States.</p><p>Research in the conference proceedings includes many topics directly addressing social ills and injustices that are magnified online as well as potential ways to better understand and mitigate them.</p><p>Several College of Computing faculty, current and former students, and postdoctoral researchers are part of the organizing committee. <strong>Munmun De Choudhury</strong> (Interactive Computing) is serving as the general chair of the conference this year. Former Human-Centered Computing PhD student <strong>Stevie Chancellor</strong> is workshop chair, former Computer Science PhD student <strong>Tanushree Mitra</strong> is tutorials chair, current CS PhD student <strong>Koustuv Saha</strong> is web chair, and current postdoc <strong>Talayeh Aledavood</strong> is local/social chair. 
CoC faculty <strong>Diyi Yang</strong> (Interactive Computing) and <strong>Srijan Kumar</strong> (Computational Science and Engineering) are data challenge chairs.</p><p>One of the two keynotes at the conference is by IC faculty <strong>Amy Bruckman</strong>.</p><p>Georgia Tech has three papers in this year&rsquo;s program:&nbsp;</p><ul><li>A study in causal inference by CS PhD student <strong>Koustuv Saha</strong> that tests what leads to favorable psychosocial outcomes in mental health forums.<br /><em>Link: </em><a href="https://aaai.org/ojs/index.php/ICWSM/article/view/7326"><em>https://aaai.org/ojs/index.php/ICWSM/article/view/7326</em></a></li><li>A paper by HCC PhD student <strong>Ian Stewart</strong>, with advisors <strong>Diyi Yang</strong> and <strong>Jacob Eisenstein</strong>, that intends to gather a sharper view of &ldquo;collective attention&rdquo; on social media. Looking at descriptive details for a crisis event, researchers find that the information needed to describe that event changes as time goes on.<br /><em>Link: </em><a href="https://aaai.org/ojs/index.php/ICWSM/article/view/7331"><em>https://aaai.org/ojs/index.php/ICWSM/article/view/7331</em></a></li><li>A socially-inspired approach to detect cyberbullying online, by incoming PhD student <strong>Caleb Ziems</strong>. The paper proposes new criteria for cyberbullying (e.g. harmful intent) and finds that both text and social features help prediction. 
This paper has been recognized with an Honorable Mention Award, given to a total of eight papers this year.<br /><em>Link: </em><a href="https://aaai.org/ojs/index.php/ICWSM/article/view/7345"><em>https://aaai.org/ojs/index.php/ICWSM/article/view/7345</em></a><br />&nbsp;</li></ul><p>For details about more research and to read the organizing committee&rsquo;s full statement on the commitment to Black Lives Matter, fighting structural racism, and promoting inclusion and equity, go to <a href="https://www.icwsm.org/2020/index.html">https://www.icwsm.org/2020/index.html</a>. In the wake of current events in the United States, the conference made 20 registration fee waivers available for Black scholars and individuals from other marginalized groups throughout the world, and provided scheduling flexibility to speakers and attendees participating in the Shutdown STEM walkout on June 10.</p><p>The conference is sponsored by the Association for the Advancement of Artificial Intelligence.</p><p>&nbsp;</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1591888855</created>  <gmt_created>2020-06-11 15:20:55</gmt_created>  <changed>1591889141</changed>  <gmt_changed>2020-06-11 15:25:41</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The value of online mental health communities, how crisis events are described differently over time on social media, and refining how cyberbullying is detected and classified are major topics of research from Georgia Tech at ICWSM 2020.]]></teaser>  <type>news</type>  <sentence><![CDATA[The value of online mental health communities, how crisis events are described differently over time on social media, and refining how cyberbullying is detected and classified are major topics of research from Georgia Tech at ICWSM 2020.]]></sentence>  <summary><![CDATA[<p>The value of online mental health communities, how crisis events are described differently over time on social media, and refining how 
cyberbullying is detected and classified are major topics of research by Georgia Institute of Technology researchers at this week&rsquo;s International Conference on Web and Social Media (ICWSM 2020).</p>]]></summary>  <dateline>2020-06-10T00:00:00-04:00</dateline>  <iso_dateline>2020-06-10T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-06-10 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu?subject=ICWSM%202020">Joshua Preston</a><br />Research Communications Manager<br />GVU Center and College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>636174</item>      </media>  <hg_media>          <item>          <nid>636174</nid>          <type>image</type>          <title><![CDATA[International Conference on Web and Social Media (ICWSM 2020)]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ICWSM 2020.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ICWSM%202020.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ICWSM%202020.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ICWSM%25202020.png?itok=5CNUhOx1]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1591888971</created>          <gmt_created>2020-06-11 15:22:51</gmt_created>          <changed>1591888971</changed>          <gmt_changed>2020-06-11 15:22:51</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU 
Center]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636110">  <title><![CDATA[Robotics Research Includes Advances in Systems Design, Applications, and other Key Areas]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Roboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA), a two-week live online virtual affair&nbsp;the first part of June, with research activities continuing through August.</p><p><a href="https://icra.cc.gatech.edu/" target="_blank">Georgia Tech is a leading contributor</a> with 42 papers that include research covering more than 60 subfields within robotics. 
Deep learning in robotics and automation is one of the top areas of research among authors from the College of Computing and College of Engineering, the two largest contributors to Georgia Tech&rsquo;s research at ICRA.</p><p>Based on the number of authors in each area, deep learning is computing&rsquo;s strongest subfield, while mechanism design is where engineering excels.</p><p><strong>Top 3 subfields for computing authors:</strong></p><ul><li>Deep Learning</li><li>Semantic Scene Understanding</li><li>Physically Assistive Devices and Network Devices (tie)</li></ul><p><strong>Top 3 subfields for engineering authors:</strong></p><ul><li>Mechanism Design</li><li>Learning and Adaptive Systems</li><li>Multi-Robot Systems</li></ul><p>The diversity of Georgia Tech robotics research at ICRA is also represented in the number of academic units and disciplines involved in advancing the field. Multidisciplinary teams often come together to tackle challenges that require a unique combination of skill sets.</p><p>There are nearly 100 Georgia Tech authors with work at ICRA. 
Explore the people, research, and trends from our community at <a href="https://icra.cc.gatech.edu/">https://icra.cc.gatech.edu/</a></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1591732965</created>  <gmt_created>2020-06-09 20:02:45</gmt_created>  <changed>1591733183</changed>  <gmt_changed>2020-06-09 20:06:23</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Roboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA).]]></teaser>  <type>news</type>  <sentence><![CDATA[Roboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA).]]></sentence>  <summary><![CDATA[<p>Roboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA), a two-week live online virtual affair&nbsp;the first part of June, with research activities continuing through August.</p>]]></summary>  <dateline>2020-06-09T00:00:00-04:00</dateline>  <iso_dateline>2020-06-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-06-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu?subject=ICRA">Joshua Preston</a><br />Research Communications Manager<br />College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>636111</item>      </media>  <hg_media>          <item>          <nid>636111</nid>          <type>image</type>          <title><![CDATA[ICRA 2020]]></title>          
<body><![CDATA[]]></body>                      <image_name><![CDATA[viz-icra_authors-by-kw.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/viz-icra_authors-by-kw.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/viz-icra_authors-by-kw.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/viz-icra_authors-by-kw.png?itok=a_wUCyY9]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1591733133</created>          <gmt_created>2020-06-09 20:05:33</gmt_created>          <changed>1591733133</changed>          <gmt_changed>2020-06-09 20:05:33</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="636082">  <title><![CDATA[Dellaert Awarded IEEE ICRA Milestone Award]]></title>  <uid>34773</uid>  <body><![CDATA[<p><strong>Frank Dellaert</strong>, a professor in the&nbsp;<a href="https://ic.gatech.edu/">School of Interactive Computing</a>, and affiliated with the&nbsp;<a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a>&nbsp;and&nbsp;<a href="https://gvu.gatech.edu/">GVU Center</a>, has been honored with the IEEE ICRA Milestone Award at the&nbsp;<a href="https://www.icra2020.org/">2020 IEEE International Conference on Robotics and Automation (ICRA.)</a></p><p>The award recognizes the 
most influential ICRA paper published between 1998 and 2002 and selected&nbsp;<a href="https://www.ri.cmu.edu/pub_files/pub1/dellaert_frank_1999_2/dellaert_frank_1999_2.pdf"><em>Monte Carlo Localization for Mobile Robots</em></a>&nbsp;as this year&rsquo;s recipient. Dellaert conducted this work during his Ph.D. studies at Carnegie Mellon University with&nbsp;<strong>Dieter Fox, Wolfram Burgard</strong>, and&nbsp;<strong>Sebastian Thrun</strong>.</p><p>&ldquo;It is a great honor to be recognized, but receiving a &lsquo;20 years on&rsquo; milestone award also makes you feel old!&rdquo; said Dellaert.</p><p>The paper was accepted to ICRA in 1999 and introduced the Monte Carlo Localization (MCL) method, also known as particle filter localization, which represents the probability density over a robot&rsquo;s possible positions by maintaining a set of samples randomly drawn from it. This method is faster, more accurate, and less memory-intensive than earlier grid-based methods and allows a robot to be localized without knowledge of its starting location.</p><p>MCL is simple to apply to the robotics domain, which has led to its popularity. It is now taught in robotics 101 classes around the world. Many mobile robots, including commercial efforts, rely on MCL for localization.</p><p>&ldquo;Simplicity is key for acceptance, and you cannot predict which of your research will have the most impact. This paper was a result of me procrastinating on my Ph.D. thesis, which is a paper almost nobody read. 
It is an enormous honor that MCL has made a lasting impact on our field,&rdquo; said Dellaert.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1591715351</created>  <gmt_created>2020-06-09 15:09:11</gmt_created>  <changed>1591715351</changed>  <gmt_changed>2020-06-09 15:09:11</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The award recognizes the most influential ICRA paper published between 1998-2002 and selected Monte Carlo Localization for Mobile Robots as this year’s recipient. ]]></teaser>  <type>news</type>  <sentence><![CDATA[The award recognizes the most influential ICRA paper published between 1998-2002 and selected Monte Carlo Localization for Mobile Robots as this year’s recipient. ]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-06-09T00:00:00-04:00</dateline>  <iso_dateline>2020-06-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-06-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>636081</item>      </media>  <hg_media>          <item>          <nid>636081</nid>          <type>image</type>          <title><![CDATA[Frank Dellaert, a professor in the School of Interactive Computing, and affiliated with the Machine Learning Center at Georgia Tech (ML@GT) and GVU Center, has been honored with the IEEE ICRA Milestone Award at the 2020 IEEE International Conference on Ro]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[frank-dellaert2.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/frank-dellaert2.jpeg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/frank-dellaert2.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/frank-dellaert2.jpeg?itok=mSPZ9FYL]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1591715211</created>          <gmt_created>2020-06-09 15:06:51</gmt_created>          <changed>1591715211</changed>          <gmt_changed>2020-06-09 15:06:51</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="635397">  <title><![CDATA[NSF Grant to Fund Georgia Tech Research into Psychological Impact of COVID-19]]></title>  <uid>33939</uid>  
<body><![CDATA[<p>Arguably the most visible of all prescriptions for the COVID-19 pandemic this year have been guidelines or imposed restrictions commonly referred to as &ldquo;social distancing.&rdquo; Less physical contact, the thinking goes, means a lowered risk of viral transmission.</p><p>Like the virus itself, however, stress and anxiety stemming from overconsumption of news or other media can spread through social networks. As the mental health fallout becomes clearer, are similar social media distancing recommendations needed to stem the flow through the online world?</p><p>A multidisciplinary team of researchers at Georgia Tech, Washington University-St. Louis, and the University of Wisconsin-Madison argues that these mental health implications of the pandemic are equally important, and <a href="https://www.nsf.gov/awardsearch/showAward?AWD_ID=2027689">a new grant from the National Science Foundation (NSF) has recently funded research to that effect</a>.</p><p>&ldquo;It&rsquo;s not just the fear and anxiety that I might get infected or I might infect or know someone who is infected,&rdquo; said <strong>Munmun De Choudhury</strong>, an associate professor in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu/">School of Interactive Computing</a> and the co-principal investigator on the project. &ldquo;It&rsquo;s all of these things around it that are furthering the psychological impact. It&rsquo;s very different from other kinds of illnesses or pandemics because of the uncertainty of the crisis. We simply don&rsquo;t know how long we are into it.&rdquo;</p><p>The grant is funded by the NSF&rsquo;s Rapid Response Research (RAPID) program, which is intended for research that addresses an immediate need within society. 
It has provided $200,000 toward the yearlong project.</p><p>The research will combine investigations in two separate environments: the online world, where news, personal posts, videos, and other media are shared rampantly across social networks, and the offline real world, where the epidemiological data about the spread of the virus or economic data about the financial fallout can be measured.</p><p>For the former, they will use social media data from various popular social platforms &ndash; Twitter, Reddit, and YouTube &ndash; to measure the spread of information and how consumers of it express themselves in terms of anxiety or fear, or what they are saying about their own psychological wellbeing.</p><p>&ldquo;How often are people expressing anger or fear or blaming someone through their posts?&rdquo; said <strong>Srijan Kumar</strong>, an assistant professor in Georgia Tech&rsquo;s <a href="http://cse.gatech.edu/">School of Computational Science and Engineering</a> and the other co-principal investigator. &ldquo;We&rsquo;ll develop new classifiers using natural language processing that will help us classify social posts into two categories: either anxiety-inducing or anxiety itself.&rdquo;</p><p>This is new territory, according to De Choudhury. Although there have been other pandemics such as the 1918 influenza epidemic, none of this magnitude have taken place during the digital/social age. And while social media provides an important mechanism for staying informed and remaining in contact with friends and loved ones during the difficult social distancing measures, overexposure could result in negative mental health consequences.</p><p>&ldquo;There is probably a sweet spot,&rdquo; De Choudhury said. 
&ldquo;Just like we need physical distancing in the real world, we probably need to practice distancing from social media or online information to an extent to avoid consuming too much anxiety-inducing media, while also staying informed.</p><p>&ldquo;If I say something, it doesn&rsquo;t just affect me. It affects all the people who read my posts. If they share it or if they post something, then it affects all of their social neighbors. It can be an outward ripple that affects people. We want to measure that, how they spread through social networks.&rdquo;</p><p>They&rsquo;ll compare that data with the other element: the offline world. Currently, people in New York City are likely more stressed and anxious in a different way than people in Georgia. New York has been the epicenter of the viral outbreak in the United States, meaning that much of the anxiety locally stems from the virus itself.</p><p><em>Will I contract the virus? Will someone I know contract the virus? Can I go to the store for groceries? How much disinfecting is required when I return home?</em></p><p>And then, you can tease out that geographical data. How are higher-income individuals stressed in comparison to lower-income? What about differences along racial lines? Data has shown higher mortality rates in African-Americans, for example, which leads to different fears than those in other communities.</p><p>In U.S. cities where there is also sufficient social media data, they will examine this offline data to see rates of infection, fatalities, when shelter-in-place was imposed, and more.</p><p>The final piece will be what they will do with this information. The goal is to create tools for social platforms to provide coping techniques or guidelines for use.</p><p>&ldquo;Maybe that might include encouraging you to limit the amount of time you spend on social media,&rdquo; Kumar said. &ldquo;Or, maybe you step out and do something with family members. Some kind of physical activity. 
Then we can begin to examine how people react to these messages. Do we see that their anxiety levels are coming down, or not?&rdquo;</p><p>&ldquo;In this time, we have a very unique lens to study this pandemic in a whole new light as opposed to other events of a global scale,&rdquo; De Choudhury said. &ldquo;There is no guarantee this won&rsquo;t come back. And even if it doesn&rsquo;t, something else will. Being able to have these tools built and available will better prepare us for the future.&rdquo;</p><p>For more coverage of Georgia Tech&rsquo;s response to the coronavirus pandemic, please visit our <a href="https://helpingstories.gatech.edu/">Responding to COVID-19 page</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1589560810</created>  <gmt_created>2020-05-15 16:40:10</gmt_created>  <changed>1591276187</changed>  <gmt_changed>2020-06-04 13:09:47</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A multidisciplinary team of researchers has received a grant from the NSF to study the mental health outcomes of COVID-19 through examination of social media activity and geographic epidemiological data.]]></teaser>  <type>news</type>  <sentence><![CDATA[A multidisciplinary team of researchers has received a grant from the NSF to study the mental health outcomes of COVID-19 through examination of social media activity and geographic epidemiological data.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-05-15T00:00:00-04:00</dateline>  <iso_dateline>2020-05-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-05-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  
<boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>635396</item>      </media>  <hg_media>          <item>          <nid>635396</nid>          <type>image</type>          <title><![CDATA[Munmun De Choudhury and Srijan Kumar]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[NSF RAPID GRANT - Munmun and Srijan.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/NSF%20RAPID%20GRANT%20-%20Munmun%20and%20Srijan.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/NSF%20RAPID%20GRANT%20-%20Munmun%20and%20Srijan.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/NSF%2520RAPID%2520GRANT%2520-%2520Munmun%2520and%2520Srijan.png?itok=xn0eHXsB]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Munmun De Choudhury and Srijan Kumar]]></image_alt>                    <created>1589560736</created>          <gmt_created>2020-05-15 16:38:56</gmt_created>          <changed>1589560736</changed>          <gmt_changed>2020-05-15 16:38:56</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="184821"><![CDATA[cc-research; ic-hcc; ic-ai-ml; COVID-19]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and 
Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="635593">  <title><![CDATA[IC Students Support Innovation in India through 'MakerGhat']]></title>  <uid>33939</uid>  <body><![CDATA[<p><strong>Azra Ismail</strong> was working with health workers in Delhi, India, when she had a realization. What she saw from locals in the community was that there was an intense desire for societal impact from many workers &ndash; and the ideas to go with it &ndash; but an absence of resources necessary to fully realize the innovation.</p><p>&ldquo;The experience that these health workers had in these communities provided unique perspectives and ideas that produced the kinds of ideas that could be relevant,&rdquo; said Ismail, now a Ph.D. student in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu">School of Interactive Computing</a>.</p><p>&ldquo;But because they were the lowest rung on the health infrastructure and were low income or low social class, those ideas weren&rsquo;t recognized and represented.&rdquo;</p><p>Around the same time, <a href="http://cc.gatech.edu">College of Computing</a> alumnus <strong>Aditya Vishwanath</strong>, now a doctoral student at Stanford University, had a similar realization. He was working with Asha Mumbai, a non-profit in a low-resourced slum in India&rsquo;s biggest city, using virtual reality to see how students appropriated and made sense of it. 
Like Ismail, he recognized a group of students who had unique viewpoints and drive, but too few resources to realize them.</p><p>Knowing how important it is to support innovation from those who understand the specific needs of a community, the two of them founded <a href="https://makerghat.org/space">MakerGhat,</a> a non-profit with the mission to take ideas from concept to creation and application where they are needed most: the communities they serve.</p><p>Situated in an impoverished neighborhood in Mumbai, MakerGhat is a community lab in which local students, young and old, can join to receive education and resources to put their ideas into practice. Makers join through subscription or scholarships if they are unable to afford membership. In exchange, they receive access to support ranging from an electronics room, a 3D printing and PC workstation, a science lab, a woodworking shop, and a design and workshop studio.</p><p>The space is intentionally unsophisticated. Enter the space, and you may find a mish-mash of supplies and painting on the walls, a far cry from the labs of the nearby Indian Institute of Technology-Bombay, one of the top technological universities in India.</p><p>&ldquo;We want people to be encouraged to try things and not afraid to break it,&rdquo; Ismail said. &ldquo;We don&rsquo;t want something that people are afraid to use.&rdquo;</p><p>If a maker can&rsquo;t find what they are looking for, they can turn to connections within the community to meet the need. Heavier equipment, for example, might require a trip to the local smith for welding.</p><p>&ldquo;Students coming in have family members in these other industries, so it sets up an informal infrastructure where the students know where to go for a specific need,&rdquo; Ismail said.</p><p>The model has resulted in a number of tangible outputs. 
In Summer 2019, a handful of interns from Georgia Tech, Stanford, and Smith College were able to take advantage of the Denning Global Engagement Seed Fund to support their travel to India. Interns were there not just to teach or run the lab, but to co-learn with locals. Collaborations between the technical expertise of the interns and the locally significant knowledge of the makers resulted in a handful of innovations that addressed local needs.</p><p>One collaboration resulted in a system that could compact plastic bottles to assist in a waste management challenge in Mumbai. Workers who collect waste locally and transport it to recycling plants to sell to companies or government institutions face challenges transporting plastic bottles, the most common waste item, which take up a lot of space.</p><p>Another created a community mapping platform to help identify local resources. Makers and interns went into the community and conducted surveys to find needs specific to different geographies.</p><p>&ldquo;A big part of this is engaging with the community to identify needs, current status quos, and how to approach the challenge,&rdquo; Ismail said. &ldquo;This happens in the schools too. What are the gaps that need to be addressed, and how can we help address them?&rdquo;</p><p>MakerGhat serves about 300 students weekly, ranging from young to old &ndash; it is open to any age or background. Many come from STEM fields, but others may be interested in math or art or fashion design.</p><p>&ldquo;It&rsquo;s a melting pot,&rdquo; Ismail said.</p><p>The goal is to turn MakerGhat into an incubator. As the first class of students graduates from the program, they will move on to other sources of education or work. 
Ismail said that she and her collaborators &ndash; who include Vishwanath, a team programmer, local leaders in finances and project resources, and a group of 10 or so volunteers &ndash; want to help build companies from the ideas and innovations that formed at MakerGhat.</p><p>&ldquo;The mission is to actually transform these students and community members into entrepreneurs,&rdquo; Ismail said. &ldquo;We want to take these creations to the next level and help them scale beyond their own community.&rdquo;</p><p>That might mean launching new MakerGhat centers elsewhere. The goal is to make the original&rsquo;s model open source so that other communities can replicate it &ndash; in India and beyond. While it may play out differently in each location depending on the community&rsquo;s needs, the organizational structure would be the same.</p><p>&ldquo;There&rsquo;s a misconception that great innovation only comes from these big tech companies or big universities,&rdquo; Ismail said. &ldquo;But we want to challenge that narrative. Many of the great ideas that can make significant impacts on society come from the people in these communities of need themselves.&rdquo;</p><p>Other members of the Georgia Tech community have contributed to the project. <strong>Neha Kumar</strong>, an assistant professor jointly appointed in the School of Interactive Computing and the Sam Nunn School of International Affairs, is an advisor. Students involved in a Makers-in-Residence program last summer were <strong>Ritesh Bhatt</strong>, <strong>Solum Onwuchekwa</strong>, and <strong>Josiah Mangiameli</strong>. <strong>Vishal Sharma</strong>, an incoming IC Ph.D. 
student, was also involved.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1590175035</created>  <gmt_created>2020-05-22 19:17:15</gmt_created>  <changed>1590175035</changed>  <gmt_changed>2020-05-22 19:17:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[MakerGhat is a local makerspace in India designed to cater specifically to low-resourced innovators.]]></teaser>  <type>news</type>  <sentence><![CDATA[MakerGhat is a local makerspace in India designed to cater specifically to low-resourced innovators.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-05-22T00:00:00-04:00</dateline>  <iso_dateline>2020-05-22T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-05-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>635592</item>      </media>  <hg_media>          <item>          <nid>635592</nid>          <type>image</type>          <title><![CDATA[MakerGhat]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[MakerGhat.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/MakerGhat.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/MakerGhat.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/MakerGhat.jpeg?itok=YMZIXw_e]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Makers paint walls at MakerGhat in India.]]></image_alt>                    <created>1590174252</created>     
     <gmt_created>2020-05-22 19:04:12</gmt_created>          <changed>1590174252</changed>          <gmt_changed>2020-05-22 19:04:12</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.cc.gatech.edu/content/researchers-work-kids-mumbai-examine-classroom-potential-virtual-reality]]></url>        <title><![CDATA[Researchers Work with Kids in Mumbai to Examine Classroom Potential of Virtual Reality]]></title>      </link>          <link>        <url><![CDATA[https://www.cc.gatech.edu/news/605000/vr-taking-students-where-once-only-ms-frizzle-and-magic-school-bus-could]]></url>        <title><![CDATA[VR Taking Students Where Once Only Ms. Frizzle and the Magic School Bus Could]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="184890"><![CDATA[cc-research; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="634312">  <title><![CDATA[Machine Learning Technique Helps Wearable Devices Get Better at Diagnosing Sleep Disorders and Quality]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Getting diagnosed with a sleep disorder or assessing quality of sleep is an often expensive and tricky proposition, involving sleep clinics where patients are hooked up to sensors and wires for monitoring.</p><p>Wearable devices, such as the Fitbit and Apple Watch, 
offer less intrusive and more cost-effective sleep monitoring, but the tradeoff can be inaccurate or imprecise sleep data.</p><p>Researchers at the Georgia Institute of Technology are working to combine the accuracy of sleep clinics with the convenience of wearable computing by developing machine learning models, or smart algorithms, that provide better sleep measurement data as well as considerably faster, more energy-efficient software.&nbsp;</p><p>The team is focusing on ambient electrical noise&nbsp;that is emitted by devices but that is often not audible and can interfere with sleep sensors on a wearable gadget. Leave the TV on at night, and the electrical signal &ndash; not the infomercial in the background &ndash; might mess with your sleep tracker.</p><p><a href="https://cse.gatech.edu/news/616715/new-deep-learning-approach-improves-access-sleep-diagnostic-testing">[Related News:&nbsp;New Deep Learning Approach Improves Access to Sleep Diagnostic Testing]</a></p><p>These additional electrical signals are problematic for wearable devices that typically have only one sensor to measure a single biometric data point, normally heart rate. A device picking up signals from ambient electrical noise skews the data and leads to potentially misleading results.&nbsp;</p><p>&ldquo;We are building a new process to help train [machine learning] models to be used for the home environment and help address this and other issues around sleep,&rdquo; said&nbsp;<strong>Scott Freitas</strong>, a second-year machine learning Ph.D. student and co-lead author of a newly published&nbsp;<a href="https://arxiv.org/pdf/2001.11363.pdf" target="_blank">paper</a>.</p><p>The team employed adversarial training in tandem with spectral regularization, a technique that makes neural networks more robust to this kind of electrical noise in the input data.
This means that the system can accurately assess sleep stages even when an EEG signal is corrupted by additional signals from devices like a TV or washing machine.</p><p>Using machine-learning methods such as sparsity regularization, the new model can also reduce the amount of time it takes to gather and analyze data, as well as increase the energy efficiency of the wearable device.</p><p>The researchers are testing with a product worn on the head but hope to also integrate it into smartwatches and bracelets. Results would then be transmitted to a person&rsquo;s doctor to analyze and provide a diagnosis. This could result in fewer visits to the doctor, reducing the cost, time, and stress involved with receiving a sleep disorder diagnosis.</p><p>Another issue that the researchers are looking at is reducing the&nbsp;number&nbsp;of sensors needed to accurately track sleep.&nbsp;</p><p>&ldquo;When someone visits a sleep clinic, they are hooked up to all kinds of monitors and wires to gather data ranging from brain activity on EEGs, heart rate, and more. Wearable tech only monitors heart rate with one sensor. The one sensor is more ideal and comfortable, so we are looking for a way to get more data without adding more wires or sensors,&rdquo; said&nbsp;<strong>Rahul Duggal</strong>, a second-year computer science Ph.D.
student and co-lead author.</p><p>The team&rsquo;s work is published in the paper&nbsp;<em>REST: Robust and Efficient Neural Networks for Sleep Monitoring in the Wild</em>,&nbsp;accepted to the&nbsp;<a href="https://www2020.thewebconf.org/" target="_blank">International World Wide Web Conference&nbsp;(WWW)</a>, scheduled to take place April 20 through 24 in Taipei, Taiwan.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1586800028</created>  <gmt_created>2020-04-13 17:47:08</gmt_created>  <changed>1590067825</changed>  <gmt_changed>2020-05-21 13:30:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[ML@GT researchers are improving the accuracy and efficiency of devices used to track sleeping using machine learning techniques.]]></teaser>  <type>news</type>  <sentence><![CDATA[ML@GT researchers are improving the accuracy and efficiency of devices used to track sleeping using machine learning techniques.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-15T00:00:00-04:00</dateline>  <iso_dateline>2020-04-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>634311</item>      </media>  <hg_media>          <item>          <nid>634311</nid>          <type>image</type>          <title><![CDATA[ML@GT researchers are improving the accuracy and efficiency of devices used to track sleeping using machine learning techniques.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[kinga-cichewicz-5NzOfwXoH88-unsplash.jpg]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/kinga-cichewicz-5NzOfwXoH88-unsplash.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/kinga-cichewicz-5NzOfwXoH88-unsplash.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/kinga-cichewicz-5NzOfwXoH88-unsplash.jpg?itok=en4Z4sps]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[woman sleeping]]></image_alt>                    <created>1586799743</created>          <gmt_created>2020-04-13 17:42:23</gmt_created>          <changed>1586799743</changed>          <gmt_changed>2020-04-13 17:42:23</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="184463"><![CDATA[sleep tracking]]></keyword>          <keyword tid="9167"><![CDATA[machine learning]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="365"><![CDATA[Research]]></keyword>      </keywords>  <core_research_areas>          <term tid="39451"><![CDATA[Electronics and Nanotechnology]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="634175">  <title><![CDATA[Four Machine Learning Faculty Members Earn Promotions and Tenure]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Four faculty members at the 
<a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech</a> have received promotions or been granted tenure.</p><p><strong>Jake Abernethy</strong> has been promoted to associate professor in the <a href="https://scs.gatech.edu/">School of Computer Science</a> and granted tenure. Abernethy&rsquo;s research focus is machine learning, where he enjoys discovering connections between optimization, statistics, and economics.</p><p>In 2011, he completed his Ph.D. at the University of California, Berkeley, before becoming a Simons postdoctoral fellow for the following two years. After the water crisis in Flint, Mich., Abernethy worked on <a href="https://www.cc.gatech.edu/~jabernethy9/flint/">detecting lead contamination and infrastructure remediation</a>. Prior to studying and teaching machine learning, Abernethy performed comedy and juggling shows, opening for Sinbad and Dave Chappelle.</p><p><strong>Munmun De Choudhury</strong> has been promoted to associate professor in the <a href="https://ic.gatech.edu/">School of Interactive Computing</a> and granted tenure. De Choudhury is also affiliated with the <a href="http://gvu.gatech.edu/">GVU</a> Center and <a href="http://ipat.gatech.edu/">Institute for People and Technology (IPaT)</a> and leads the <a href="http://socweb.cc.gatech.edu/">Social Dynamics and Wellbeing Lab (SocWeb Lab)</a>. De Choudhury studies problems at the intersection of computer science and social media, building computational methods and artefacts to help understand human behaviors and psychological states and how they manifest online.</p><p>Prior to joining Georgia Tech in 2014, De Choudhury was a postdoctoral researcher in the nexus group at Microsoft Research, Redmond. In 2011, she received her Ph.D. from Arizona State University, Tempe.
After graduate school, De Choudhury spent time at Rutgers University and was a faculty associate with the Berkman Center for Internet and Society at Harvard University.</p><p><strong>Yajun Mei</strong> has been promoted to professor in the <a href="https://www.isye.gatech.edu/">H. Milton Stewart School of Industrial and Systems Engineering</a>. Mei&#39;s research interests include change-point problems and sequential analysis in mathematical statistics, as well as sensor networks and information theory in engineering. Mei also examines longitudinal data analysis, random effects models, and clinical trials in biostatistics.</p><p>Mei received his Ph.D. in mathematics from the California Institute of Technology in 2003. He has also worked as a postdoc in biostatistics at the Fred Hutchinson Cancer Research Center. Mei received the National Science Foundation (NSF) CAREER Award in 2010, a Best Paper Award at FUSION in 2008, and the prestigious Abraham Wald Prize in Sequential Analysis in 2009.</p><p><strong>Alex Endert</strong> has been promoted to associate professor and granted tenure in the School of Interactive Computing. Endert directs the <a href="https://gtvalab.github.io/">Visual Analytics Lab</a>, where he and his students apply fundamental research to&nbsp;domains including text analysis, intelligence analysis, cybersecurity, and decision-making, and explore novel user interaction techniques for visual analytics.</p><p>Endert earned his Ph.D. from Virginia Tech in 2012, and in 2013 his work on Semantic Interaction was awarded the IEEE VGTC VPG Pioneers Group Doctoral Dissertation Award and the Virginia Tech Computer Science Best Dissertation Award. In 2018, Endert received the NSF CAREER Award.</p><p>Editor&rsquo;s Note:&nbsp;<strong>Molei Tao</strong>&nbsp;has been promoted to associate professor with tenure in the School of Mathematics.
Tao is an applied and computational mathematician, designing algorithms for faster and more accurate computations and developing mathematical tools to analyze and design engineering systems or answer scientific questions.</p><p>He earned his Ph.D. in control and dynamical systems with a minor in physics from the California Institute of Technology, where he also worked as a postdoctoral researcher. He is the 2011 recipient of the W.P. Carey Ph.D. Prize in Applied Mathematics and received a 2019 NSF CAREER Award.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1586367437</created>  <gmt_created>2020-04-08 17:37:17</gmt_created>  <changed>1589223454</changed>  <gmt_changed>2020-05-11 18:57:34</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Four faculty members at the Machine Learning Center at Georgia Tech have received promotions or been granted tenure.]]></teaser>  <type>news</type>  <sentence><![CDATA[Four faculty members at the Machine Learning Center at Georgia Tech have received promotions or been granted tenure.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-08T00:00:00-04:00</dateline>  <iso_dateline>2020-04-08T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-08 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>634173</item>      </media>  <hg_media>          <item>          <nid>634173</nid>          <type>image</type>          <title><![CDATA[Four ML@GT faculty members earn promotions and tenure]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Spring 2020 ML Promotions.png]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/Spring%202020%20ML%20Promotions.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Spring%202020%20ML%20Promotions.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Spring%25202020%2520ML%2520Promotions.png?itok=G89xMzOo]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Congratulations Alex, Jake, Munmun, and Yajun]]></image_alt>                    <created>1586367276</created>          <gmt_created>2020-04-08 17:34:36</gmt_created>          <changed>1586367276</changed>          <gmt_changed>2020-04-08 17:34:36</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="635208">  <title><![CDATA[Social Media and Wellbeing: Does Bias in Self-Reported Data Impact Research?]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Along with the development of each new technological platform comes a series of questions designed to understand its ultimate impact on users&rsquo; wellbeing or performance. It&rsquo;s like clockwork.</p><p><em>Does watching too much television rot your child&rsquo;s brain? How much is too much when it comes to video games? 
Is our time spent on social media impacting our mental health?</em></p><p>These are all important questions, but how they are asked matters to the ultimate conclusions we can draw. It is well-established that the most commonly used methods in this area of research &ndash; user self-reports and survey questions &ndash; are prone to error. Now, new research from collaborators at Georgia Tech, Facebook, and the University of Michigan has shed light on the nature of that error &ndash; that is to say, whether users over- or underestimate their usage, who and which questions are more prone to error, and more.</p><p>Error in the data, said <a href="http://ic.gatech.edu">School of Interactive Computing</a> Ph.D. student&nbsp;<strong>Sindhu Ernala</strong>, can impact the inferences drawn from the data itself.</p><p>&ldquo;We know survey questions have several well-documented biases,&rdquo; Ernala said. &ldquo;People may not remember correctly. They can&rsquo;t keep up with their time. They remember recent things more accurately than those further in the past. All of this matters because error in measurement might impact the downstream inferences we make. Accurate assessments of social media use is critical because of the everyday impact it has on people&rsquo;s lives.&rdquo;</p><p>Indeed, Ernala and her collaborators found that these biases held up in many surveys. In a paper accepted to the <a href="http://chi.gatech.edu">2020 ACM Conference on Human Factors in Computing Systems</a> (CHI), they picked 10 of the most common survey questions in prior literature that investigate time spent on Facebook. The questions were asked in a variety of ways: open-ended or multiple choice, the frequency of visits or the total time spent. They put these 10 questions to 50,000 randomly selected users in 15 countries around the world.</p><p>With self-reported data in hand, they compared it to the actual server logs at Facebook to see how it stacked up.
Interestingly, people most often overestimated the time they spent on the platform and underestimated the number of times they visited. Specifically, in the 18-24 demographic, a common age range for research done at universities, there was even more error in self-reports.</p><p>&ldquo;This is important, because a lot of our research is done with these age samples,&rdquo; Ernala said.</p><p>With this information in mind, the researchers made a handful of recommendations to improve the data and, thus, the research around the data itself:</p><ol><li>As a researcher, if you are investigating time spent, consider using time-tracking applications as an alternative to self-reported time-spent measures. These applications include things like Apple&rsquo;s screen time feature or Facebook&rsquo;s &ldquo;Your Time on Facebook.&rdquo;<br />&nbsp;</li><li>If researchers want to use surveys, which often makes sense, consider using the phrasing with the lowest error or multiple-choice questions.</li></ol><p>The researchers caution against using time-spent self-reports directly, recommending instead that researchers interpret them as noisy estimates of where someone falls on a distribution. More important when determining wellbeing outcomes is&nbsp;<em>how</em>&nbsp;users actually spend their time on the platform.</p><p>&ldquo;Social platforms change and user habits change over time,&rdquo; Ernala said. &ldquo;The questions now might not be the best questions five or 10 years from now.
This is fluid, and we need to continue to look at this to make sure our past and future research is well-informed.&rdquo;</p><p>She and her collaborators hope to contribute positively to this ongoing process by providing some validated measures that can be used across studies, while understanding that these methods may change over time as user habits transform.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1588926987</created>  <gmt_created>2020-05-08 08:36:27</gmt_created>  <changed>1588926987</changed>  <gmt_changed>2020-05-08 08:36:27</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Error in the data, said School of Interactive Computing Ph.D. student Sindhu Ernala, can impact the inferences drawn from the data itself.]]></teaser>  <type>news</type>  <sentence><![CDATA[Error in the data, said School of Interactive Computing Ph.D. student Sindhu Ernala, can impact the inferences drawn from the data itself.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-05-08T00:00:00-04:00</dateline>  <iso_dateline>2020-05-08T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-05-08 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>624519</item>      </media>  <hg_media>          <item>          <nid>624519</nid>          <type>image</type>          <title><![CDATA[Social Media Logos]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Social Media logos.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Social%20Media%20logos.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Social%20Media%20logos.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Social%2520Media%2520logos.jpg?itok=9S-0Fb6s]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A keyboard featuring different social media logos]]></image_alt>                    <created>1565805908</created>          <gmt_created>2019-08-14 18:05:08</gmt_created>          <changed>1565805908</changed>          <gmt_changed>2019-08-14 18:05:08</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://chi.gatech.edu]]></url>        <title><![CDATA[CHI 2020 at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182508"><![CDATA[cc-research; ic-hcc; ic-social-computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="634469">  <title><![CDATA[IC Ph.D. 
Students Named 2020 Members of NSF Graduate Research Fellowship Program]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A pair of <a href="http://ic.gatech.edu">School of Interactive Computing</a> students was selected as 2020 members of the <a href="https://www.nsfgrfp.org/">National Science Foundation Graduate Research Fellowship Program</a> (NSF GRFP).</p><p>First-year Ph.D. students <strong>Daniel Bolya</strong> (advised by <strong>Judy Hoffman</strong>) and <strong>Joanne Truong</strong> (advised by <strong>Dhruv Batra</strong> and <strong>Sonia Chernova</strong>) were recognized by the program, which supports graduate students pursuing research-based Master&rsquo;s and doctoral degrees at United States institutions.</p><p>The NSF GRFP provides financial support for three years, comprising a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.</p><p>Bolya&rsquo;s work is in machine learning and computer vision. Recent work at Georgia Tech has focused on error profiling in instance segmentation and object detection models. His method, building upon previous work at MIT, is unique in that it captures all possible sources of error in a model, while properly weighing the importance of each. He plans to continue pursuing faster methods of instance segmentation that he can make accessible. Current methods are not practical for many applications due to limits in speed, accuracy, and data efficiency. His research addresses this challenge.</p><p>&ldquo;This is not just about computer vision,&rdquo; he said in his research statement.
&ldquo;Improving instance segmentation would impact the tech we use every day.&rdquo;</p><p>As with his earlier work at MIT, called YOLACT, he plans to release the project as open source once it is ready.</p><p>Truong&rsquo;s long-term research goal is to develop robots that can see, talk, reason, and act in complex human environments. Specifically, she will focus on a method called &ldquo;sim2robot transfer,&rdquo; which develops efficient domain adaptation techniques to enable pre-training of AI agents in simulators while ensuring that the learned skills generalize to a real robotic platform.</p><p>&ldquo;The overall goals of my research plan are to, one, break down the possible errors in simulation-to-reality transfer that result in a reality gap, and, two, close the loop between simulation and reality by using data collected on a real robot to finetune and optimize parameters in simulation,&rdquo; she said.</p><p>She worked on the first goal last fall, optimizing simulator settings for sim2real predictivity.
Currently, she is working on the second goal, developing domain adaptation techniques to enable low-shot adaptation between simulation and reality.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1587067366</created>  <gmt_created>2020-04-16 20:02:46</gmt_created>  <changed>1587067366</changed>  <gmt_changed>2020-04-16 20:02:46</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.]]></teaser>  <type>news</type>  <sentence><![CDATA[The NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-15T00:00:00-04:00</dateline>  <iso_dateline>2020-04-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>634467</item>      </media>  <hg_media>          <item>          <nid>634467</nid>          <type>image</type>          <title><![CDATA[Daniel Bolya and Joanne Truong]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Joanne and Daniel.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Joanne%20and%20Daniel.png]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Joanne%20and%20Daniel.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Joanne%2520and%2520Daniel.png?itok=VWxfQLjH]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Daniel Bolya and Joanne Truong]]></image_alt>                    <created>1587067147</created>          <gmt_created>2020-04-16 19:59:07</gmt_created>          <changed>1587067147</changed>          <gmt_changed>2020-04-16 19:59:07</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181639"><![CDATA[cc-research; ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="634055">  <title><![CDATA[Looking for Activities at Home? Try These Interactive Tools from IC Researchers]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The world is on lockdown right now, and we&rsquo;re all searching for new ways to occupy our time inside. 
With only so many times you can re-watch The Office (oh, who are we kidding &ndash; maybe just one more time through&hellip;), we thought it would be fun to share some of the interactive tools from our own researchers&rsquo; workshops.</p><p>Below, you&rsquo;ll find just a couple of the tools you can interact with online, giving you opportunities ranging from learning how to code to creating art. But this is just a start &ndash; we&rsquo;d love to hear from you.</p><p>If you&rsquo;re a Georgia Tech student or faculty member, submit your interactive tools to communications officer David Mitchell at <a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a>. We&rsquo;ll add to the list, share with our audience, and help everyone find some enjoyment during a difficult time.</p><p><strong>Create Your Own Generative Art Pieces &ndash; </strong>submitted by Devi Parikh</p><p>Looking for a new piece of art for your wall? With this tool, you can flex your creative muscles. Choose a style, adjust the values, colors, and properties, and generate a piece that would fit nicely in your home.</p><p>This work demonstrates a broader area of research into machine learning and creativity. The first piece of AI-generated art to go to auction sold for $432,500 in 2018.</p><p><strong>LINK: </strong><a href="https://cc.gatech.edu/~parikh/art.html">https://cc.gatech.edu/~parikh/art.html</a></p><p><strong>Interact with Visual Chatbot </strong>&ndash; submitted by Devi Parikh</p><p>Parikh&rsquo;s lab is doing research in an area called visual question answering. Developed in 2017, this demo allows you to upload an image and have a conversation with a chatbot about it. Pick out an image you&rsquo;ve taken or just grab one from the web and ask questions to see just how quickly and accurately this AI can perform the task.
This research is key to developing agents that can reason about specific tasks in the real world.</p><p><strong>LINK: </strong><a href="http://demo-visualdialog.cloudcv.org/">http://demo-visualdialog.cloudcv.org/</a></p><p><strong>Learn to Code Using EarSketch and TunePad</strong> &ndash; submitted by Brian Magerko</p><p>Have you been dying to learn how to code? There&rsquo;s no time like the present. Without the benefit of a classroom setting to learn all the ins and outs, you might find an approachable tool like EarSketch beneficial. EarSketch uses music to guide the learner. With sounds from the EarSketch library or your own uploads, along with Python or JavaScript to code, you can produce quality music online.</p><p>Like EarSketch, TunePad &ndash; developed in collaboration with Northwestern University &ndash; is a tool for creating music using the Python programming language. No knowledge of music or coding is required to get started. Get those musical juices flowing, and start creating.</p><p><strong>LINK: </strong><a href="http://earsketch.gatech.edu/">earsketch.gatech.edu</a></p><p><strong>Learn About Grasping Tasks Using this Online Tool </strong>&ndash; submitted by Samarth Brahmbhatt</p><p>This tool allows people to interactively explore how we grasp household objects. So, why is this important? Grasping is a key capability in the development of household robotics. In order to teach robots how to grab and use items in the house, we need to identify the most efficient approach. 
Explore this tool, which includes items from an apple to a doorknob to a video game controller.</p><p><strong>LINK: </strong><a href="https://contactdb.cc.gatech.edu/contactdb_explorer.html">https://contactdb.cc.gatech.edu/contactdb_explorer.html</a></p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1585958447</created>  <gmt_created>2020-04-04 00:00:47</gmt_created>  <changed>1585958447</changed>  <gmt_changed>2020-04-04 00:00:47</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[These are just a couple of the tools you can interact with online, giving you opportunities from learning how to code to creating art.]]></teaser>  <type>news</type>  <sentence><![CDATA[These are just a couple of the tools you can interact with online, giving you opportunities from learning how to code to creating art.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-03T00:00:00-04:00</dateline>  <iso_dateline>2020-04-03T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-03 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>444971</item>      </media>  <hg_media>          <item>          <nid>444971</nid>          <type>image</type>          <title><![CDATA[EarSketch]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[static1.squarespace.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/static1.squarespace_0.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/static1.squarespace_0.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/static1.squarespace_0.png?itok=i5HineHX]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[EarSketch]]></image_alt>                    <created>1449256205</created>          <gmt_created>2015-12-04 19:10:05</gmt_created>          <changed>1475895184</changed>          <gmt_changed>2016-10-08 02:53:04</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182940"><![CDATA[cc-research; ic-ai-ml; ic-robotics; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="633985">  <title><![CDATA[Pitch Perfect: GT Computing Undergrads Provide Automated Training Upgrade for Softball Team]]></title>  <uid>33939</uid>  <body><![CDATA[<p>There&rsquo;s a classic story that former Atlanta Braves pitching coach Leo Mazzone used to share about Hall-of-Famer Greg Maddux, one of the smartest hurlers of all time. 
Although the exact details have changed in retelling over time, it goes something like this:</p><p>Maddux, a meticulous documenter of pitch sequences and batter results throughout his career, once explained to Mazzone in between innings that the leadoff batter in the following frame would pop out to third base on the fourth pitch of the at-bat. He&rsquo;d start him with a fastball, change speeds for strike two, waste a pitch outside, and then induce the popup on a one-ball, two-strike count. Sure enough, a few minutes later, Maddux did exactly as he&rsquo;d said.</p><p>There are a couple of lessons here: One, Maddux was a wizard. Many pitchers over time have tried to replicate his impeccable approach to the game, but few have ever succeeded at that level; two, pitch sequence matters &ndash; perhaps more than how overpowering your fastball is or how sharp the break is on your curve.</p><p>Capitalizing on this intuition, a group of undergraduate students at <a href="http://gatech.edu">Georgia Tech</a> are working with the softball team to provide an automated upgrade to players&rsquo; training. Using the wealth of statistics kept by the team &ndash; pitch-by-pitch data for balls, strikes, types of pitches thrown, and results &ndash; they have trained an algorithm that can select the best pitch to throw in any given situation.</p><p>The tool is used by the coaches and pitchers for game planning purposes, generating daily reports after every game and practice to help inform coaches of trends in sequences and results.</p><p>&ldquo;In baseball and softball nowadays, data analytics has become such an incredibly important part of the game,&rdquo; said <strong>Jack Bennett</strong>, a third-year <a href="http://isye.gatech.edu">Industrial Engineering</a> student. &ldquo;Anything that can get them data to go into games more prepared. Technology is at the forefront of this.&rdquo;</p><p>They began using the approach during the 2019 season. 
Bennett and partners <strong>Zach Panzarino</strong> (third-year <a href="http://cc.gatech.edu">Computer Science</a>) and <strong>Ron Kushkuley</strong> (third-year <a href="http://coe.gatech.edu">Computer Engineering</a>) had demonstrated a similar capability at last year&rsquo;s Sports Innovation Hackathon using data for Atlanta Braves pitcher Mike Foltynewicz, finishing in third place. <strong>Doug Allvine</strong>, assistant athletics director for innovation at Georgia Tech, put the team in touch with softball coach <strong>Aileen Morales</strong>. Morales was interested, and the students were able to begin testing the approach.</p><p>It works like this: The softball team keeps track of its own data &ndash; not just player statistics, but pitch selections and results for every pitcher in every game throughout the season. That&rsquo;s a lot of data and can offer a lot of information. What happened when Pitcher X threw a 3-2 changeup to a left-handed batter?</p><p>But it goes a little deeper than that. Panzarino, Bennett, and Kushkuley found that the pitch sequence is what matters most. That follows the standard strategic thinking &ndash; a slider away can be more effective if set up by an inside fastball on the previous pitch, for example. What the algorithm does, however, is consider the order each pitch is thrown in the at-bat and provide a score for which pitch will be most effective based on past data.</p><p>&ldquo;We leverage sequences, the count, outs, everything,&rdquo; Panzarino said. &ldquo;Looking at the current state and the previous pitches, it will score all the potential future routes a pitcher can choose. We give them reports before each game so that they can prepare, and then we look at success or failure after the game.&rdquo;</p><p>After a test run a year ago, the students have honed the technology and are working with the team again this year. 
Qualitatively speaking, they said they noticed results throughout the year.</p><p>&ldquo;When we first gave them our analysis, it would recommend certain stuff in certain situations,&rdquo; Bennett said. &ldquo;Maybe it would say a changeup should be thrown more in this situation. Then, when we&rsquo;d get postgame data later, we&rsquo;d see that more changeups were being thrown and were continuing to be effective.&rdquo;</p><p>&ldquo;When I first saw what they were developing, I was beyond impressed,&rdquo;&nbsp;Morales said. &ldquo;We are very meticulous with collecting data in our program and trying to find ways to learn more about what is and what is not working for our athletes. It&rsquo;s remarkable to see how they can take the data we had and leverage it in a way that allowed us to fine-tune our training.&rdquo;</p><p>Recently, at the 2020 Sports Innovation Hackathon, the group developed a similar solution for baseball. They finished as runner-up in the competition and hope to connect further with the Georgia Tech baseball team in the future.</p><p>&ldquo;Tons of theory has been written on how pitchers should approach sequencing in games, but this is a model that can show you the data about how well that works,&rdquo; Panzarino said.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1585763841</created>  <gmt_created>2020-04-01 17:57:21</gmt_created>  <changed>1585763841</changed>  <gmt_changed>2020-04-01 17:57:21</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A group of undergraduate students at Georgia Tech are working with the softball team to provide an automated upgrade to players’ training.]]></teaser>  <type>news</type>  <sentence><![CDATA[A group of undergraduate students at Georgia Tech are working with the softball team to provide an automated upgrade to players’ training.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-04-01T00:00:00-04:00</dateline>  
<iso_dateline>2020-04-01T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-04-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>520851</item>      </media>  <hg_media>          <item>          <nid>520851</nid>          <type>image</type>          <title><![CDATA[Softball]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[softball.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/softball_0.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/softball_0.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/softball_0.png?itok=TVooho3o]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Softball]]></image_alt>                    <created>1459789200</created>          <gmt_created>2016-04-04 17:00:00</gmt_created>          <changed>1475895289</changed>          <gmt_changed>2016-10-08 02:54:49</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181639"><![CDATA[cc-research; 
ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="633834">  <title><![CDATA[Passing the Torch: Georgia Tech Roboticists Lead Future Generation of Women in the Field]]></title>  <uid>33939</uid>  <body><![CDATA[<p>There&rsquo;s a piece of advice <a href="http://gatech.edu">Georgia Tech</a> Ph.D. student <strong>De&rsquo;Aira Bryant</strong> recalls most often when it comes to her adviser, <a href="http://ic.gatech.edu">School of Interactive Computing</a> Chair <strong>Ayanna Howard</strong>.</p><p>&ldquo;You&rsquo;ve got to start somewhere,&rdquo; said Bryant, a robotics student in the school. &ldquo;I feel like whenever I&rsquo;m going through my research, the way I approach it is I have these grand ideas, and I have to break it down to this and this and that. I&rsquo;m the type who&rsquo;s normally working on four or five things at the same time. Dr. Howard always tells me: &lsquo;Okay, slow down. We have to start somewhere. We have to start somewhere so we have something to move toward.&rsquo;&rdquo;</p><p>It&rsquo;s an appropriate metaphor for Bryant, who began her career in computer science with no previous experience as an undergraduate student at the University of South Carolina. It also applies to all the other women in the field who, like Bryant, rely heavily on those who come before them and pass the torch to those who come after.</p><p>Unlike many robotics students, who have stories about being introduced to the field through Lego Mindstorms kits that allow them to build and program their own robots, Bryant found the concepts of computer science and robotics to be the furthest things from her mind. 
They were completely foreign ideas that she had never given a second thought during her time in middle school and high school.</p><p>&ldquo;That wasn&rsquo;t me,&rdquo; Bryant said.</p><p>Instead, she happened to take an Intro to Java class in her first year. There, she met her first collegiate mentor, <strong>Karina Liles</strong>. Liles was a graduate student who worked in a robotics lab and, after her first semester, invited Bryant to come work with her as an undergraduate assistant. Bryant saw it as a part-time job and a place she could have her own desk. She wasn&rsquo;t thinking about it as much more than that.</p><p>&ldquo;I had no idea what her research was,&rdquo; Bryant said. &ldquo;I knew it was a robotics lab, so that was cool. And she was in education for low-resource communities. I came from a school that didn&rsquo;t offer computer science at all, so I found that appealing.&rdquo;</p><p>Once Bryant was introduced to the research process &ndash; asking and answering new questions, collecting data, programming and testing robots, then seeing children interact with them face-to-face &ndash; she was hooked.</p><p>&ldquo;It made all the difference,&rdquo; she said.</p><p>It was a start, but it was still a new world. Neither of her parents had earned four-year degrees, and her dad had passed away when she was in middle school. When she told her mom and grandmother that she was interested in computer science, she was met with some hesitancy. But, while neither had experience in technology, they had raised her to be inquisitive and to seek out mentorship. 
That&rsquo;s exactly what Bryant did.</p><p>Thrown into the deep end, she relied on Liles and a handful of other women she came across at South Carolina or conferences like <a href="https://humanrobotinteraction.org/">Human Robot Interaction</a> and <a href="https://ghc.anitab.org/">Grace Hopper</a>.</p><p>&ldquo;I was drawn to women in the field, because the nurturing and the support from people who are also in an underrepresented group &ndash; whether it&rsquo;s gender or race or whatever &ndash; they can talk to you about those specific challenges that you might come across,&rdquo; Bryant said.</p><p>Eventually, that led her to Howard.</p><p>After her junior year in Columbia, Bryant applied to a program called <a href="https://cra.org/cra-wp/dreu/">Distributed Research Experiences for Undergraduates</a> (DREU). DREU matches minority undergraduate students with mentors who have signed up to host undergrads in their labs over the summer. Although students can be matched with anyone in the United States, Bryant&rsquo;s mentor happened to be Howard.</p><p>&ldquo;I was so excited,&rdquo; said Bryant, who knew of Howard&rsquo;s research through her own work at South Carolina. The work she was doing with social robots for kids with autism aligned with Howard&rsquo;s, and it wasn&rsquo;t uncommon for the Georgia Tech professor&rsquo;s name to be cited in one of their papers. &ldquo;There was a student matched with a mentor in Hawaii, and everyone thought that was the luckiest one. I was like, &lsquo;No, I&rsquo;m pretty sure I got the best deal.&rsquo;&rdquo;</p><p>Bryant worked on a project in Howard&rsquo;s lab that summer with three other undergrads. Howard was immediately impressed with Bryant because of her unique programming ability.</p><p>&ldquo;I remember needing someone to program the robot, and she was just like, &lsquo;Oh, I can do it,&rsquo;&rdquo; Howard said. 
&ldquo;She impressed me right away, and when it was time for her to choose a graduate program I knew she&rsquo;d fit perfectly in our lab.&rdquo;</p><p>Their work together now is impacting individuals with disabilities, making technology work for everybody, including those with motor, visual, or hearing impairments. They are investigating robot gendering and its impact on human trust, and working toward inclusivity with programs like <a href="http://ai-4-all.org/">AI4All</a>.</p><p>Bryant is using the inspiration that Howard has provided to her and feels a responsibility to continue that for the next generation of women roboticists. She is humbled by people who now look up to her the way she once looked up to Howard, and was left speechless when a young student featured her in a school project for Black History Month.</p><p>When she goes to Grace Hopper, Bryant loves meeting the undergrads and passing on her advice about academics and the challenges women face in the field. She also watches to make sure they are asking questions or calls on those who look like they have a question but are afraid to ask.</p><p>&ldquo;I remember being that person in the room,&rdquo; she said. &ldquo;Women don&rsquo;t just need representation, they need a voice. I want to be their champion, connect them to the right people.&rdquo;</p><p>And her biggest advice to them might sound familiar.</p><p>&ldquo;Just start,&rdquo; she said. &ldquo;You&rsquo;ve got to start somewhere.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1585163943</created>  <gmt_created>2020-03-25 19:19:03</gmt_created>  <changed>1585163943</changed>  <gmt_changed>2020-03-25 19:19:03</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Ph.D. student De'Aira Bryant uses the leadership of adviser Ayanna Howard to help guide her and future generations of women in robotics.]]></teaser>  <type>news</type>  <sentence><![CDATA[Ph.D. 
student De'Aira Bryant uses the leadership of adviser Ayanna Howard to help guide her and future generations of women in robotics.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-03-25T00:00:00-04:00</dateline>  <iso_dateline>2020-03-25T00:00:00-04:00</iso_dateline>  <gmt_dateline>2020-03-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>622962</item>      </media>  <hg_media>          <item>          <nid>622962</nid>          <type>image</type>          <title><![CDATA[De'Aira Bryant]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[unadjustednonraw_thumb_29ba.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/unadjustednonraw_thumb_29ba.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/unadjustednonraw_thumb_29ba.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/unadjustednonraw_thumb_29ba.jpg?itok=rQD6y3c1]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1562099179</created>          <gmt_created>2019-07-02 20:26:19</gmt_created>          <changed>1562099179</changed>          <gmt_changed>2019-07-02 20:26:19</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.cc.gatech.edu/news/628437/startup-zyrobotics-creates-more-opportunities-impact]]></url>        <title><![CDATA[Startup Zyrobotics Creates More Opportunities for 
Impact]]></title>      </link>          <link>        <url><![CDATA[https://www.youtube.com/watch?v=JBg7nZXb1Vo]]></url>        <title><![CDATA[Ph.D. Student Seeks to Help Children Through Robotics]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182940"><![CDATA[cc-research; ic-ai-ml; ic-robotics; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="633041">  <title><![CDATA[Georgia Tech Kicks Off Atlanta's Biggest STEM Party March 6-7]]></title>  <uid>27592</uid>  <body><![CDATA[<p>The 2020 launch of the Atlanta Science Festival, &ldquo;<a href="https://arts.gatech.edu/content/2100-climate-odyssey" target="_blank">2100: A Climate Odyssey</a>&rdquo; at Georgia Tech&rsquo;s Ferst Center for the Arts, takes place March 6 and&nbsp;is designed as an&nbsp;&quot;immersive theatrical experience that transports audience members to a possible future that looks at life after a century of climate change.&quot;</p><p>The ASF opening weekend at Georgia Tech also includes one of the most unique experiences in music performance on March 7. The&nbsp;<a href="https://guthman.gatech.edu/" target="_blank">Guthman Musical Instrument Competition Concert</a>&nbsp;showcases nine&nbsp;global finalists playing unique instruments created for the competition. 
It&#39;s free and open to the public.<br /><br />New this year as part of&nbsp;the Guthman event is the&nbsp;<a href="https://guthman.gatech.edu/fair?mc_cid=5630f343d4&amp;mc_eid=%5bUNIQID%5d" target="_blank">Music, Art, and Technology Fair</a>, hosted by the&nbsp;<a href="http://www.music.gatech.edu/?mc_cid=5630f343d4&amp;mc_eid=%5BUNIQID%5D" target="_blank">Georgia Tech School of Music</a>&nbsp;and&nbsp;<a href="http://www.cycling74.com/?mc_cid=5630f343d4&amp;mc_eid=%5BUNIQID%5D" target="_blank">Cycling &rsquo;74</a>. It&#39;s&nbsp;a unique opportunity to share projects at the intersection of art and technology in a hands-on, interactive, science-fair format.</p><p>The GVU Center at Georgia Tech created interactive graphics to <a href="https://public.tableau.com/views/AtlantaScienceFestival2020/Dashboard1?%3Adisplay_count=y&amp;%3Aorigin=viz_share_link&amp;%3AshowVizHome=no">explore the two-week festival</a> and <a href="https://public.tableau.com/views/AtlantaScienceFestival2020-eventsatGeorgiaTech/Dashboard1?:display_count=y&amp;:origin=viz_share_link:showVizHome=no">find specific events connected to Georgia Tech</a>. 
The institute is one of the founding members of ASF.</p><p>&nbsp;</p><h4><strong>ASF events presented by or taking place at Georgia Tech:</strong></h4><p>&nbsp;</p><h4><strong><em>March 6</em></strong></h4><p><a href="https://atlantasciencefestival.org/launch/" target="_blank"><strong>2100: A Climate Odyssey</strong></a></p><p>Presenting Partners: Science ATL, The Weather Channel, the National Weather Service, Peachtree City, Out of Hand Theater.&nbsp;</p><p>Time and Location: 8 p.m., Ferst Center for the Arts at Georgia Tech</p><p>&nbsp;</p><h4><strong><em>March 7</em></strong></h4><p><strong><a href="https://atlantasciencefestival.org/events-2020/12-steam-at-tech-day/" target="_blank">STEAM at Tech Day</a>&nbsp;&nbsp;&nbsp; </strong></p><p>Presenting Partner: Georgia Tech CEISMC</p><p>Time and Location: 12 p.m., Clough Undergraduate Learning Commons at Georgia Tech</p><p>&nbsp;</p><p><a href="https://atlantasciencefestival.org/events-2020/14-pitch-your-future/" target="_blank"><strong>Pitch Your Future</strong></a></p><p>Presenting Partners:&nbsp;Institute for Electronics and Nanotechnology at Georgia Tech, Jimmy Carter Presidential Library</p><p>Time and Location: 1 p.m.,&nbsp;Carter Presidential Library &amp; Museum</p><p>&nbsp;</p><p><a href="https://atlantasciencefestival.org/events-2020/154-music-art-tech-fair/" target="_blank"><strong>Guthman Music, Art &amp; Technology Fair</strong></a></p><p>Presenting Partners: Georgia Tech&rsquo;s School of Music, Cycling &lsquo;74</p><p>Time and Location: 4 p.m., Ferst Center for the Arts at Georgia Tech</p><p>&nbsp;</p><p><a href="https://atlantasciencefestival.org/events-2020/21-guthman-musical-instrument-competition/" target="_blank"><strong>Guthman Musical Instrument Competition&nbsp; </strong></a></p><p>Georgia Tech&rsquo;s School of Music</p><p>Time and Location: 7 p.m., Ferst Center for the Arts at Georgia Tech</p><p>&nbsp;</p><h4><strong><em>March 10</em></strong></h4><p><a 
href="https://atlantasciencefestival.org/events-2020/40-project-change/" target="_blank"><strong>Project Change: STEM Teachers @ Tech Day</strong></a></p><p>Presenting Partner: Georgia Tech CEISMC&nbsp;&nbsp;&nbsp;&nbsp;</p><p>Time and Location: 9 a.m., Georgia Tech Student Center</p><p>&nbsp;</p><p><a href="https://atlantasciencefestival.org/events-2020/46-playing-mother-nature/" target="_blank"><strong>Playing Mother Nature: A Night of Simulating Earth Science Phenomena!</strong></a></p><p>Presenting Partner: Georgia Tech&rsquo;s School of Earth and Atmospheric Science&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</p><p>Time and Location: 7 p.m., Manuel&#39;s Tavern</p><p>&nbsp;</p><h4><strong><em>March 12</em></strong></h4><p><a href="https://atlantasciencefestival.org/events-2020/69-sober-science-speakeasy/" target="_blank"><strong>Sober Science Speakeasy</strong></a>&nbsp;&nbsp;</p><p>Presenting Partner: Georgia Tech&rsquo;s STEM Comm VIP team&nbsp;</p><p>Time and Location: 7:30 p.m., Coda Building, Midtown</p><p>&nbsp;</p><p><a href="https://atlantasciencefestival.org/events-2020/68-science-riot/" target="_blank"><strong>Science Riot</strong></a>&nbsp;&nbsp;&nbsp;</p><p>Presenting Partners: Georgia Tech, Science Riot</p><p>Time and Location: 7:30 p.m., Highland Inn Ballroom</p><p>&nbsp;</p><h4><strong><em>March 14</em></strong></h4><p><a href="https://atlantasciencefestival.org/events-2020/83-latino-college-stem-fair/" target="_blank"><strong>8th Annual Latino College &amp; STEM Fair</strong></a></p><p>Presenting Partners: Georgia Tech CEISMC Go-STEM&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</p><p>Time and Location: 9 a.m., Georgia Tech Student Center</p><p>&nbsp;</p><p><a href="https://atlantasciencefestival.org/events-2020/85-science-of-star-wars/" target="_blank"><strong>Science of Star Wars</strong></a>&nbsp;</p><p>Presenting Partner: Institute for Electronics and Nanotechnology at Georgia Tech</p><p>Time and Location: 10 a.m., Marcus 
Nanotechnology Building Atrium</p><p>&nbsp;</p><p><a href="https://atlantasciencefestival.org/events-2020/91-investigating-the-nanoscale/" target="_blank"><strong>Investigating the Nanoscale</strong></a>&nbsp;</p><p>Presenting Partner: Institute for Electronics and Nanotechnology at Georgia Tech</p><p>Time and Location: 11 a.m., Marcus Nanotechnology Building Atrium</p><p>&nbsp;</p><h4><strong><em>March 18</em></strong></h4><p><a href="https://atlantasciencefestival.org/events-2020/134-science-improv/" target="_blank"><strong>Science Improv</strong></a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</p><p>Presenting Partner: Georgia Institute of Technology</p><p>Time and Location: 7:30 p.m., Whole World Improv Theater</p><p>&nbsp;</p><p>&nbsp;</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1582818481</created>  <gmt_created>2020-02-27 15:48:01</gmt_created>  <changed>1583431390</changed>  <gmt_changed>2020-03-05 18:03:10</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The two-week Atlanta Science Festival will launch at Georgia Tech and bring diverse STEM programming to campus and metro area.]]></teaser>  <type>news</type>  <sentence><![CDATA[The two-week Atlanta Science Festival will launch at Georgia Tech and bring diverse STEM programming to campus and metro area.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-02-27T00:00:00-05:00</dateline>  <iso_dateline>2020-02-27T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-02-27 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Manager<br /><em>GVU Center and College of Computing</em></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>633045</item>          
<item>633242</item>      </media>  <hg_media>          <item>          <nid>633045</nid>          <type>image</type>          <title><![CDATA[ASF @ GT 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ASF_at_GT 2020_2.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ASF_at_GT%202020_2.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ASF_at_GT%202020_2.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ASF_at_GT%25202020_2.png?itok=MTBChjiz]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1582819019</created>          <gmt_created>2020-02-27 15:56:59</gmt_created>          <changed>1582819087</changed>          <gmt_changed>2020-02-27 15:58:07</gmt_changed>      </item>          <item>          <nid>633242</nid>          <type>image</type>          <title><![CDATA[Guthman Finalists Map 2020]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[2020.guthman.contestants.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/2020.guthman.contestants.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/2020.guthman.contestants.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/2020.guthman.contestants.png?itok=c36TUTA0]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[World map showing where Guthman Competition finalists came from in 2015 through 2020.]]></image_alt>                    <created>1583263209</created>          <gmt_created>2020-03-03 19:20:09</gmt_created>          <changed>1583263209</changed>    
      <gmt_changed>2020-03-03 19:20:09</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://atlantasciencefestival.org/]]></url>        <title><![CDATA[Atlanta Science Festival]]></title>      </link>          <link>        <url><![CDATA[https://public.tableau.com/views/AtlantaScienceFestival2020/Dashboard1?%3Adisplay_count=y&amp;%3Aorigin=viz_share_link&amp;%3AshowVizHome=no]]></url>        <title><![CDATA[INTERACTIVE GRAPHIC: Atlanta Science Festival]]></title>      </link>          <link>        <url><![CDATA[https://public.tableau.com/views/Guthman2020/Dashboarddemo?:showVizHome=no]]></url>        <title><![CDATA[INTERACTIVE GRAPHIC: Guthman Musical Map]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39451"><![CDATA[Electronics and Nanotechnology]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="632082">  <title><![CDATA[Changing the Conversation: Georgia Tech Researchers Provide New Approach to Automated Story Generation]]></title>  <uid>33939</uid>  <body><![CDATA[<p>It&rsquo;s a situation familiar to anyone who&rsquo;s ever communicated with a voice assistant on a smart device. You pose a request: &ldquo;Hey Voice Assistant, tell me a story about Georgia Tech.&rdquo; More often than not, you get a related response &ndash; &ldquo;Georgia Tech is located in Atlanta, Georgia. 
Would you like me to provide you with directions?&rdquo; &ndash; but one with slightly unnatural language and only limited information.</p><p>Despite the enormous strides made in artificial intelligence to develop systems that can answer simple questions and requests, the kinds of natural conversational language humans have with each other when giving more complex directions or telling stories have thus far been out of reach.</p><p>Research from <a href="http://gatech.edu">Georgia Tech</a>&rsquo;s <a href="http://ic.gatech.edu">School of Interactive Computing</a>, however, provides a novel approach that improves the combination of automated story generation with natural language. The development is an important step toward giving AI assistants the capability to converse more naturally with humans.</p><p>&ldquo;Let&rsquo;s think of a future version of Siri or Alexa, where you have a complex task that&rsquo;s not just &lsquo;Look this thing up on the internet,&rsquo; or &lsquo;Tell me what the weather is outside,&rsquo;&rdquo; said Mark Riedl, an associate professor at Georgia Tech and the faculty lead on the research. &ldquo;Maybe you want to plan your day or a birthday party. Think of the response like a little story, a narrative that conveys the requested information.</p><p>&ldquo;It&rsquo;s a missing capability in AI &ndash; they just don&rsquo;t understand us or communicate with us in the same ways that we understand each other.&rdquo;</p><p>Riedl and his team approached the challenge by viewing the exchange of information as stories &ndash; a series of events, one after the other, that leads to some conclusion. Past research on the topic examined patterns in language to identify how stories are constructed &ndash; namely, that a verb generally advances the action and conveys a new event in a story.</p><p>&ldquo;By boiling down these stories drawn from the internet to essential verbs and actions, we can extract patterns from stories better,&rdquo; Riedl said. 
&ldquo;There are a lot of ways to talk about marriage, but at the end of the day someone is marrying someone else.&rdquo;</p><p>This paper, the third in the series, took the next step: If you take away all the words to identify the patterns in a story, you need to be able to put them back in naturally and intelligently in a way that humans are accustomed to. Put simply, it&rsquo;s like building an outline and then filling in the details.</p><p>The system works by building the outline through a neural network trained on sequencing events. With the help of story examples drawn from the internet, it applies machine learning to produce a series of events, one leading to the most likely next outcome. That outline guides a second neural network that applies natural language &ndash; grammar, syntax, spelling, everything else you need to make the story intelligible &ndash; to produce more elaborate sentences.</p><p>&ldquo;If you&rsquo;re asking for directions for how a birthday party should go, you don&rsquo;t want just &lsquo;Jill eats cake; Jill opens presents,&rsquo;&rdquo; Riedl said. &ldquo;You want something more akin to the stories we share as humans. It&rsquo;s actually more difficult for us to process information when it&rsquo;s delivered in a way we&rsquo;re not accustomed to.&rdquo;</p><p>The researchers found that an ensemble approach works best: a series of five algorithms, each with different strengths in accuracy and natural language generation, together produces the best stories. Because no single algorithm is uniformly better at all aspects of the task, each sentence is run through all five, and the output with the highest confidence is selected.</p><p>&ldquo;One technique might provide bland sentences, but is accurate with the actual content,&rdquo; Riedl said. &ldquo;Another might be very good at putting in a narrative flourish, but they fail more often. 
You want that nicer sentence, but you also want it to be able to catch mistakes in the content.&rdquo;</p><p>The ensemble approach scored significantly higher in human studies than the individual algorithms alone. Human trust in AI and robot assistants, Riedl said, is key to their adoption in the future.</p><p>&ldquo;The key is that you want to place that trust in your machine counterpart, but it has to earn that trust on correctness and accuracy,&rdquo; he said.</p><p>The paper is titled <a href="https://arxiv.org/abs/1909.03480"><em>Story Realization: Expanding Plot Events into Sentences</em></a>, and will be presented at the <a href="https://aaai.org/Conferences/AAAI-20/">34<sup>th</sup> AAAI Conference on Artificial Intelligence</a> on Feb. 7-12 in New York City. The research is funded under a grant from <a href="https://www.darpa.mil/">DARPA</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1580831804</created>  <gmt_created>2020-02-04 15:56:44</gmt_created>  <changed>1581101079</changed>  <gmt_changed>2020-02-07 18:44:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Research from Georgia Tech’s School of Interactive Computing provides a novel approach that improves the combination of automated story generation with natural language.]]></teaser>  <type>news</type>  <sentence><![CDATA[Research from Georgia Tech’s School of Interactive Computing provides a novel approach that improves the combination of automated story generation with natural language.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-02-04T00:00:00-05:00</dateline>  <iso_dateline>2020-02-04T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-02-04 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a 
href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>632081</item>      </media>  <hg_media>          <item>          <nid>632081</nid>          <type>image</type>          <title><![CDATA[Amazon Alexa]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[alexa-alexa-talking-amazon-cortana-717235.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/alexa-alexa-talking-amazon-cortana-717235.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/alexa-alexa-talking-amazon-cortana-717235.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/alexa-alexa-talking-amazon-cortana-717235.jpg?itok=C-u6scQ0]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1580831771</created>          <gmt_created>2020-02-04 15:56:11</gmt_created>          <changed>1580831771</changed>          <gmt_changed>2020-02-04 15:56:11</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1317"><![CDATA[News Briefs]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181639"><![CDATA[cc-research; ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>  
        <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="631901">  <title><![CDATA[Jill Watson Team Reaches Semifinals in IBM AI XPrize Competition]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Algorithms that help answer the stream of questions college students have each semester might be welcomed by any instructor who can offload FAQs to such an artificially intelligent teaching assistant (TA).</p><p>Jill Watson &ndash; Georgia Tech&rsquo;s AI designed explicitly for this purpose &ndash; turned four years old this January, with the AI&rsquo;s birthday coinciding with the announcement of the <a href="https://ai.xprize.org/prizes/artificial-intelligence/teams" target="_blank">10 semifinalists for IBM&rsquo;s AI XPrize competition</a>. Georgia Tech&rsquo;s <a href="https://ai.xprize.org/prizes/artificial-intelligence/teams/emprize" target="_blank">emPrize team</a>, led by Professor of Interactive Computing&nbsp;<strong>Ashok Goel </strong>and utilizing Jill Watson as the key technology, was named as one of the semifinalists.</p><p>The competition started in 2016, the year of Jill&rsquo;s arrival in a graduate computer science&nbsp;course at Georgia Tech, and has &ldquo;sought to accelerate the adoption of AI technologies and spark creative, innovative, and audacious demonstrations of the technology that are truly scalable to solve societal grand challenges.&rdquo; After nearly four calendar years, XPrize will name a winner in April.</p><p>As part of the GT emPrize team&rsquo;s work, the Jill Watson TA not only answers student questions about course requirements but can also answer questions about another AI named <a href="http://vera.cc.gatech.edu/" target="_blank">VERA</a>, or the Virtual Ecological Research Assistant.</p><p>Jill helps users 
learn how to use VERA, a system which enables students in GT&rsquo;s Intro to Biology course (and online science seekers) to create their own ecological models from&nbsp;a&nbsp;web browser. Unlike the Jill Watson TA, which is currently used only by GT students, VERA is open to anyone with an internet connection.</p><p>Another part of emPrize&nbsp;is the Jill Social Agent, whose lead designer, <strong>Ida Camacho</strong>, is a recent&nbsp;alumna of Georgia Tech&rsquo;s Online Master of Science in Computer Science program (OMSCS) and understands the pressures and uncertainties of online learning.</p><p>The Jill Social Agent in essence gives students just starting online courses a chance at &ldquo;speed friending&rdquo;. If online students feel they have more peer support and connections from the start, this might&nbsp;translate into success in the course. Hear from Camacho on the <a href="https://www.spreaker.com/user/10751784/tu-ep10-jill-social-ai-online-learninG" target="_blank">Tech Unbound podcast with GVU Center</a>&nbsp;as she reveals some of her AI&rsquo;s design and the educational experience that informed her work on emPrize.</p><p>Learn more at&nbsp;<a href="http://emprize.gatech.edu/">http://emprize.gatech.edu/</a>&nbsp;or explore a <a href="https://public.tableau.com/views/JillWatsonTurns4/Dashboard?:display_count=y&amp;:origin=viz_share_link:showVizHome=no" target="_blank">timeline of Jill&#39;s evolution</a>.&nbsp;</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1580406195</created>  <gmt_created>2020-01-30 17:43:15</gmt_created>  <changed>1580407466</changed>  <gmt_changed>2020-01-30 18:04:26</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Jill Watson – Georgia Tech’s AI designed explicitly for answering student questions about specific courses - was named as one of 10 semifinalists in IBM’s AI XPrize competition.]]></teaser>  <type>news</type>  <sentence><![CDATA[Jill Watson – Georgia Tech’s AI 
designed explicitly for answering student questions about specific courses - was named as one of 10 semifinalists in IBM’s AI XPrize competition.]]></sentence>  <summary><![CDATA[<p>Jill Watson &ndash; Georgia Tech&rsquo;s AI designed explicitly for answering student questions about specific courses&nbsp;&ndash; turned four years old this January, with the AI&rsquo;s birthday coinciding with the announcement of the 10 semifinalists for IBM&rsquo;s AI XPrize competition. Georgia Tech&rsquo;s emprize team, led by Professor of Interactive Computing&nbsp;<strong>Ashok Goel </strong>and utilizing Jill Watson as the key technology, was named as one of the semifinalists.</p><p>&nbsp;</p>]]></summary>  <dateline>2020-01-30T00:00:00-05:00</dateline>  <iso_dateline>2020-01-30T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-01-30 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Manager<br /><em>GVU Center and College of Computing</em></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>631547</item>      </media>  <hg_media>          <item>          <nid>631547</nid>          <type>image</type>          <title><![CDATA[Timeline: Jill Watson AI at 4]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Jill Timeline at 4yo.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Jill%20Timeline%20at%204yo.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Jill%20Timeline%20at%204yo.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Jill%2520Timeline%2520at%25204yo.png?itok=fHH0mYs7]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Timeline: Jill Watson AI at 4yo]]></image_alt>                    <created>1579883925</created>          <gmt_created>2020-01-24 16:38:45</gmt_created>          <changed>1580406385</changed>          <gmt_changed>2020-01-30 17:46:25</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="631545">  <title><![CDATA[Jill Watson, an AI Pioneer in Education, Turns 4]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Tech&rsquo;s most well-known artificially intelligent teaching assistant, Jill Watson, turns four years old this January. The brainchild of <strong>Ashok Goel</strong>, professor in Interactive Computing, and launched at the start of 2016, the virtual TA was introduced into one of the courses for the then-fledgling Online Master of Science in Computer Science (OMSCS) program, now one of Georgia Tech&rsquo;s largest graduate degree programs.</p><p>Students and faculty would be forgiven for thinking Jill Watson is a single teaching assistant. 
Each course that utilizes the Jill TA has its own custom &ldquo;knowledge base&rdquo; that the AI leverages to answer basic student questions 24/7.</p><h5><a href="https://public.tableau.com/views/JillWatsonTurns4/Dashboard?:display_count=y&amp;:origin=viz_share_link:showVizHome=no" target="_blank"><strong>Explore the Timeline of Jill&rsquo;s Growth</strong></a></h5><p>In addition, a new AI, the <strong>Jill Social Agent</strong>, was designed and launched in 2019 expressly to connect students quickly and get them working together. The agent was developed in part as a response to the high attrition rates that plague online learning in general.</p><p>The lead architect for the Jill Social Agent, <strong>Ida Camacho</strong>, OMSCS &rsquo;19, discusses&nbsp;the&nbsp;AI&nbsp;on an episode of the&nbsp;<a href="https://gvu.gatech.edu/tech-unbound-podcast">Tech Unbound Podcast</a> from the GVU Center. It&rsquo;s a fascinating inside look at Camacho&rsquo;s approach to building social structures for online education and her own journey as an OMSCS student.</p><p>Other major milestones for the Jill TA in 2019:</p><ul><li>Introduced in a residential classroom for the first time.</li><li>Deployed in its first non-CS course (Intro to Biology).</li><li>Customized to train users on the <a href="http://vera.cc.gatech.edu/">VERA AI</a>, an ecology modeling system.</li></ul><p>The new decade promises more educational advances made possible by the Jill Watson AI framework. 
Learn more at <a href="http://emprize.gatech.edu/">emprize.gatech.edu</a>.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1579883424</created>  <gmt_created>2020-01-24 16:30:24</gmt_created>  <changed>1579886638</changed>  <gmt_changed>2020-01-24 17:23:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech’s most well-known artificially intelligent teaching assistant, Jill Watson, turns four years old this January.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech’s most well-known artificially intelligent teaching assistant, Jill Watson, turns four years old this January.]]></sentence>  <summary><![CDATA[<p>Georgia Tech&rsquo;s most well-known artificially intelligent teaching assistant, Jill Watson, turns four years old this January.</p>]]></summary>  <dateline>2020-01-24T00:00:00-05:00</dateline>  <iso_dateline>2020-01-24T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-01-24 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Manager<br /><em>GVU Center and College of Computing</em></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>631547</item>      </media>  <hg_media>          <item>          <nid>631547</nid>          <type>image</type>          <title><![CDATA[Timeline: Jill Watson AI at 4]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Jill Timeline at 4yo.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Jill%20Timeline%20at%204yo.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Jill%20Timeline%20at%204yo.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Jill%2520Timeline%2520at%25204yo.png?itok=fHH0mYs7]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Timeline: Jill Watson AI at 4yo]]></image_alt>                    <created>1579883925</created>          <gmt_created>2020-01-24 16:38:45</gmt_created>          <changed>1580406385</changed>          <gmt_changed>2020-01-30 17:46:25</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://gvu.gatech.edu/news/ai-agent-breaks-down-social-barriers-online-education]]></url>        <title><![CDATA[A Closer Look at the Jill Social Agent]]></title>      </link>          <link>        <url><![CDATA[http://emprize.gatech.edu/]]></url>        <title><![CDATA[Georgia Tech Finalist in IBM AI XPrize Competition]]></title>      </link>          <link>        <url><![CDATA[https://www.spreaker.com/user/10751784/tu-ep10-jill-social-ai-online-learninG]]></url>        <title><![CDATA[Tech Unbound EP10: Online Education Gets a Social Boost with Artificial Intelligence]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="630479">  
<title><![CDATA[ML@GT Adds Six New Associate Directors to Leadership Team]]></title>  <uid>34773</uid>  <body><![CDATA[<p>The <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a> continues to diversify and expand its leadership team. Starting in January, the center will add <strong>Deven Desai, Polo Chau, Mark Davenport, Yao Xie, Mark Riedl, </strong>and <strong>George Lan</strong> as associate directors.</p><p>Desai, an associate professor in the <a href="https://www.scheller.gatech.edu/directory/faculty/desai/index.html">Scheller College of Business</a>, will be the center&rsquo;s first associate director for Legal, Policy, Ethics, and Machine Learning. Not a technologist by training, Desai will draw from his experience working at Princeton&#39;s Center for Information Technology Policy and at Google as Academic Research Counsel to help policy makers, legal scholars, and technologists work better together. This includes helping each party understand how a given technology works and what issues it might raise.</p><p>&ldquo;I am excited to be part of ML@GT because of the opportunity to be part of a world-class group of thinkers and to connect our work to the world. I believe there is a need to bridge the worlds of technology and law, policy, and ethics,&rdquo; said Desai. &ldquo;ML@GT is poised to increase not only machine learning insights and breakthroughs but also the way in which machine learning is built and used to serve society. I am honored and thrilled to be part of building that future.&rdquo;</p><p>Xie, an associate professor in the <a href="https://www.isye.gatech.edu/">H. Milton Stewart School of Industrial Systems Engineering (ISyE)</a>, is the first woman to join the leadership team. 
She will serve as the associate director for machine learning and data science, where she will create better synergy between the ongoing research and education efforts in data science and machine learning as Georgia Tech builds a leading program in these areas.</p><p>&ldquo;I am particularly excited to work with the broader community of students and faculty on campus who are interested or involved with machine learning and data science and foster their participation,&rdquo; said Xie.</p><p>Lan, also an associate professor in ISyE, has been appointed as the associate director for machine learning and statistics. In this role, Lan will promote research at the intersections of optimization, statistics, and machine learning and how they apply in engineering. He will also help facilitate communications for students coming from different home colleges or schools across campus.</p><p>&ldquo;I am excited to be joining the team with active and dynamic academic leaders. I look forward to working with them to address a diverse set of challenges that ML@GT faces, e.g., being adaptive to the priorities and criteria for our affiliated faculty members and students across different academic units,&rdquo; said Lan.</p><p>As the associate director for machine learning and artificial intelligence, Riedl, an associate professor in the <a href="https://ic.gatech.edu/">School of Interactive Computing</a>, will coordinate ML@GT&rsquo;s strategy with respect to the broader field of artificial intelligence.</p><p>&ldquo;Artificial intelligence and machine learning have the potential to radically change virtually every aspect of our lives. With thought and care, these technologies can be a force for good. 
Georgia Tech is well-positioned to be a major voice in how technology and policy shape the future,&rdquo; said Riedl.</p><p>With more corporations integrating machine learning and artificial intelligence into their businesses, the center&rsquo;s need for managing those relationships has increased significantly. Chau, an associate professor in the <a href="https://cse.gatech.edu/">School of Computational Science and Engineering</a>, will lead those relationships as the associate director for corporate relations for machine learning.</p><p>&ldquo;I enjoy bringing people together, connecting industry with Georgia Tech researchers, bridging disciplines and innovating at their intersections. I&rsquo;m excited to begin my new role as it will be a great way to help Georgia Tech further expand its national and global footprint,&rdquo; said Chau.</p><p>As the associate director for community and students, Davenport is charged with creating a tight-knit community among faculty and students. Davenport, an associate professor in the <a href="https://www.ece.gatech.edu/">School of Electrical and Computer Engineering</a>, will work closely with the center staff to coordinate events and other opportunities to increase discussion and collaboration between research units.</p><p>The six new members will join <a href="http://ml.gatech.edu/leadership">existing leadership members</a> <strong>Irfan Essa, Justin Romberg, Zsolt Kira, </strong>and <strong>Le Song. </strong></p><h4>About the Machine Learning Center at Georgia Tech</h4><p>The Machine Learning Center at Georgia Tech is an interdisciplinary research center bringing together more than 190 faculty members and 60 machine learning Ph.D. students from across the institute for meaningful collaboration and innovation in machine learning and artificial intelligence. 
Students and faculty are experts in areas including, but not limited to, computer vision, natural language processing, robotics, deep learning, ethics and fairness, computational finance, information security, and logistics and manufacturing. For more information, visit <a href="http://www.ml.gatech.edu">www.ml.gatech.edu</a>.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1578088517</created>  <gmt_created>2020-01-03 21:55:17</gmt_created>  <changed>1578315655</changed>  <gmt_changed>2020-01-06 13:00:55</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The Machine Learning Center at Georgia Tech enters the new year with an expanded leadership team. ]]></teaser>  <type>news</type>  <sentence><![CDATA[The Machine Learning Center at Georgia Tech enters the new year with an expanded leadership team. ]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2020-01-06T00:00:00-05:00</dateline>  <iso_dateline>2020-01-06T00:00:00-05:00</iso_dateline>  <gmt_dateline>2020-01-06 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu&nbsp;</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>630495</item>          <item>630498</item>          <item>630501</item>          <item>630499</item>          <item>630496</item>          <item>630500</item>          <item>630497</item>      </media>  <hg_media>          <item>          <nid>630495</nid>          <type>image</type>          <title><![CDATA[ML@GT adds six new associate directors to the leadership team from across the institute.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ML_AssociateDirectors.png]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/ML_AssociateDirectors.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ML_AssociateDirectors.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ML_AssociateDirectors.png?itok=ZJl3UV3R]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[ML@GT adds six new associate directors to the leadership team]]></image_alt>                    <created>1578314978</created>          <gmt_created>2020-01-06 12:49:38</gmt_created>          <changed>1578315834</changed>          <gmt_changed>2020-01-06 13:03:54</gmt_changed>      </item>          <item>          <nid>630498</nid>          <type>image</type>          <title><![CDATA[Deven Desai, Associate Director for Legal, Policy, Ethics, and Machine Learning]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[desai_deven_profile.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/desai_deven_profile_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/desai_deven_profile_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/desai_deven_profile_0.jpg?itok=C-4JHtul]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Deven Desai]]></image_alt>                    <created>1578315260</created>          <gmt_created>2020-01-06 12:54:20</gmt_created>          <changed>1578315260</changed>          <gmt_changed>2020-01-06 12:54:20</gmt_changed>      </item>          <item>          <nid>630501</nid>          <type>image</type>          <title><![CDATA[Yao Xie, Associate Director for Machine Learning and Data Science ]]></title>          
<body><![CDATA[]]></body>                      <image_name><![CDATA[yao_xie_2013_3.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/yao_xie_2013_3.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/yao_xie_2013_3.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/yao_xie_2013_3.jpg?itok=Gyg9aLXh]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Yao Xie]]></image_alt>                    <created>1578315482</created>          <gmt_created>2020-01-06 12:58:02</gmt_created>          <changed>1578315482</changed>          <gmt_changed>2020-01-06 12:58:02</gmt_changed>      </item>          <item>          <nid>630499</nid>          <type>image</type>          <title><![CDATA[George Lan, Associate Director for Machine Learning and Statistics]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[gl_2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/gl_2.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/gl_2.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/gl_2.jpg?itok=d24L6Zim]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[George Lan]]></image_alt>                    <created>1578315328</created>          <gmt_created>2020-01-06 12:55:28</gmt_created>          <changed>1578315328</changed>          <gmt_changed>2020-01-06 12:55:28</gmt_changed>      </item>          <item>          <nid>630496</nid>          <type>image</type>          <title><![CDATA[Mark Riedl, Associate Director for Machine Learning and Artificial Intelligence]]></title>          <body><![CDATA[]]></body>           
           <image_name><![CDATA[mark_riedl_007.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/mark_riedl_007.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/mark_riedl_007.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/mark_riedl_007.jpg?itok=SjtApm6d]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Mark Riedl]]></image_alt>                    <created>1578315077</created>          <gmt_created>2020-01-06 12:51:17</gmt_created>          <changed>1578315077</changed>          <gmt_changed>2020-01-06 12:51:17</gmt_changed>      </item>          <item>          <nid>630500</nid>          <type>image</type>          <title><![CDATA[Polo Chau, Associate Director for Corporate Relations for Machine Learning]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[polo_chau_550x688_01_2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/polo_chau_550x688_01_2.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/polo_chau_550x688_01_2.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/polo_chau_550x688_01_2.jpg?itok=rBLM8ZpS]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Polo Chau]]></image_alt>                    <created>1578315397</created>          <gmt_created>2020-01-06 12:56:37</gmt_created>          <changed>1578315397</changed>          <gmt_changed>2020-01-06 12:56:37</gmt_changed>      </item>          <item>          <nid>630497</nid>          <type>image</type>          <title><![CDATA[Mark Davenport, Associate Director for Community and Students]]></title>          
<body><![CDATA[]]></body>                      <image_name><![CDATA[davenport-square.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/davenport-square.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/davenport-square.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/davenport-square.jpg?itok=LZJ-RrQu]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Mark Davenport]]></image_alt>                    <created>1578315143</created>          <gmt_created>2020-01-06 12:52:23</gmt_created>          <changed>1578315143</changed>          <gmt_changed>2020-01-06 12:52:23</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="129"><![CDATA[Institute and Campus]]></category>          <category tid="134"><![CDATA[Student and Faculty]]></category>      </categories>  <news_terms>          <term tid="129"><![CDATA[Institute and Campus]]></term>          <term tid="134"><![CDATA[Student and Faculty]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="629748">  
<title><![CDATA[Amazon and Georgia Tech Team Up with Ciara to Inspire Students to Code through Competition to Remix the Singer/Songwriter’s Song “SET”]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Amazon has&nbsp;announced a new addition to its Amazon Future Engineer program &ndash; a&nbsp;<a href="https://cts.businesswire.com/ct/CT?id=smartlink&amp;url=https%3A%2F%2Fwww.amazonfutureengineer.com%2Fearsketch&amp;esheet=52132432&amp;newsitemid=20191120005225&amp;lan=en-US&amp;anchor=music+remix+competition&amp;index=1&amp;md5=55e2731a08604d2df275f4575b568720">music remix competition</a>&nbsp;that teaches students how to write code that makes music. Alongside&nbsp;Georgia Tech&nbsp;and their learn-to-code-through music platform, EarSketch, participating high school students have the opportunity to win prizes by composing an original remix featuring original music stems from Grammy Award winning singer/songwriter&nbsp;<a href="https://cts.businesswire.com/ct/CT?id=smartlink&amp;url=https%3A%2F%2Ftwitter.com%2Fciara%2Fstatus%2F1196820429748334592&amp;esheet=52132432&amp;newsitemid=20191120005225&amp;lan=en-US&amp;anchor=Ciara&amp;index=2&amp;md5=82ff2e20a316ce9fe5bab3182da6ea6b">Ciara</a>&nbsp;and her song, &ldquo;SET&rdquo; from her latest album&nbsp;<em>Beauty Marks</em>. The competition is intended to uniquely activate young people to try computer science and coding. All high school students across the country are encouraged to&nbsp;<a href="https://cts.businesswire.com/ct/CT?id=smartlink&amp;url=https%3A%2F%2Fwww.amazonfutureengineer.com%2Fearsketch&amp;esheet=52132432&amp;newsitemid=20191120005225&amp;lan=en-US&amp;anchor=enter&amp;index=3&amp;md5=caa3175e0a1c7088703a02ee88feed24">enter</a>&nbsp;the competition now through&nbsp;January 20<sup>th</sup>. 
Teaching guides are available for teachers to bring the competition to their classroom, or as part of their introductory computer science curriculum.</p><p>Students will use computer science and coding to build their remix using musical samples from Ciara&rsquo;s song &ldquo;SET,&rdquo; as well as other sounds from the EarSketch library. Students will learn looping (repeating) to extend the length of their song, use strings to create new beats, create custom functions representing different song sections, and learn to upload their own sounds to the EarSketch library.</p><p>&ldquo;We are excited to support the innovative and unique work&nbsp;Georgia Tech&nbsp;and EarSketch are pioneering to give students across the country more access to computer science, coding, and music,&rdquo; said&nbsp;Jeff Wilke, CEO Worldwide Consumer,&nbsp;Amazon. &ldquo;This competition will give thousands of students from underserved and underrepresented communities the opportunity to try something new and fun. It will build their confidence and, most importantly, encourage them to think creatively.&rdquo;</p><p>Students and teachers can learn more about the competition details at&nbsp;<a href="https://cts.businesswire.com/ct/CT?id=smartlink&amp;url=http%3A%2F%2Fwww.amazonfutureengineer.com%2Fearsketch&amp;esheet=52132432&amp;newsitemid=20191120005225&amp;lan=en-US&amp;anchor=www.amazonfutureengineer.com%2Fearsketch&amp;index=4&amp;md5=8ec92013daecf047f0df18a11b07d6d6">www.amazonfutureengineer.com/earsketch</a>&nbsp;- all high school students are encouraged to participate, either in class or on their own. The top three student winners will each receive an all-expenses-paid trip to Amazon&rsquo;s headquarters in&nbsp;Seattle, Washington&nbsp;to be an &ldquo;Amazon Future Engineer&rdquo; for the day. Additional winners will receive a PreSonus Audiobox 96 Studio and Amazon.com Gift Cards. 
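The concepts listed above — looping to extend a song, beat strings, and custom functions for song sections — map onto a short script. The sketch below is only an illustration of that structure, not actual competition code: the fitMedia/makeBeat names mirror EarSketch's API but are defined here as local stubs so the sketch runs anywhere, and the sound names (e.g. CIARA_SET_VOX) are hypothetical, not real stems from "SET."

```python
song = []  # each entry: (sound, track, start_measure)

def fitMedia(sound, track, start, end):
    """Stub mimicking EarSketch: place a sound on a track for measures [start, end)."""
    for measure in range(start, end):
        song.append((sound, track, measure))

def makeBeat(sound, track, measure, beat_string):
    """Stub mimicking EarSketch: '0' triggers the sound on a 16th-note step, '-' rests."""
    for step, ch in enumerate(beat_string):
        if ch == "0":
            song.append((sound, track, measure + step / 16))

def chorus(start):
    """A custom function representing one song section (hypothetical stems)."""
    fitMedia("CIARA_SET_VOX", 1, start, start + 4)
    fitMedia("CIARA_SET_DRUMS", 2, start, start + 4)

# Loop the chorus to extend the song, as described above.
for section_start in (1, 9):
    chorus(section_start)

# Use a string to create a new beat.
makeBeat("KICK", 3, 1, "0---0---0-0-0---")
```

In the real platform the same calls would schedule audio clips on a timeline rather than append tuples to a list.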
The competition opens today and will close on&nbsp;Monday, January 20&nbsp;at&nbsp;11:59PM EST.</p><p>The Bureau of Labor Statistics&nbsp;projects that by 2020 there will be 1.4 million computer-science-related jobs available and only 400,000 computer science graduates with the skills to apply for those jobs. Computer science is the fastest-growing profession within the Science, Technology, Engineering and Math (STEM) field, but only 8% of STEM graduates earn a computer science degree, with a small percentage from underserved backgrounds. In multiple research studies, students using the EarSketch platform significantly increased their positive attitudes towards computing and their intentions to persist in computing, with particularly significant impacts on students from groups historically underrepresented in the field. Female students expressed even greater gains in computing confidence, motivation, and identity as compared to their male counterparts.</p><p>&ldquo;Music and coding are both highly prevalent today, but coding is less apparent&mdash;EarSketch provides an amazing way to experience these creative disciplines simultaneously,&rdquo; said Dr.&nbsp;Roxanne Moore, Senior Research Engineer at&nbsp;Georgia Tech&nbsp;and project lead for the remix competition.</p><p>&ldquo;We know that creativity is the key to success in both music and computer science,&rdquo; said Professor&nbsp;Jason Freeman, Chair of the&nbsp;School of Music&nbsp;at&nbsp;Georgia Tech&nbsp;and co-creator of the EarSketch platform. &ldquo;We&rsquo;re thrilled to partner with&nbsp;Amazon&nbsp;to support more students as they unlock their creative potential through EarSketch.&rdquo;</p><p>To date, more than 375,000 students in all 50 states and over 100 countries have used EarSketch. 
Programs like EarSketch serve Georgia Tech&rsquo;s mission to meet the demand for STEM (science, technology, engineering, and mathematics) professionals by opening the eyes of more students to these engaging and important subjects.</p><p>Launched in&nbsp;November 2018, Amazon Future Engineer is a four-part childhood-to-career program intended to inspire, educate, and prepare children and young adults from underrepresented and underserved communities to pursue careers in the fast-growing field of computer science. Each year,&nbsp;<a href="https://cts.businesswire.com/ct/CT?id=smartlink&amp;url=https%3A%2F%2Fwww.amazonfutureengineer.com%2F&amp;esheet=52132432&amp;newsitemid=20191120005225&amp;lan=en-US&amp;anchor=Amazon+Future+Engineer&amp;index=5&amp;md5=704ffeac8db4d857b848f0c0044dfe7b">Amazon Future Engineer</a>&nbsp;aims to inspire millions of kids to explore computer science; provides over 100,000 young people in over 2,000 high schools access to Intro or AP Computer Science courses; awards 100 students with four-year&nbsp;$10,000&nbsp;scholarships, as well as offers guaranteed and paid&nbsp;Amazon&nbsp;internships to gain work experience. Amazon Future Engineer is part of Amazon&rsquo;s&nbsp;$50 million&nbsp;investment in computer science/STEM education. 
In addition, Amazon Future Engineer has donated more than&nbsp;$10 million&nbsp;to organizations that promote computer science/STEM education across the country.</p><p>AMAZON CONTACT:</p><p>Amazon.com, Inc.<br />Media Hotline<br /><a href="mailto:Amazon-pr@amazon.com">Amazon-pr@amazon.com</a><br /><a href="https://cts.businesswire.com/ct/CT?id=smartlink&amp;url=http%3A%2F%2Fwww.amazon.com%2Fpr&amp;esheet=52132432&amp;newsitemid=20191120005225&amp;lan=en-US&amp;anchor=www.amazon.com%2Fpr&amp;index=8&amp;md5=eadd66973ffd13c490f7098aa8727897">www.amazon.com/pr</a></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1575643868</created>  <gmt_created>2019-12-06 14:51:08</gmt_created>  <changed>1575926637</changed>  <gmt_changed>2019-12-09 21:23:57</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Amazon and Georgia Tech have announced a music remix competition using coding.]]></teaser>  <type>news</type>  <sentence><![CDATA[Amazon and Georgia Tech have announced a music remix competition using coding.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-11-20T00:00:00-05:00</dateline>  <iso_dateline>2019-11-20T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-11-20 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Manager<br /><em>GVU Center and College of Computing</em></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>629749</item>      </media>  <hg_media>          <item>          <nid>629749</nid>          <type>image</type>          <title><![CDATA[Ciara Remix Competition]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Ciara Remix Competition.png]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/Ciara%20Remix%20Competition.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Ciara%20Remix%20Competition.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Ciara%2520Remix%2520Competition.png?itok=mXwSVfuQ]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1575644056</created>          <gmt_created>2019-12-06 14:54:16</gmt_created>          <changed>1575644056</changed>          <gmt_changed>2019-12-06 14:54:16</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://tabsoft.co/2PipJDr]]></url>        <title><![CDATA[Data Viz: Explore and listen to Ciara's Billboard hits]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="628671">  <title><![CDATA[FairVis is Helping Data Scientists Discover Societal Biases in their Machine Learning Models ]]></title>  <uid>34540</uid>  <body><![CDATA[<p>Researchers at Georgia Tech, Carnegie Mellon University, and University of Washington have developed a data visualization system that can help data scientists discover bias in machine learning algorithms.&nbsp;</p><p><a href="https://arxiv.org/pdf/1904.05419.pdf">FairVis</a>, presented at&nbsp;<a href="http://ieeevis.org/year/2019/welcome">IEEE Vis 2019</a>&nbsp;in Vancouver, is the first system to integrate a novel 
technique that allows users to audit the fairness of machine learning models by identifying and comparing different populations in their data sets.&nbsp;</p><p>According to School of Computational Science and Engineering (CSE) Professor and co-investigator&nbsp;<a href="https://poloclub.github.io/polochau/"><strong>Polo Chau</strong></a><strong>,&nbsp;</strong>this feat has never been accomplished by any platform before, and is a major contribution of FairVis to the data science and machine learning communities.</p><p>&ldquo;Computers are never going to be perfect. So, the question is how to help people prioritize where to look in their data, and then, in a scalable way, enable them to compare these areas to other similar or dissimilar groups in the data. By enabling comparison of groups in a data set,&nbsp;FairVis allows data to become very scannable,&rdquo; he said.</p><p>To accomplish this, FairVis uses two novel techniques to find subgroups that are statistically similar.&nbsp;</p><p>The first technique groups similar items together in the training data set, calculates various performance metrics like accuracy, and then shows users which groups of people the algorithm may be biased against. The second technique uses statistical divergence to measure the distance between subgroups, allowing users to compare similar groups and find larger patterns of bias.</p><p>These outputs are then viewed and analyzed through FairVis&rsquo; visual analytics system, which is designed specifically to discover and show intersectional bias.&nbsp;</p><p>Intersectional bias, or bias that is found when looking at populations defined by multiple features, is a mounting challenge for scientists to tackle in an increasingly diverse world.</p><p>&ldquo;While a machine learning algorithm may work very well in general, there may be certain groups for which it fails. 
For example, various face detection algorithms were found to be 30 percent less accurate for darker skinned women than for lighter skinned men. When you look at more specific groups of sex, race, nationality, and more, there can be hundreds or thousands of groups to audit,&rdquo; said&nbsp;Carnegie Mellon University&nbsp;Ph.D. student&nbsp;<a href="https://cabreraalex.com/"><strong>Alex Cabrera</strong></a>.</p><p>Cabrera is the primary investigator of FairVis and has been pursuing this problem since he was an undergraduate student at Georgia Tech.</p><p>&ldquo;During the summer of my junior year I had been researching various topics in machine learning, and discovered some recent work showing how machine learning models can encode and worsen societal biases. I quickly realized that not only was this a significant issue, with examples of biased algorithms in everything from hiring systems to self-driving cars, but that my own work during my internship had the possibility to be biased against lower socioeconomic groups.&rdquo;</p><p>This is when Cabrera reached out to Chau, who then recruited the help of CSE alumnus&nbsp;<a href="https://minsuk.com/"><strong>Minsuk Kahng</strong></a>, CSE Ph.D. student&nbsp;<a href="https://fredhohman.com/"><strong>Fred Hohman</strong></a><strong>,&nbsp;</strong>College of Computing undergraduate student&nbsp;<a href="http://www.willepperson.com/"><strong>Will Epperson</strong></a><strong>,&nbsp;</strong>and University of Washington Assistant Professor&nbsp;<a href="http://jamiemorgenstern.com/"><strong>Jamie Morgenstern</strong></a><strong>.</strong></p><p>Morgenstern is the lead researcher for a number of projects related to fairness in machine learning, including the study Cabrera mentioned about self-driving cars. 
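The first auditing technique described earlier can be sketched in miniature. The example below is our illustration, not FairVis code: it groups toy audit records by feature combinations, computes per-subgroup accuracy, and ranks subgroups worst-first, the way FairVis surfaces groups a model may be biased against. All records and feature names are made up.

```python
from itertools import groupby

# (sex, skin_tone, prediction_correct?) — toy audit records for a classifier
records = [
    ("F", "darker", False), ("F", "darker", False), ("F", "darker", True),
    ("F", "lighter", True), ("F", "lighter", True),
    ("M", "darker", True), ("M", "darker", True),
    ("M", "lighter", True), ("M", "lighter", True), ("M", "lighter", False),
]

def subgroup_accuracy(rows):
    """Accuracy for each (sex, skin_tone) subgroup, lowest accuracy first."""
    rows = sorted(rows, key=lambda r: r[:2])  # groupby needs sorted input
    scores = {}
    for key, grp in groupby(rows, key=lambda r: r[:2]):
        grp = list(grp)
        scores[key] = sum(r[2] for r in grp) / len(grp)
    return sorted(scores.items(), key=lambda kv: kv[1])

# The top of the ranking is the subgroup the model most likely mistreats.
worst_group, worst_acc = subgroup_accuracy(records)[0]
```

A full audit would also weigh subgroup size and compare multiple metrics, as the article notes, but the core "group, score, rank" loop is the same.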
This particular study shows the potentially&nbsp;<a href="https://www.scs.gatech.edu/news/620309/research-reveals-possibly-fatal-consequences-algorithmic-bias">fatal consequences of algorithmic bias</a>, which highlights the severity of software created without fairness embedded into its core.</p><p>FairVis is among the first systems to take a significant step toward understanding and addressing fairness in machine learning, helping to prevent similar headlines from becoming reality.</p><p>However, Cabrera stressed that the solution does not simply end with better data practices.</p><p>&ldquo;Fairness is an extremely difficult problem, a so-called &lsquo;wicked problem&rsquo;, that will not be solved by technology alone,&rdquo; he said.&nbsp;</p><p>&ldquo;Social scientists, policy makers, and engineers need to work together to make inroads and ensure that our algorithms are equitable for all people. We hope FairVis is a step in this direction and helps people start the conversation about how to tackle and address these issues.&rdquo;</p>]]></body>  <author>Kristen Perez</author>  <status>1</status>  <created>1573063454</created>  <gmt_created>2019-11-06 18:04:14</gmt_created>  <changed>1575643490</changed>  <gmt_changed>2019-12-06 14:44:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Researchers present FairVis -  a visual analytics system that enables discovery of user subgroups to discover bias in machine learning models.]]></teaser>  <type>news</type>  <sentence><![CDATA[Researchers present FairVis -  a visual analytics system that enables discovery of user subgroups to discover bias in machine learning models.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-11-06T00:00:00-05:00</dateline>  <iso_dateline>2019-11-06T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-11-06 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  
<sidebar><![CDATA[]]></sidebar>  <email><![CDATA[kristen.perez@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:kristen.perez@cc.gatech.edu">Kristen Perez</a></p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>628667</item>      </media>  <hg_media>          <item>          <nid>628667</nid>          <type>image</type>          <title><![CDATA[FairVis]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[FairVis.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/FairVis.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/FairVis.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/FairVis.jpg?itok=88khOAxB]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A screenshot of a visual analytics system that enables discovery of user subgroups to discover bias in machine learning models]]></image_alt>                    <created>1573063180</created>          <gmt_created>2019-11-06 17:59:40</gmt_created>          <changed>1573063180</changed>          <gmt_changed>2019-11-06 17:59:40</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="4305"><![CDATA[cse]]></keyword>          
<keyword tid="83261"><![CDATA[Polo Chau]]></keyword>          <keyword tid="181315"><![CDATA[cse-dse]]></keyword>          <keyword tid="181220"><![CDATA[cse-ml]]></keyword>          <keyword tid="182995"><![CDATA[FairVis]]></keyword>          <keyword tid="1496"><![CDATA[Ethics]]></keyword>          <keyword tid="9167"><![CDATA[machine learning]]></keyword>          <keyword tid="307"><![CDATA[fairness]]></keyword>          <keyword tid="182996"><![CDATA[Alex Cabrera]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="629259">  <title><![CDATA[Georgia Tech Researchers Explore New Ways to Give Navigation Directions to Robots]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Robots can navigate buildings, but how do they know where to go? While some robots can follow pre-programmed routes, or be controlled by setting waypoints on a map, these methods are inflexible and can be unnatural to use. Researchers at Georgia Tech believe the best way to give robots navigation instructions is by talking to them.</p><p>&ldquo;Giving natural language instructions to a robot is a fundamental research problem on the critical path to developing more flexible domestic robots that can work with people,&rdquo; said <strong>Peter Anderson</strong>, a research scientist at Georgia Tech.</p><p>In a <a href="https://arxiv.org/pdf/1907.02022.pdf">recent paper</a>, Georgia Tech has introduced a new way for robots to reason about navigation instructions in an unknown environment.</p><p>The team created a semantic map representation that updates each time the robot moves or sees something new. To reason about navigation instructions using this map, the lab found a way to leverage an algorithm used in classical robotics and apply it to artificial intelligence. 
The algorithm, called Bayesian state estimation, usually tracks the location of a robot from sensor measurements like lidar and wheel odometry. By manipulating the algorithm, Georgia Tech says their robots can use it to model language instruction inputs instead.</p><p>The paper got its title &quot;Chasing Ghosts: Instruction Following as Bayesian State Tracking&quot; because rather than tracking a robot from sensor measurements, the team is tracking the likely trajectory taken by an ideal agent or human demonstrator in response to the instructions. In this approach, the sensor measurements are the instructions themselves.&nbsp; This algorithm allows the agent to &ldquo;reason&rdquo; about all the different trajectories it could take and the probability of each trajectory when completing a task. By using an explicit map, researchers can easily inspect the model to see where the agent thinks the goal is and where it is likely to move next.</p><p>Currently, the robots move in simulated reconstructions of buildings, and communication is through written text, though some applications and off-the-shelf speech-to-text systems could work in conjunction with the existing system, according to researchers.</p><p>&ldquo;Spoken language would definitely be more natural in many situations, so we might in the future investigate models that go directly from speech to robot actions,&rdquo; said Anderson.</p><p>Anderson particularly likes to think about this work in regard to telepresence robots, though it could be applied to any robot.</p><p>&ldquo;Telepresence robots are a great idea, but they are not as popular as they could be. Maybe we need smarter, more natural robots that just go where you tell them to go and look at what you ask them to look at,&rdquo; said Anderson.</p><p>Think about all of the time that is lost commuting to work and walking to meetings. Imagine how climate change might be positively impacted if people needed to travel less for business. 
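The predict/update cycle behind Bayesian state estimation can be illustrated with a toy filter. This is an assumption-laden sketch, not the paper's implementation: it tracks a belief over cells of a 1-D corridor, where the motion model predicts the agent probably advanced one cell and an instruction-derived likelihood (invented here) reweights each cell.

```python
def normalize(belief):
    """Rescale a belief so its probabilities sum to 1."""
    total = sum(belief)
    return [b / total for b in belief]

def predict(belief, p_move=0.8):
    """Motion model: with probability p_move the agent advanced one cell."""
    n = len(belief)
    new = [0.0] * n
    for i, b in enumerate(belief):
        new[min(i + 1, n - 1)] += p_move * b   # moved forward (wall at the end)
        new[i] += (1 - p_move) * b             # stayed put
    return new

def update(belief, likelihood):
    """Condition the belief on how well each cell matches the instruction."""
    return normalize([b * l for b, l in zip(belief, likelihood)])

# 5-cell corridor, uniform prior; a hypothetical instruction such as
# "stop near the door" yields a likelihood favoring the last two cells.
belief = [0.2] * 5
belief = predict(belief)
belief = update(belief, [0.1, 0.1, 0.2, 0.8, 0.8])
goal = belief.index(max(belief))  # the cell the agent believes is the goal
```

In the paper this same filtering machinery runs over a learned 2-D semantic map, with the language instructions playing the role of the measurements.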
Anderson hopes that this work will allow people to focus more on their meetings, conversations with people, and, perhaps, help with climate change, rather than micromanaging a robot or jetting off around the world.&nbsp;</p><p>This work will be presented in December at the <a href="https://neurips.cc/">Thirty-third Conference on Neural Information Processing Systems (NeurIPS)</a> 2019 in Vancouver, British Columbia.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1574438425</created>  <gmt_created>2019-11-22 16:00:25</gmt_created>  <changed>1575643260</changed>  <gmt_changed>2019-12-06 14:41:00</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The latest work from Georgia Tech researchers finds a way to give better directions to robots.]]></teaser>  <type>news</type>  <sentence><![CDATA[The latest work from Georgia Tech researchers finds a way to give better directions to robots.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-11-22T00:00:00-05:00</dateline>  <iso_dateline>2019-11-22T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-11-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>629258</item>      </media>  <hg_media>          <item>          <nid>629258</nid>          <type>image</type>          <title><![CDATA[Anderson and his co-authors will present this work at at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS) 2019 in Vancouver, British Columbia.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2019-11-08 at 10.53.41 AM.png]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202019-11-08%20at%2010.53.41%20AM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202019-11-08%20at%2010.53.41%20AM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202019-11-08%2520at%252010.53.41%2520AM.png?itok=iGoDhvFG]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Map of robot moving through building]]></image_alt>                    <created>1574438198</created>          <gmt_created>2019-11-22 15:56:38</gmt_created>          <changed>1574438198</changed>          <gmt_changed>2019-11-22 15:56:38</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="629306">  <title><![CDATA[ML@GT Displays Diverse Research Interests at NeurIPS]]></title>  <uid>34773</uid>  <body><![CDATA[<p>With 30&nbsp;papers to present, the <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a> will make a strong showing at this year&rsquo;s Neural Information Processing Systems (NeurIPS) conference, Dec. 
8-14 in Vancouver, British Columbia.</p><p>The conference fosters the exchange of research on the theoretical, technological, biological, and mathematical aspects of neural information processing systems. ML@GT research spans all of the categories, including work on <a href="https://arxiv.org/pdf/1908.07896.pdf">neural data</a>, <a href="https://b.gatech.edu/2NS3Bz9">fairness in machine learning algorithms</a>, and <a href="http://bit.ly/2NEH1Lr">teaching artificial intelligence to work in changing environments</a>.</p><p>&ldquo;NeurIPS continues to be an exciting conference to attend because of the diverse research that is being presented each year. It is one of the most sought-after and anticipated conferences every year, and it&rsquo;s good to see ML@GT have a good variety of papers being accepted,&rdquo; said <strong>Tuo Zhao</strong>, an assistant professor in the <a href="https://www.isye.gatech.edu/">H. Milton Stewart School of Industrial and Systems Engineering (ISyE)</a>. Zhao has three accepted papers.</p><p>NeurIPS also continues to be a hotspot for major technology companies like Google, Microsoft, and Facebook to recruit new talent.&nbsp;</p><p>To see a full list and recaps of ML@GT&rsquo;s accepted papers, <a href="http://bit.ly/2WTlnGo">click here</a>.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1574689819</created>  <gmt_created>2019-11-25 13:50:19</gmt_created>  <changed>1574689819</changed>  <gmt_changed>2019-11-25 13:50:19</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech will present 30 papers at one of the hottest conferences in artificial intelligence.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech will present 30 papers at one of the hottest conferences in artificial intelligence.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-11-25T00:00:00-05:00</dateline>  <iso_dateline>2019-11-25T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-11-25 
00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>628944</item>      </media>  <hg_media>          <item>          <nid>628944</nid>          <type>image</type>          <title><![CDATA[Georgia Tech will present 30 papers at the Thirty-third Conference on Neural Information Processing Systems]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[NeurIPS 2019_Twitter.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/NeurIPS%202019_Twitter_0.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/NeurIPS%202019_Twitter_0.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/NeurIPS%25202019_Twitter_0.png?itok=D7bsi2vp]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[NeurIPS 2019]]></image_alt>                    <created>1573672076</created>          <gmt_created>2019-11-13 19:07:56</gmt_created>          <changed>1573672217</changed>          <gmt_changed>2019-11-13 19:10:17</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      
</groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="628444">  <title><![CDATA[Keep Forgetting Your Password? Try This Novel Virtual Authentication Technique]]></title>  <uid>33939</uid>  <body><![CDATA[<h3><em>First-person Virtual Maze Offers More Memorable, Harder-to-Break Passwords for Infrequent Authentication</em></h3><p>We&rsquo;ve all been there. For the first time in months, you&rsquo;ve been logged out of your social media account and need to log back in. The problem is it&rsquo;s been so long since your last login that you don&rsquo;t remember your password. You try every combination of baby and pet name, sister&rsquo;s birthday, childhood street address &ndash; nothing works, and now you&rsquo;re locked out.</p><p>If only there were a better way to remember these passwords after extended periods of disuse.</p><p>Luckily, researchers at <a href="http://gatech.edu" target="_blank">Georgia Tech</a> have come up with a novel solution to this longstanding problem, applying an old memory technique to new technology to offer users a more effective authentication method. Known as &lsquo;the Memory Palace,&rsquo; the new tool is a three-dimensional virtual labyrinth navigated in the first-person perspective.</p><p>In cases of infrequent authentication, the Memory Palace works in place of an account&rsquo;s password. Users create their own personal path with multiple left or right turns through a maze that must then be recreated to log in to their account.
If the user makes it through the maze, similar to the one found in the old Windows three-dimensional labyrinth screensaver, they gain access.</p><p>Studies evaluating the technique showed that visual-spatial secrets were most memorable if navigated in the three-dimensional first-person perspective. They also showed that, in comparison to Android&rsquo;s 9-dot pattern lock, the Memory Palace was significantly more memorable after one week, was harder to break through shoulder surfing (capturing passwords by looking over someone&rsquo;s shoulders), and was not significantly slower to enter.</p><h3><strong><a href="https://www.youtube.com/watch?v=I02XDR7Mg0">VIDEO: Explore &#39;The Memory Palace&#39;</a></strong></h3><p>&ldquo;Humans have evolved with remarkably persistent and fast-imprinting spatial memories, owing in no small part to our nomadic history,&rdquo; said <a href="http://ic.gatech.edu" target="_blank">School of Interactive Computing</a> Assistant Professor <strong>Sauvik Das</strong>, the lead researcher on the project. &ldquo;Many people can, for example, clearly visualize and mentally walk through their childhood homes, even if they haven&rsquo;t stepped foot in it for decades. They may only need to be shown once or twice how to drive to a new part of a familiar city.</p><p>&ldquo;Our key insight was simple: Why not co-opt this incredibly strong spatial memory system for infrequent authentication?&rdquo;</p><p>This visual-spatial authentication is based upon an old memory technique of the same name, also called the &ldquo;method of loci.&rdquo; That approach pairs visualizations with spatial memory &ndash; familiar information about one&rsquo;s environment &ndash; to quickly and efficiently recall information. World Memory champions have applied this technique in competition for years, associating vivid images along a specific path with digits, letters, or playing cards they are required to memorize.
In fact, the technique dates all the way back to the ancient Greeks and Romans.</p><p>When developing their program, researchers focused on a few key requirements for their method. In addition to security against common attacks like random guessing or shoulder surfing, they needed the authentication secret to be memorable without much practice or reinforcement, and they needed it to be deployable to the public.</p><p>&ldquo;Users are unlikely to accept a solution that requires significant upfront training or effort,&rdquo; said Das, an expert in a field dubbed social cybersecurity that examines social norms that impact the adoption or rejection of security techniques. &ldquo;Also, the solution should be cost-effective and not require specialized hardware. Many authentication solutions have been proposed, but most fail to be widely adopted for these reasons.&rdquo;</p><p>Existing solutions fall short of these requirements. Biometrics, like a thumbprint or facial recognition, require specialized hardware that can be expensive for infrequent use cases. PINs and graphical passwords have problems in long-term memorability without frequent reinforcement, or are otherwise vulnerable to shoulder surfing.</p><p>&ldquo;The Memory Palace addresses each of these concerns with a proven memory technique that can hold up over time but is not easily stolen,&rdquo; Das said.</p><p>Das provided a handful of potential instances of infrequent authentication. Perhaps a session persists for a long period of time, like social media accounts, or a user must log in on a different device than normal, like a Netflix account on a web browser versus a smart TV.
Other situations include occasionally accessed resources, like a conference room secured with a smart lock, or a fallback authentication method in which a secondary secret is needed to recover access to an account where the primary secret has been compromised.</p><p>To deploy to the public, an app could implement the Memory Palace as a means of authenticating users. Alternatively, an operating system like Android could implement it as a means of authenticating into a device and automatically handle authenticating into any existing apps on the device.</p><p>This work was presented in a paper titled <em><a href="https://sauvikdas.com/uploads/paper/pdf/22/file.pdf" target="_blank">The Memory Palace: Exploring Visual-Spatial Paths for Strong, Memorable, Infrequent Authentication</a></em> (Sauvik Das, David Lu, Taehoon Lee, Joanne Lo, Jason I. Hong), at the <a href="https://uist.acm.org/uist2019/" target="_blank">ACM Symposium on User Interface Software and Technology</a> (UIST 2019), which was held&nbsp;Oct.
20-23 in New Orleans.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1572547216</created>  <gmt_created>2019-10-31 18:40:16</gmt_created>  <changed>1572547216</changed>  <gmt_changed>2019-10-31 18:40:16</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[This first-person virtual maze offers more memorable, harder-to-break passwords for infrequent authentication.]]></teaser>  <type>news</type>  <sentence><![CDATA[This first-person virtual maze offers more memorable, harder-to-break passwords for infrequent authentication.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-10-31T00:00:00-04:00</dateline>  <iso_dateline>2019-10-31T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-10-31 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>628443</item>      </media>  <hg_media>          <item>          <nid>628443</nid>          <type>image</type>          <title><![CDATA[The Memory Palace]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2019-10-31 at 2.38.07 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202019-10-31%20at%202.38.07%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202019-10-31%20at%202.38.07%20PM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202019-10-31%2520at%25202.38.07%2520PM.png?itok=_xHMh1oz]]></image_740>            
<image_mime>image/png</image_mime>            <image_alt><![CDATA[The Memory Palace - A person navigates a virtual maze on a smartphone]]></image_alt>                    <created>1572547175</created>          <gmt_created>2019-10-31 18:39:35</gmt_created>          <changed>1572547175</changed>          <gmt_changed>2019-10-31 18:39:35</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182941"><![CDATA[cc-research; ic-cybersecurity; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="145171"><![CDATA[Cybersecurity]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="628437">  <title><![CDATA[Opportunities for Impact: Startup Zyrobotics Helped Ayanna Howard Reach More People]]></title>  <uid>33939</uid>  <body><![CDATA[<p><strong>Ayanna Howard</strong> was not thinking about starting a business. 
Working as a professor in <a href="http://gatech.edu" target="_blank">Georgia Tech</a>&rsquo;s <a href="http://ece.gatech.edu/" target="_blank">School of Electrical and Computer Engineering</a> (ECE) in 2013, she was focused on her research into assistive robotics and therapy gaming applications for children, not on launching a startup outside of her lab.</p><p>Doing research in an environment like Georgia Tech&rsquo;s, however, where entrepreneurship and risk-taking are not only encouraged but required, has a way of making even the clearest of plans veer off in unforeseen directions. Thus, out of her lab came <a href="http://zyrobotics.com/" target="_blank">Zyrobotics</a>, a technology company that develops educational technologies for children with differing abilities.</p><p>For the past six years, Zyrobotics has developed personalized technologies that stimulate social, cognitive, and motor skill development using fun and educational applications. Now there are five products: three hardware and two software. The software comprises about 15 different programs in math, robotics, and coding education. There have been over 600,000 downloads, and about 80 distributors use or distribute the products in clinics and school systems.</p><p>&ldquo;As researchers, we&rsquo;re not only concerned with development,&rdquo; said Howard, now the Chair of Georgia Tech&rsquo;s <a href="http://ic.gatech.edu" target="_blank">School of Interactive Computing</a> (IC). &ldquo;We want to know the impact. What Zyrobotics has done is allowed the research we were doing in the lab to touch so many more people than we otherwise would have done.&rdquo;</p><h3>A proof of concept</h3><p>It started as the work of one of her graduate students in ECE. <strong>Hae Won Park</strong> was finishing up her Ph.D. when she came to a bit of a crossroads.
Trying to decide whether to pursue a career in academia or to, perhaps, go into industry, she looked to Howard for some guidance.</p><p>&ldquo;There was an opportunity with the <a href="https://www.nsf.gov/" target="_blank">National Science Foundation</a> <a href="https://www.nsf.gov/news/special_reports/i-corps/" target="_blank">I-Corps grant</a> where you have to write a proposal, put your ideas down, defend to a program manager, et cetera,&rdquo; Howard said. &ldquo;It seemed like a good program that would allow her to experience all of these aspects in a low-risk way. If it didn&rsquo;t work out, oh well.&rdquo;</p><p>Park&rsquo;s research examined methods for utilizing touchscreen interfaces for accessible human-robot interaction. It was a project called <a href="http://tabaccess.com/" target="_blank">TabAccess</a>, an assistive technology that provides alternative switch inputs to control smartphones and tablets for users with motor impairments.</p><h3><a href="https://www.youtube.com/watch?v=z3q5C2yTxU8" target="_blank">VIDEO: How does TabAccess work?</a></h3><p>Throughout the course of customer discovery, where Park and Howard spoke with varying professionals and potential users, Howard said she realized just how big of a difference the technology could make outside of the lab.</p><p>&ldquo;A year later, it was enough of a concept,&rdquo; Howard said. &ldquo;It looked like we could design something that made sense. The company was founded, and then it went off and did its own thing.&rdquo;</p><h3>A broader impact</h3><p>It was the impact that led Howard to push forward on the project as a startup. At the time, she had been doing robotics educational STEM camps focused on children with special needs. 
Students, who had primarily visual and motor impairments, were taught how to code robots.</p><p>The camps were successful, but the touch points, as Howard called them, were relatively few.</p><p>&ldquo;The touch points were just the kids I was able to recruit along with my clinical collaborator,&rdquo; she said. &ldquo;My touch point was: If I show up, I touched. If I didn&rsquo;t, there was nothing going on. Whereas, in customer discovery, you weren&rsquo;t necessarily speaking with the people you were impacting &ndash; the kids &ndash; but you were speaking with the teachers who interact with kids.&rdquo;</p><p>Suddenly, the impact in her mind shifted from the 1-to-1 relationship of STEM camps to 1-to-100, 1-to-1,000, and more.</p><p>&ldquo;My workshop on a good day had maybe 10 kids,&rdquo; she said. &ldquo;I did these in a good year maybe twice. So, maybe like 20 kids in a year. You can&rsquo;t possibly do what we&rsquo;re doing now without Zyrobotics.&rdquo;</p><h3>&#39;For students, (entrepreneurship) is a no-brainer&#39;</h3><p>Howard said it&rsquo;s this mindset that sets Georgia Tech apart.</p><p>&ldquo;Students have always thought about the impact of what they&rsquo;re doing,&rdquo; she said. &ldquo;Socially responsible engineering. That&rsquo;s always been the core mission. Being an entrepreneur has this aspect of knowing the exact problems you want to attack, versus maybe going into industry and working on someone else&rsquo;s.&rdquo;</p><p>It&rsquo;s important, she said, that academics continue to have a place in the technological market. If it is left to major tech conglomerates, we are up against groupthink and a hesitance to take necessary risks.</p><p>&ldquo;For students, it&rsquo;s a no-brainer to engage in some entrepreneurial pursuit,&rdquo; she said. &ldquo;That mindset of thinking about problems and impacts allows you to view it differently.</p><p>&ldquo;We need to be willing to make mistakes. The probability is that your startup will fail.
But students understand that and still do it. We need to get rid of that fear of failure, or else we&rsquo;ll never make significant change.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1572545925</created>  <gmt_created>2019-10-31 18:18:45</gmt_created>  <changed>1572545925</changed>  <gmt_changed>2019-10-31 18:18:45</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[For the past six years, Zyrobotics has developed personalized technologies that stimulate social, cognitive, and motor skill development using fun and educational applications.]]></teaser>  <type>news</type>  <sentence><![CDATA[For the past six years, Zyrobotics has developed personalized technologies that stimulate social, cognitive, and motor skill development using fun and educational applications.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-10-31T00:00:00-04:00</dateline>  <iso_dateline>2019-10-31T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-10-31 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>628356</item>      </media>  <hg_media>          <item>          <nid>628356</nid>          <type>image</type>          <title><![CDATA[Ayanna Howard's Zyrobotics]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ayanna_zyrobotics.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ayanna_zyrobotics.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ayanna_zyrobotics.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ayanna_zyrobotics.png?itok=wS8pNhUe]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Ayanna Howard's Zyrobotics]]></image_alt>                    <created>1572456199</created>          <gmt_created>2019-10-30 17:23:19</gmt_created>          <changed>1572456199</changed>          <gmt_changed>2019-10-30 17:23:19</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182940"><![CDATA[cc-research; ic-ai-ml; ic-robotics; ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="627626">  <title><![CDATA[AI Agent Breaks Down Social Barriers in Online Education]]></title>  <uid>27592</uid>  <body><![CDATA[<h5><strong>On the internet, students are able to take courses on the couch (or anywhere) and set their own pace for learning. These and other factors have contributed to an explosion in web enrollments.</strong></h5><p>But online convenience can come at a cost to building social connections. 
The lack of human interaction and support can be a direct cause of high dropout rates in web courses.</p><p>To directly address social barriers in virtual classes, an artificially intelligent system from Georgia Tech has been designed to connect online students quickly to their peers. It is being deployed in the institute&rsquo;s Online Master of Science in Computer Science program (OMSCS) as well as in two campus classes in fall semester 2019.</p><p>Using the <a href="http://emprize.gatech.edu/"><strong>Jill Watson AI framework</strong></a>, the social agent is ultimately intended to help students from different walks of life adapt more quickly to rigorous course requirements and to foster a community where students can build their own support structures.</p><p>&ldquo;In previous semesters, we had what we called an introduction agent that responded to student introductions and greeted students. Now we have a more fully realized social AI agent that can help students connect virtually and in real life,&rdquo; said <strong>Ida Camacho</strong>, the lead engineer for the redesigned AI.</p><p>Encouraging social interactions among students using the Jill social agent demanded that Camacho rethink the AI&rsquo;s construct. Questions of privacy came up early on, and researchers found out from testing that if the social agent was too personal, students might get distracted playing with it.</p><blockquote><h5><em><strong>We wanted students to feel like they are part of the community without giving up their anonymity.</strong> - Ida Camacho, Lead AI Designer</em></h5></blockquote><p>Using student introductions in the online forum, researchers prompted students to share personal details in order to help build a model for the agent.
Using this unstructured data presented its own challenges, such as when the system encountered a word like Paris and had to parse out whether it referred to a location or to a certain blonde celebrity.</p><p>One of Camacho&rsquo;s insightful designs centered on creating summaries of student information that are viewable by those enrolled.</p><p>When students enter the forum and introduce themselves now, the Jill social agent can immediately share top results by percentage of classmates based on location, time zone, other courses being taken, and primary interests.</p><p>&ldquo;We wanted students to feel like they are part of the community without giving up their anonymity,&rdquo; said Camacho. &ldquo;Increasing student engagement and creating micro-communities are two of our primary goals.&rdquo;</p><p>Students can also choose to join conversations based on any area of interest (location, hobbies, etc.) using the hashtag #ConnectMe, which allows them to see and click on individual student names for those who opt in.</p><p>Based on the responses, students have already taken to the new and improved Jill.</p><p>&ldquo;One thing that surprised me was that students started trying to connect IRL, or in real life,&rdquo; said Camacho. &ldquo;They wanted to set up study groups and meet each other. This was happening all over the place, like New York City, Austin, and Tokyo.&rdquo;</p><p>Camacho suspected there might be a hunger for more social interactions in online courses, and she is fully committed to delivering the best student experience on this front.</p><p>&ldquo;I envision the social agent being used more than just at the start of classes,&rdquo; she said.
&ldquo;It&rsquo;s already creating some social glue, getting students to talk right away so they don&rsquo;t feel like they&rsquo;re in this all alone.</p><p>&ldquo;Jill can keep the conversation going, and I&rsquo;m planning for the AI at the end of the semester to share recommendations by students on courses they&rsquo;ve taken.&rdquo;</p><p>Camacho in one sense is the ideal person to head the Design &amp; Intelligence Lab&rsquo;s new Jill social agent initiative. The Fresno, Calif., resident is a recent alumna of the OMSCS program and knows how important it was to stay engaged with her peers.</p><p>&ldquo;I feel like if I hadn&rsquo;t met anyone I might not have been as successful. My community-building started when others invited me into their study groups and I became a TA.&rdquo;</p><p>She jokes that when she was a student, she enrolled in the program&rsquo;s Knowledge-Based AI course &ndash; where the Jill Watson virtual TA was deployed using a pseudonym &ndash; to see if she could pick out the AI among the human TAs helping students in the online forums.</p><p>&ldquo;I guessed wrong,&rdquo; she said, laughing. &ldquo;I ended up thinking that it was the head TA. He was online all the time answering student questions.
How can someone be online that much?&rdquo;</p><p>Jill Watson being indistinguishable from its human counterparts might be taken as a good sign by the lab&rsquo;s researchers as they continue to build the future of AI and help students from around the world succeed in pursuing online learning.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1571231665</created>  <gmt_created>2019-10-16 13:14:25</gmt_created>  <changed>1571258417</changed>  <gmt_changed>2019-10-16 20:40:17</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[To directly address social barriers in virtual classes, an artificially intelligent system from Georgia Tech has been designed to connect online students quickly to their peers. ]]></teaser>  <type>news</type>  <sentence><![CDATA[To directly address social barriers in virtual classes, an artificially intelligent system from Georgia Tech has been designed to connect online students quickly to their peers. ]]></sentence>  <summary><![CDATA[<p>To directly address social barriers in virtual classes, an artificially intelligent system from Georgia Tech has been designed to connect online students quickly to their peers. 
It is being deployed in the institute&rsquo;s Online Master of Science in Computer Science program (OMSCS) as well as two campus classes fall semester 2019.</p>]]></summary>  <dateline>2019-10-16T00:00:00-04:00</dateline>  <iso_dateline>2019-10-16T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-10-16 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Manager<br /><em>GVU Center and College of Computing</em></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>627694</item>          <item>627627</item>      </media>  <hg_media>          <item>          <nid>627694</nid>          <type>image</type>          <title><![CDATA[Ida Camacho, Lead Architect for Jill Watson AI Social Agent]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ida_camacho_web.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ida_camacho_web.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ida_camacho_web.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ida_camacho_web.jpg?itok=-3vGevjj]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1571258375</created>          <gmt_created>2019-10-16 20:39:35</gmt_created>          <changed>1571258851</changed>          <gmt_changed>2019-10-16 20:47:31</gmt_changed>      </item>          <item>          <nid>627627</nid>          <type>image</type>          <title><![CDATA[Jill Watson AI Social Agent]]></title>          <body><![CDATA[]]></body>             
         <image_name><![CDATA[socialAgent 500x500.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/socialAgent%20500x500.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/socialAgent%20500x500.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/socialAgent%2520500x500.png?itok=0M6awi0f]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1571232643</created>          <gmt_created>2019-10-16 13:30:43</gmt_created>          <changed>1571232643</changed>          <gmt_changed>2019-10-16 13:30:43</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://emprize.gatech.edu/]]></url>        <title><![CDATA[What's New with Jill? This fall's AI hat trick at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="627578">  <title><![CDATA[Jill Watson Now Fielding Questions on New AI-enabled Research Tool]]></title>  <uid>32045</uid>  <body><![CDATA[<p>A new artificially intelligent (AI) research tool that harnesses the power of the Smithsonian Institution&rsquo;s massive&nbsp;<a href="https://eol.org" target="_blank">Encyclopedia of Life</a>&nbsp;(EOL) ecological database debuted this semester at Georgia Tech.&nbsp;</p><p>The <a href="http://vera.cc.gatech.edu/" target="_blank">virtual ecological research assistant, known as 
VERA</a>, was developed at Georgia Tech and enables students to perform virtual experiments to explain existing ecological systems or to predict possible outcomes based on variables they input into the tool.</p><h4><strong>Getting to Know VERA</strong></h4><p>&ldquo;People using VERA have access to the EOL and can test a hypothesis using countless organisms, make as many changes to variables as they want, and study the effects on any ecosystem through real-time modeling,&rdquo; said&nbsp;<a href="https://www.linkedin.com/in/sungeun-an-89730063/" target="_blank"><strong>Sungeun An</strong>, human-centered computing Ph.D. student</a> and lead developer of the AI system.</p><p>&ldquo;This is a unique opportunity that doesn&rsquo;t exist anywhere else.&rdquo;</p><p>Although the EOL has extensive data entries for more than two million species, An says that VERA has an intuitive user interface and design that is relatively easy to use.</p><p>&ldquo;Students don&rsquo;t need extensive scientific knowledge or programming and math skills to use VERA. They can build a conceptual model with simple visual cues on the computer screen, such as dragging elements or selecting input options,&rdquo; said An.</p><h4><strong>Combining the Strength&nbsp;of Two AIs</strong></h4><p>However, to get the most out of VERA, An says there can be a learning curve.</p><p>To flatten the&nbsp;curve and help students optimize their experience with VERA, An and her fellow researchers turned to&nbsp;Jill Watson, the <a href="https://www.wsj.com/articles/if-your-teacher-sounds-like-a-robot-you-might-be-on-to-something-1462546621" target="_blank">famed AI-enabled virtual teaching assistant (TA) that premiered in 2016</a>&nbsp;supporting Georgia Tech&rsquo;s <a href="http://www.omscs.gatech.edu" target="_blank">online Master of Science in Computer Science (OMSCS) program</a>.</p><p>Jill Watson&nbsp;answers student questions about VERA via the collaborative messaging app, Slack. 
These range from technical questions about the tool &ndash; &ldquo;How do I add a new project?&rdquo; &ndash; to subject matter questions &ndash; &ldquo;What is consumption rate?&rdquo;</p><p>&ldquo;Leveraging the Jill Watson virtual TA and VERA together is a powerful demonstration of how to scale technology to serve more populations and provide access to the world&rsquo;s scientific knowledge,&rdquo; said&nbsp;<a href="https://www.cc.gatech.edu/people/ashok-goel" target="_blank"><strong>Ashok Goel</strong>, professor of Interactive Computing</a> and director of the <a href="http://dilab.gatech.edu/" target="_blank">Design &amp; Intelligence Lab, which created both AI agents</a>.</p><p>Combining the strength of the two AI agents, said Goel, is part of <a href="https://emprize.gatech.edu" target="_blank">an intentional approach to rethinking instructional design for online learning</a>.</p><p>&ldquo;VERA is a significant advancement for artificial intelligence in science education and meant to be used anywhere by anyone interested in science exploration, so making it as accessible as possible is key to the system&rsquo;s adoption,&rdquo; Goel said.</p><p>Students and others using VERA &ndash; it&rsquo;s also publicly available&nbsp;and linked on the Smithsonian&rsquo;s EOL homepage &ndash; can learn more through&nbsp;a <a href="https://www.youtube.com/playlist?list=PLwXogtSxXaLCP4AXU_VFUP92TVmotGLMv" target="_blank">video series produced by Georgia Tech</a>.</p><p>The videos demonstrate VERA&rsquo;s capabilities using kudzu growth in the southeastern United States as an example.
The videos are co-hosted by&nbsp;<a href="http://www.emilygweigelphd.com" target="_blank"><strong>Emily Weigel</strong>, School of Biological Sciences</a> instructor for the biology course using VERA, and <a href="https://www.cc.gatech.edu/fac/Spencer.Rugaber/" target="_blank">College of Computing faculty member <strong>Spencer Rugaber</strong></a>.</p><p>VERA research is funded by a grant from the National Science Foundation, #NSF-1636848.</p><p>For more information about Georgia Tech&#39;s emPRIZE, contact&nbsp;<a href="mailto:jpreston@cc.gatech.edu?subject=Jill%20Watson%20Helping%20With%20Questions%20on%20New%20Research%20AI">Joshua Preston, research communications manager</a>.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1571081465</created>  <gmt_created>2019-10-14 19:31:05</gmt_created>  <changed>1571183230</changed>  <gmt_changed>2019-10-15 23:47:10</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new AI-enabled research tool powered by the Smithsonian debuted in an undergraduate biology class at Georgia Tech this semester.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new AI-enabled research tool powered by the Smithsonian debuted in an undergraduate biology class at Georgia Tech this semester.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-10-14T00:00:00-04:00</dateline>  <iso_dateline>2019-10-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-10-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[albert.snedeker@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Albert Snedeker, Communications Manager<br /><a href="mailto:albert.snedeker@cc.gatech.edu?subject=Jill%20Watson%20Answering%20Questions%20on%20Research%20AI%20Tool">albert.snedeker@cc.gatech.edu</a></p><p>Joshua Preston, Research Communications Manager<br /><a href="mailto: jpreston@cc.gatech.edu">jpreston@cc.gatech.edu</a><br 
/>&nbsp;</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>627580</item>          <item>627584</item>      </media>  <hg_media>          <item>          <nid>627580</nid>          <type>image</type>          <title><![CDATA[Jill Watson 2019 AI Teaching Assistant]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[093086626-technology-and-science-abstrac.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/093086626-technology-and-science-abstrac.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/093086626-technology-and-science-abstrac.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/093086626-technology-and-science-abstrac.jpeg?itok=OCisAdX6]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Stock image of personified female AI looking at reflection in mirror]]></image_alt>                    <created>1571083583</created>          <gmt_created>2019-10-14 20:06:23</gmt_created>          <changed>1571083583</changed>          <gmt_changed>2019-10-14 20:06:23</gmt_changed>      </item>          <item>          <nid>627584</nid>          <type>image</type>          <title><![CDATA[Sungeun An - Ph.D. 
Human-Centered Computing Student]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Sungeun An_human-centered-computingPhD.-student-2019.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Sungeun%20An_human-centered-computingPhD.-student-2019.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Sungeun%20An_human-centered-computingPhD.-student-2019.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Sungeun%2520An_human-centered-computingPhD.-student-2019.jpg?itok=qfYjdDeU]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Sungeun An, Georgia Tech human-centered computing PhD student]]></image_alt>                    <created>1571086076</created>          <gmt_created>2019-10-14 20:47:56</gmt_created>          <changed>1571086076</changed>          <gmt_changed>2019-10-14 20:47:56</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://emprize.gatech.edu]]></url>        <title><![CDATA[Georgia Tech’s emPRIZE: AI-Powered Learning. Anytime. 
Anywhere.]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="66442"><![CDATA[MS HCI]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="9167"><![CDATA[machine learning]]></keyword>          <keyword tid="182669"><![CDATA[VERA]]></keyword>          <keyword tid="169183"><![CDATA[Jill Watson]]></keyword>          <keyword tid="182670"><![CDATA[goel]]></keyword>          <keyword tid="168873"><![CDATA[Smithsonian]]></keyword>          <keyword tid="182671"><![CDATA[encyclopedia of life]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="627425">  <title><![CDATA[Premier Computer Vision Conference Accepts 10 Georgia Tech Papers]]></title>  <uid>34773</uid>  <body><![CDATA[<p>From helping chair umpires make better line calls in professional tennis to teaching robots to &ldquo;see&rdquo;, the field of computer vision continues to expand and become a part of people&rsquo;s everyday lives. A subfield of artificial intelligence, computer vision teaches computers to understand and interpret the visual world through photos or videos.</p><p>The <a href="http://iccv2019.thecvf.com/">International Conference on Computer Vision (ICCV)</a> takes place from Oct. 27 to Nov. 2 and brings together researchers from Georgia Tech and around the world to discuss recent breakthroughs and research in the field. 
Researchers in the <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a> have ten papers accepted at the conference.</p><p><a href="http://ml.gatech.edu/">School of Interactive Computing (IC)</a> and ML@GT associate professor <strong>Devi Parikh</strong> leads with seven research papers. Her work spans from <a href="https://www.voguebusiness.com/technology/facebook-ai-fashion-styling">using artificial intelligence (AI) to help people make more stylish outfit choices</a> to <a href="http://bit.ly/2ndC6qv">embodied visual recognition</a>.</p><p>IC assistant professor <strong>Judy Hoffman</strong> and professor <strong>James Rehg</strong> are 2019 area chairs.</p><p>&ldquo;As the computer vision field continues to expand and create novel ideas, conferences like ICCV become increasingly important. There was a lot of impressive work submitted to the conference this year. With computer vision being one of ML@GT&rsquo;s strongest areas, I&rsquo;m thrilled to see the center&rsquo;s presence in this premier conference,&rdquo; said Hoffman.</p><p>Other work from Georgia Tech includes papers on <a href="https://mlatgt.blog/2019/09/10/overcoming-large-scale-annotation-requirements-for-understanding-videos-in-the-wild/">lessening the need for additional annotation in videos</a>, making vision and language models more grounded, and <a href="http://bit.ly/2ndC6qv">agents learning to move to better perceive objects.</a></p><p>&quot;Having a paper accepted, especially as an oral presentation, especially in a top conference, gives me lots of confidence and encouragement for my Ph.D. research. I can&#39;t wait to attend ICCV to share my work, talk with other talented people, and learn other interesting topics in both academic and industrial areas,&quot; said <strong>Min-Hung Chen</strong>, a sixth-year electrical and computer engineering Ph.D.
student.</p><p>Organized by IEEE, ICCV is one of the premier international computer vision conferences and will take place at the COEX Convention Center in Seoul, South Korea.</p><p>For more information on ML@GT&rsquo;s involvement with the conference, visit <a href="http://bit.ly/339BYaS">http://bit.ly/339BYaS</a>.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1570650888</created>  <gmt_created>2019-10-09 19:54:48</gmt_created>  <changed>1570709503</changed>  <gmt_changed>2019-10-10 12:11:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The Machine Learning Center will make a splash at the International Conference on Computer Vision later this month.]]></teaser>  <type>news</type>  <sentence><![CDATA[The Machine Learning Center will make a splash at the International Conference on Computer Vision later this month.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-10-10T00:00:00-04:00</dateline>  <iso_dateline>2019-10-10T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-10-10 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[allie.mcfadden@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>627424</item>      </media>  <hg_media>          <item>          <nid>627424</nid>          <type>image</type>          <title><![CDATA[Seoul, South Korea]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[sunyu-kim-HjsWTyyVDgg-unsplash.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/sunyu-kim-HjsWTyyVDgg-unsplash.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/sunyu-kim-HjsWTyyVDgg-unsplash.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/sunyu-kim-HjsWTyyVDgg-unsplash.jpg?itok=1oyU5BFP]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1570650742</created>          <gmt_created>2019-10-09 19:52:22</gmt_created>          <changed>1570650742</changed>          <gmt_changed>2019-10-09 19:52:22</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="625193">  <title><![CDATA[Firefighters Have Mixed Response to Wearable Tech for Emergency Work]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Computing technology has shaped modern offices and retooled how many businesses operate. 
Now, as technology gets smaller, cheaper, and more connected, jobs that aren&rsquo;t bound to a desk are seeing similar changes.</p><p>A new study from Georgia Tech shows how advanced computing tech worn by firefighters impacts the nature of work for emergency responders, and how front-line firefighters and their commanders view its usefulness.</p><p>Researchers sought to establish the effects of a wearable device used by firefighters on the job as well as how companies might better design devices for more physical types of labor.</p><p>&ldquo;For emergency responders, work is carried out with the pressure of someone&rsquo;s life or property on the line based on how well the job is done,&rdquo; said <strong>Alyssa Rumsey</strong>, Ph.D. student in Digital Media, who conducted the study. &ldquo;Firefighting is unlike office work, which is typically examined in human-computer interaction research; simply put, the stakes are higher.&rdquo;</p><p>The findings showed that the wearable device went through many iterations &ndash; originally it was a heads-up display (HUD) that projected situational information onto a firefighter&rsquo;s mask. This proved too distracting, so the design was revised to display only the firefighter&rsquo;s biometric information.</p><p>Ultimately the tool became a wrist-worn device that sent the biometric information to the on-scene commander rather than the front-line firefighters, letting the commander see the vital signs of the emergency responders in real time through a web interface.</p><p>&ldquo;Firefighters reportedly didn&rsquo;t have time to react to the information during the fire suppression because their attention was focused solely on the task at hand,&rdquo; Rumsey said.</p><p>The final device could be used to remotely measure the physical conditions of firefighters in the field.
The device was not tested in any live fires but was used extensively on &ldquo;job duty courses&rdquo; at two Georgia fire departments where routine training exercises by firefighters in full gear took place. Exercises included a series of obstacles such as ladder runs, hose drags, tire pulls, and crawling.</p><p>The device allowed managers to identify personnel who were pushing too hard and those who weren&rsquo;t pushing hard enough based on heart rate spikes and overall activity during exercises. Supervisors could then call out those individuals and issue them commands either in person or via radio communications.</p><p>With overexertion and stress accounting for more than 50 percent of all firefighter deaths, the real-time biometric data potentially allows commanders to assess who on their teams is in trouble and pull them from a scene.</p><p>Rumsey found that this clashed with some firefighters&rsquo; sense of identity, one that valued getting the job done and putting the safety of others above their own. Some participants rejected the idea of using the wearable tech in a real emergency fire.</p><p>As one participant put it: &ldquo;It would be a way for the chief to see someone&rsquo;s heart rate, and be like, &lsquo;Yeah I know this person, he&#39;s gung ho, he&#39;s not gonna quit. It&#39;s time to pull him out.&rsquo;&rdquo;</p><p>Firefighters discussed how overreliance on technology sometimes makes even seasoned pros forget the basics. One participant vividly described an incident where firefighters relied on a thermal imaging camera to view temperatures in a room and assess its safety:</p><p>&ldquo;He scans the room, sees the floors there.
Takes off walking through the middle of the living room floor and never went back to the basics of &lsquo;sounding the floor.&rsquo; Him and his other two guys with him, fell through the floor and burned to death.&rdquo;</p><p>While the wearable device was viewed favorably in training as a way to improve physical fitness, reduce obesity, and generate camaraderie, how it might be implemented beyond that was not clear.</p><p>&ldquo;We&rsquo;ve seen that introducing this technology impacts identity, power dynamics, and organizational structures within fire departments,&rdquo; Rumsey said. &ldquo;Smart technology in safety-critical settings, such as fire scenarios, can exacerbate risks rather than lessen them. Understanding an organization is key to implementing technology in some of these more physically demanding jobs.&rdquo;</p><p>Rumsey and co-investigator <strong>Chris LeDantec</strong>, assistant professor of digital media, published their findings in the paper <em><a href="https://ledantec.net/wp-content/uploads/2019/05/disfp1096-rumsey.pdf">Clearing the Smoke: The Changing Identities and Work in Firefighting</a></em>&nbsp;in&nbsp;the Proceedings of the Association for Computing Machinery&rsquo;s 2019 Conference on&nbsp;Designing Interactive Systems.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1566912699</created>  <gmt_created>2019-08-27 13:31:39</gmt_created>  <changed>1570569158</changed>  <gmt_changed>2019-10-08 21:12:38</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new study from Georgia Tech shows how advanced computing tech worn by firefighters impacts the nature of work for emergency responders, and how front-line firefighters and their commanders view its usefulness.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new study from Georgia Tech shows how advanced computing tech worn by firefighters impacts the nature of work for emergency responders, and how front-line firefighters and their
commanders view its usefulness.]]></sentence>  <summary><![CDATA[<p>A new study from Georgia Tech shows how advanced computing tech worn by firefighters impacts the nature of work for emergency responders, and how front-line firefighters and their commanders view its usefulness.</p>]]></summary>  <dateline>2019-08-27T00:00:00-04:00</dateline>  <iso_dateline>2019-08-27T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-08-27 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Manager<br /><em>GVU Center and College of Computing</em></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>625196</item>          <item>625197</item>          <item>625198</item>      </media>  <hg_media>          <item>          <nid>625196</nid>          <type>image</type>          <title><![CDATA[Firefighter on emergency scene]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[firefight pic_web.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/firefight%20pic_web.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/firefight%20pic_web.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/firefight%2520pic_web.png?itok=iAoONdra]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1566913056</created>          <gmt_created>2019-08-27 13:37:36</gmt_created>          <changed>1566913056</changed>          <gmt_changed>2019-08-27 13:37:36</gmt_changed>      </item>          <item>          <nid>625197</nid>  
        <type>image</type>          <title><![CDATA[Alyssa Rumsey]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Rumsey, Alyssa_web.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Rumsey%2C%20Alyssa_web.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Rumsey%2C%20Alyssa_web.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Rumsey%252C%2520Alyssa_web.png?itok=Lqnq3TFi]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1566913095</created>          <gmt_created>2019-08-27 13:38:15</gmt_created>          <changed>1566913095</changed>          <gmt_changed>2019-08-27 13:38:15</gmt_changed>      </item>          <item>          <nid>625198</nid>          <type>image</type>          <title><![CDATA[Chris Le Dantec]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Le Dantec, Chris_web.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Le%20Dantec%2C%20Chris_web.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Le%20Dantec%2C%20Chris_web.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Le%2520Dantec%252C%2520Chris_web.png?itok=pdaw99n9]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1566913139</created>          <gmt_created>2019-08-27 13:38:59</gmt_created>          <changed>1566913139</changed>          <gmt_changed>2019-08-27 13:38:59</gmt_changed>      </item>      </hg_media>  <related>          <link>        
<url><![CDATA[https://www.spreaker.com/user/10751784/ep8-don-t-get-burned-understanding-tech-]]></url>        <title><![CDATA[Tech Unbound EP8: Don’t Get Burned. Understanding Tech Adoption Among Firefighters]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="627023">  <title><![CDATA[New $1.2 Million NSF Grant Aims to Improve Treatment for PTSD Patients]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Post-traumatic stress disorder (PTSD), particularly among veterans returning from combat zones or other troubling situations, is a devastating mental condition with tremendous individual and societal costs. About 12 percent of Gulf War veterans and 15 percent of Vietnam veterans suffer from PTSD according to a 2019 article in <em>U.S. News and World Report</em>. While recovery is possible, it requires intensive therapeutic engagement that less than 50 percent of affected veterans actually seek out.</p><p><a href="https://www.nsf.gov/awardsearch/showAward?AWD_ID=1915504&amp;HistoricalAwards=false" target="_blank">A new four-year, $1.2 million grant</a> from the <a href="http://nsf.gov" target="_blank">National Science Foundation</a> to a team of researchers from Georgia Tech, Emory University, and the University of Rochester will help bridge this gap by funding the development of a computational assessment toolkit for PTSD patients and clinicians, called PE Collective Sensing System (PECSS). 
PECSS, which will sit atop the PE Coach App developed by the Veterans Health Administration and the Department of Defense, will aim to improve current treatment practices and increase the number of veterans who seek treatment.</p><p>&ldquo;PECSS will allow clinicians to use automated predictions to deliver better therapeutic treatment and individualized feedback, and patients to better understand the progress they are making and how to improve their exposure exercises,&rdquo; said <strong>Rosa Arriaga</strong>, a Senior Research Scientist in <a href="http://ic.gatech.edu" target="_blank">Georgia Tech&rsquo;s School of Interactive Computing</a> and the principal investigator on the project.</p><h3><a href="https://podcasts.apple.com/us/podcast/is-technology-game-changer-for-care-ptsd-patients-rosa/id1435564422?i=1000451292353" target="_blank"><strong>[THE INTERACTION HOUR PODCAST: IS TECHNOLOGY A GAME CHANGER FOR CARE OF PTSD PATIENTS?, FEATURING DR. ROSA ARRIAGA]</strong></a></h3><p>Currently, the most common and empirically-supported treatment for PTSD is &ldquo;prolonged exposure&rdquo; (PE) therapy. The treatment consists of imaginal exposure, where patients imagine and narrate their traumatic event, and in-vivo exposure to real-world stimuli in safe but challenging environments.</p><p>There are, however, challenges in data collection and extraction, which are often subjective and narrow. This project will address those challenges by developing a novel, user-tailored sensing system that can record and transfer information from exercises, continuously monitoring patients and clinicians.</p><p>&ldquo;Clinicians are in urgent need of methods, tools, and data to efficiently track, assess, and respond to mental health needs throughout the treatment process,&rdquo; Arriaga said.</p><p>The project will involve insights from experts in multiple fields &ndash; ubiquitous computing, human-computer interaction, applied machine learning, psychology, and more.
When complete, the system will be deployed at the <a href="https://www.emoryhealthcare.org/centers-programs/veterans-program/index.html" target="_blank">Emory Healthcare Veterans Program</a>, a nationally-renowned initiative that treats members of the military suffering from PTSD.</p><p>&ldquo;The Trauma and Anxiety Recovery Program that includes the Emory Veterans Program has been on the cutting edge in using technology to advance the care of people suffering with anxiety since it was founded by Dr. <strong>Barbara Rothbaum</strong> over 25 years ago,&rdquo; said <strong>Sheila Rauch</strong>, an associate professor in Emory&rsquo;s Department of Psychiatry and Behavioral Sciences and a co-principal investigator on the project.</p><p>&ldquo;As a team of international experts in PTSD treatment, we integrate technology to speed response to treatment and help patients to visualize the changes as they respond to care. Our aim is to use this real-time data to fine-tune practice for the individual patient and learn across patients how we can improve care.&rdquo;</p><p>&ldquo;Mental health clinicians and their patients are in urgent need of 21<sup>st</sup>-century methods, tools, and objective data to optimize therapy,&rdquo; added Emory Assistant Professor <strong>Andrew Sherrill</strong>, another co-principal investigator.
&ldquo;This partnership will bring together innovators in HCI and evidence-based psychotherapy to transform mental health care for PTSD patients.&rdquo;</p><p>This grant is provided under the <a href="https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=504739" target="_blank">NSF Smart and Connected Health Funding Program</a> in its <a href="https://www.nsf.gov/div/index.jsp?div=IIS" target="_blank">Division of Information and Intelligent Systems</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1570034661</created>  <gmt_created>2019-10-02 16:44:21</gmt_created>  <changed>1570193777</changed>  <gmt_changed>2019-10-04 12:56:17</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The grant -- which includes Georgia Tech, Emory, and the University of Rochester -- will fund the development of a computational assessment toolkit for patients and clinicians.]]></teaser>  <type>news</type>  <sentence><![CDATA[The grant -- which includes Georgia Tech, Emory, and the University of Rochester -- will fund the development of a computational assessment toolkit for patients and clinicians.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-10-02T00:00:00-04:00</dateline>  <iso_dateline>2019-10-02T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-10-02 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>627021</item>      </media>  <hg_media>          <item>          <nid>627021</nid>          <type>image</type>          <title><![CDATA[Veteran battling PTSD]]></title>          <body><![CDATA[]]></body>                      
<image_name><![CDATA[Battling_PTSD_(4949341330).jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Battling_PTSD_%284949341330%29.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Battling_PTSD_%284949341330%29.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Battling_PTSD_%25284949341330%2529.jpg?itok=fGEXtzTk]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Veteran battling PTSD with head in hands]]></image_alt>                    <created>1570033840</created>          <gmt_created>2019-10-02 16:30:40</gmt_created>          <changed>1570033840</changed>          <gmt_changed>2019-10-02 16:30:40</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/academics/human-centered-computing-phd-program]]></url>        <title><![CDATA[Human-Centered Computing at Georgia Tech]]></title>      </link>          <link>        <url><![CDATA[https://podcasts.apple.com/us/podcast/is-technology-game-changer-for-care-ptsd-patients-rosa/id1435564422?i=1000451292353]]></url>        <title><![CDATA[The Interaction Hour: Is Technology a Game Changer for Care of PTSD Patients?]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181216"><![CDATA[cc-research]]></keyword>          <keyword tid="181214"><![CDATA[ic-hcc]]></keyword>          <keyword 
tid="182582"><![CDATA[ic-ai-ml]]></keyword>          <keyword tid="181949"><![CDATA[PTSD]]></keyword>          <keyword tid="55581"><![CDATA[military veterans]]></keyword>          <keyword tid="10681"><![CDATA[veterans]]></keyword>          <keyword tid="11178"><![CDATA[Rosa Arriaga]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71891"><![CDATA[Health and Medicine]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="626926">  <title><![CDATA[Cleaning Up the Community: Shagun Jhaver Explores Impact of Content Moderation Practices on Social Media]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Online communities like Reddit or Twitter act like town halls, where opinions are shared and everyone, in theory, has a voice. Only, it doesn&rsquo;t always work like that. What was once optimistically viewed as a solution to public discourse, offering promises of open and logical discussions where anyone with a keyboard and an internet connection could speak their piece, has instead become a bit of a Wild West. Message boards have degraded into sources of harassment, misinformation, radicalization, and more.</p><p>Now, the largely techno-utopian view has been adjusted, and moderation of content has become the norm. The question is: how can you moderate, while also maintaining the promise of free speech? Also, how can you avoid discouraging posters whose content was moderated or removed while encouraging them to remain a part of the public discourse?</p><p>These are just a few of the questions being posed and pursued by <strong>Shagun Jhaver</strong>, a Ph.D. 
student in <a href="http://gatech.edu" target="_blank">Georgia Tech</a>&rsquo;s <a href="http://ic.gatech.edu" target="_blank">School of Interactive Computing</a> (IC), whose papers at the upcoming <a href="http://cscw.acm.org/2019/" target="_blank">Computer-Supported Cooperative Work and Social Computing</a> (CSCW) conference provide some context and, perhaps, solutions.</p><h3>Fairness, accountability, and transparency</h3><p>Jhaver is a computer scientist at heart. He earned his bachelor&rsquo;s degree in electrical engineering in India and then studied computer science for his master&rsquo;s at the University of Texas at Dallas. Like most researchers in IC, though, he focuses primarily on humans.</p><p>&ldquo;One of the main attractions to our School was that, although it is a computer science school, I am able to do interviews and surveys with people,&rdquo; Jhaver explained. &ldquo;What good are technological developments if they don&rsquo;t work for humans, if they don&rsquo;t improve society? In order to understand the interactions between technology and society, I wanted to develop a mixed-methods background, and the resources and faculty here are perfect for that.&rdquo;</p><p>One of his first projects as a graduate student was investigating communication on social media around the Black Lives Matter movement.</p><p>&ldquo;I wanted to understand the emergent collective participation around this movement and what people were feeling on the ground in the moment,&rdquo; he said. &ldquo;That&rsquo;s how I entered this area of social computing.&rdquo;</p><p>Social computing is an area of computer science that focuses on the intersection between social behavior and computational systems. Integral to Jhaver&rsquo;s study was how social media and the data gathered within those systems reflected what was happening within society as a whole.</p><p>There may be no better reflection of this phenomenon than Reddit and Twitter, two communities his research has examined. 
At CSCW, he&rsquo;ll present a handful of studies that have examined the topic of content moderation. One of the papers, titled <a href="https://medium.com/acm-cscw/does-transparency-in-moderation-really-matter-b86bab9b4810" target="_blank"><em>Does Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit</em></a>, earned a best paper award. Another, titled <a href="https://medium.com/acm-cscw/did-you-suspect-the-post-would-be-removed-1dd1839277cb" target="_blank"><em>Did You Suspect the Post Would be Removed?: Understanding User Reactions to Content Removals on Reddit</em></a>, earned an honorable mention.</p><p>How, he wonders, do you develop good moderation practices that enforce community rules while also maintaining the free expression of ideas? And, what practices improve how posters feel about their moderated content and encourage them to continue participating in these forums?</p><p>&ldquo;Content moderation is more nuanced than just editing and removing content,&rdquo; Jhaver said. &ldquo;It&rsquo;s about the overall experience of the user and the community and how they interact.&rdquo;</p><p>His research came to a few conclusions:</p><p>One, fairness matters; two, accountability is important; three, the platforms should be transparent in their decisions. From the perspective of end users, that means that rules are clear and easy to follow, and when the post is removed they are notified and given a clear explanation of why. If they appeal, they are given an appropriate response.</p><p>But there are multiple stakeholders involved in the exchange, and who determines what is fair?</p><p>&ldquo;These Reddit moderators are volunteers,&rdquo; Jhaver said. &ldquo;Is it fair for us to expect them to take on these increased responsibilities for providing explanations?&rdquo;</p><p>In other words, these issues are much more nuanced than they would seem to many casual participants. 
<strong>Amy Bruckman</strong>, a professor in IC and Jhaver&rsquo;s co-advisor (with IC adjunct faculty <strong>Eric Gilbert</strong>), said she can&rsquo;t think of other research that has examined this aspect of social communities.</p><p>&ldquo;I don&rsquo;t think it has been studied &ndash; okay, your content was just removed, so how do you feel about that?&rdquo; she said. &ldquo;Taking that other side of it is unique.&rdquo;</p><h3>Giving everyone a voice</h3><p>So, why do these explanations even matter? Why not just remove bad content and move on?</p><p>&ldquo;But free speech is interesting,&rdquo; Jhaver said. &ldquo;There&rsquo;s this dichotomy where if you are free to harass certain people over their race, gender, or other aspects of identity, then you are preventing them from having the voice to speak their truth. So, you are infringing on their freedom of speech. That&rsquo;s why there&rsquo;s this need.&rdquo;</p><p>Whatever the case, these issues are not going away. Methods of communication will continue to change over time, particularly as technology continues to advance. But, Jhaver said, these conversations aren&rsquo;t anything new either.</p><p>&ldquo;These are age-old problems,&rdquo; he said. &ldquo;Harassment, free speech, suppression of free speech. These topics have always been discussed, but the internet has changed the way we see them and changed how they manifest themselves.</p><p>&ldquo;I want my research to help minorities and other vulnerable groups have a greater voice in society,&rdquo; Jhaver said. 
&ldquo;I want to contribute to the design of more equitable, inclusive, and participatory technologies.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1569872091</created>  <gmt_created>2019-09-30 19:34:51</gmt_created>  <changed>1569872091</changed>  <gmt_changed>2019-09-30 19:34:51</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Online communities, once thought to be places where everyone had a voice, have instead become a Wild West. Understanding the impact of content moderation on user behavior could improve the free flow of ideas.]]></teaser>  <type>news</type>  <sentence><![CDATA[Online communities, once thought to be places where everyone had a voice, have instead become a Wild West. Understanding the impact of content moderation on user behavior could improve the free flow of ideas.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-09-30T00:00:00-04:00</dateline>  <iso_dateline>2019-09-30T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-09-30 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>626923</item>      </media>  <hg_media>          <item>          <nid>626923</nid>          <type>image</type>          <title><![CDATA[Shagun Jhaver]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Shagun_Jhaver.JPG]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Shagun_Jhaver.JPG]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Shagun_Jhaver.JPG]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Shagun_Jhaver.JPG?itok=mvBoiso5]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Shagun Jhaver]]></image_alt>                    <created>1569871871</created>          <gmt_created>2019-09-30 19:31:11</gmt_created>          <changed>1569871871</changed>          <gmt_changed>2019-09-30 19:31:11</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/content/human-centered-computing-cognitive-science]]></url>        <title><![CDATA[Human-Centered Computing at Georgia Tech]]></title>      </link>          <link>        <url><![CDATA[https://www.ic.gatech.edu/content/social-computing-computational-journalism]]></url>        <title><![CDATA[Social Computing at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182508"><![CDATA[cc-research; ic-hcc; ic-social-computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="625602">  <title><![CDATA[The Google Internship That Almost Wasn’t]]></title>  <uid>27592</uid>  <body><![CDATA[<p><strong>Sam Harvey</strong>, a master of science student in human-computer interaction, was presented with possibly a once-in-a-lifetime opportunity to work on a major product at Google, but one 
thing stood in his way &ndash; he didn&rsquo;t have an online portfolio the recruiter was searching for.</p><p>&ldquo;When I got that email, I responded within 30 milliseconds and said I&rsquo;d have a website up in two days,&rdquo; recalls Harvey, who is now well into a four-month internship at Google&rsquo;s Switzerland campus in Zurich.</p><p>Making good on his promise helped Harvey secure a spot in the interview process, which spanned several early-morning video calls to Zurich (six hours ahead of Atlanta) and two rounds of vetting over about 45 days.</p><p>Harvey is now experiencing firsthand the culture that such a selective hiring process helps cultivate. He jokes about trying not to become too &ldquo;googley&rdquo; &ndash; an enigmatic term that hints at Google&rsquo;s sometimes utopian ideals &ndash; but he was soon struck with the weight of the responsibility given to him.</p><p>&ldquo;It&rsquo;s incredibly humbling to be working on Google Flights,&rdquo; Harvey says, referring to the product that has been his focus. The tool is part of Google Travel, designed to be a comprehensive resource for planning trips.</p><p>&ldquo;Releasing a bad product impacts a lot of people. When designing, I might think of grandparents who want to fly out to visit their grandchildren,&rdquo; Harvey says. &ldquo;How do I help them?&rdquo;</p><p>&ldquo;What I&rsquo;m designing can either make the experience easier or harder. Imagine making something that will ruin the day for a million grandmothers. I don&rsquo;t want to do that.&rdquo;</p><p>Harvey&rsquo;s internship as a UX designer &ndash; short for user experience designer &ndash; is what Harvey himself wants to make of it. 
The Google culture that includes 24/7 free food, nap rooms, and flex schedules (just a few of the perks) is designed to &ldquo;let you maximize your full potential,&rdquo; as Harvey puts it.</p><p>UX designers, as the name implies, often focus on how to design an experience that differentiates a product from its competitors. Harvey approaches his work by starting from a place of empathy and figuring out how he would respond to a product. He then evaluates evidence-based designs to quantify what works, and finally, he sets out on the long, chaotic journey to build something truly special.</p><p>There are no shortcuts in the process, especially not at Google. A glimpse into Harvey&rsquo;s experience shows this.</p><p>&ldquo;I listen &ndash; and I listen hard &ndash; to the user researchers and product experts. I turn off the part of my brain where I want to talk over people because I think I have something brilliant to say. I switch off my ego.&rdquo;</p><p>Analyzing Google&rsquo;s unmatched volume of user data is helping Harvey to identify some of the most vexing problems in online flight planning. He&rsquo;ll let the information he gathers marinate in his brain for a long time, then start sketching out on paper as many solutions as possible &ndash; literally anything that might solve the given challenge.</p><p>&ldquo;Three percent of the ideas are gonna make it out of the furnace,&rdquo; he jokes, referring to the process where team members kindly discard the concepts that don&rsquo;t pass muster.</p><p>What survives undergoes even closer scrutiny, and the intensity of the process leaves only those product designs that might work well as part of a sprawling Google ecosystem operating around the clock.</p><p>Harvey still marvels that a team of some of the most talented people he&rsquo;s ever met &ndash; working together on the same challenges &ndash; doesn&rsquo;t create a toxic culture of competing alphas. 
Rather, it&rsquo;s the opposite.</p><p>&ldquo;This place is conducive to making some awesome stuff and not making people feel small,&rdquo; he says. &ldquo;My favorite part of being here is being pushed every day and striving to be a contributing member. I&rsquo;d recommend coming to Google just for the growth potential.&rdquo;</p><p>Harvey credits Georgia Tech&rsquo;s MS-HCI program and the guidance of program director Dick Henneman with preparing him for Google. Taking the summer job delayed his graduation, but Harvey is OK with that, knowing that this rare opportunity will shape the rest of his career.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1567616681</created>  <gmt_created>2019-09-04 17:04:41</gmt_created>  <changed>1567691722</changed>  <gmt_changed>2019-09-05 13:55:22</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Sam Harvey, MS student in human-computer interaction, is working on Google Flights and making sure his design decisions don't ruin your travel plans, or those for a million or so grandmothers. ]]></teaser>  <type>news</type>  <sentence><![CDATA[Sam Harvey, MS student in human-computer interaction, is working on Google Flights and making sure his design decisions don't ruin your travel plans, or those for a million or so grandmothers. 
]]></sentence>  <summary><![CDATA[<p>Sam Harvey, MS student in human-computer interaction, is working on Google Flights and making sure his design decisions don&#39;t ruin your travel plans, or those for a million or so grandmothers.&nbsp;</p>]]></summary>  <dateline>2019-09-05T00:00:00-04:00</dateline>  <iso_dateline>2019-09-05T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-09-05 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[MS-HCI Student Lands at the Search Giant and Learns That Only the Best Product Ideas Survive]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Manager<br /><em>GVU Center and College of Computing</em></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>625651</item>      </media>  <hg_media>          <item>          <nid>625651</nid>          <type>image</type>          <title><![CDATA[Sam Harvey]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Sam_Harvey_zurich.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Sam_Harvey_zurich.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Sam_Harvey_zurich.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Sam_Harvey_zurich.jpg?itok=Y0HuDCZH]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1567691094</created>          <gmt_created>2019-09-05 13:44:54</gmt_created>          <changed>1567691094</changed>          <gmt_changed>2019-09-05 13:44:54</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group 
id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="624901">  <title><![CDATA[Researchers Use Social Media to Help Measure Outcomes of Psychiatric Medication]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Social media posts are becoming a vital tool for assessing the effects of psychiatric medication, according to a new study from researchers in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu" target="_blank">School of Interactive Computing</a> (IC). The approach offers clinicians a more effective method to measure mental health outcomes in a notoriously imprecise space.</p><p>In treating mental illness, clinicians are often forced into a trial-and-error approach to prescribing medication to patients. Each patient may react differently &ndash; oftentimes with negative outcomes &ndash; to drugs that have been matched with conditions based on incomplete and potentially biased data from clinical trials.</p><p>&ldquo;In most non-mental health treatment where particular symptoms like a fever or chronic pain might indicate a specific physical condition, there exists a more definitive matching approach to prescription,&rdquo; said <strong>Koustuv Saha</strong>, an IC Ph.D. student who led the study. &ldquo;In psychiatric care, that matching approach is unknown.&rdquo;</p><p>Patients taking the wrong medication could experience increased depression or anxiety, suicidal ideation, or other symptoms like fluctuations in sleep and weight. 
In many cases, they are forced to return to their clinician for a change in medication or, in worse cases, may lose trust in the medication entirely and stop using it.</p><p>&ldquo;Considering that five of the top 50 drugs sold in the United States are psychiatric medications, it&rsquo;s extremely important to understand how they actually work on individuals,&rdquo; Saha said.</p><p>In the past, clinical trials have taken a disease-centered approach that attempts to match specific medications to psychiatric symptoms, neglecting the psychoactive effects of the drug. Trials are conducted for smaller cohorts over shorter periods of time, eliminate some individuals who experience more extreme symptoms, and are often biased, being conducted by the drug companies themselves.</p><p>Adopting a &ldquo;patient-centered&rdquo; model that considers individual outcomes for patients using a specific medication, this study leveraged longitudinal and large-scale social media data to achieve a form of digital-based matching of patients to medications.</p><p>The researchers collected a list of medications approved by the Food and Drug Administration, then gathered Tweets that mentioned these medications between 2015 and 2016. From that, they collected over 600,000 Tweets that identified users of a specific medication. 
Interestingly enough, their data matched the top four prescription psychiatric medications in that period: Sertraline (Zoloft), Escitalopram (Lexapro), Fluoxetine (Prozac), and Duloxetine (Cymbalta).</p><p>Using a control group of random Twitter users who did not take the medication and building on prior work showing that language in social media posts can predict mental health conditions, researchers could match specific medications with their outcomes, positive or negative, after use.</p><p>The findings indicated that Selective Serotonin Reuptake Inhibitors (Sertraline, Escitalopram, Fluoxetine) &ndash; three of the most popular prescription medications &ndash; are actually associated with worsening symptoms. Tricyclic Antidepressants like Dosulepin, Imipramine, and Clomipramine, by comparison, were more associated with improving conditions.</p><p>&ldquo;Clinically, our findings reveal signals of the most common effects of the psychiatric medications over a large population, with the potential for improved characterization of their occurrence,&rdquo; Saha writes in the paper. &ldquo;Technologically, we show the potential of novel technologies in digital therapeutics.&rdquo;</p><p>This research, Saha said, exists as a proof of concept to show levels of a specific condition &ndash; before and after medication use &ndash; using digital data. 
He stressed it is not a replacement for clinical care, only a way to help augment treatment using additional available data.</p><p>The work was presented at the <a href="https://www.icwsm.org/2019/" target="_blank">13<sup>th</sup> International AAAI Conference on Web and Social Media</a> in a paper titled <em>A Social Media Study on the Effects of Psychiatric Medication Use</em> (Koustuv Saha, <strong>Benjamin Sugar</strong>, <strong>John Torous</strong>, <strong>Bruno Abrahao</strong>, <strong>Emre Kiciman</strong>, <strong>Munmun De Choudhury</strong>). It was awarded Outstanding Study Design Paper at the conference. The work was funded in part by a grant from the <a href="http://www.nih.gov" target="_blank">National Institutes of Health</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1566406737</created>  <gmt_created>2019-08-21 16:58:57</gmt_created>  <changed>1566406737</changed>  <gmt_changed>2019-08-21 16:58:57</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[This research exists as a proof of concept to show levels of a specific condition – before and after medication use – using digital data.]]></teaser>  <type>news</type>  <sentence><![CDATA[This research exists as a proof of concept to show levels of a specific condition – before and after medication use – using digital data.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-08-21T00:00:00-04:00</dateline>  <iso_dateline>2019-08-21T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-08-21 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  
<boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>624519</item>      </media>  <hg_media>          <item>          <nid>624519</nid>          <type>image</type>          <title><![CDATA[Social Media Logos]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Social Media logos.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Social%20Media%20logos.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Social%20Media%20logos.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Social%2520Media%2520logos.jpg?itok=9S-0Fb6s]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A keyboard featuring different social media logos]]></image_alt>                    <created>1565805908</created>          <gmt_created>2019-08-14 18:05:08</gmt_created>          <changed>1565805908</changed>          <gmt_changed>2019-08-14 18:05:08</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/content/social-computing-computational-journalism]]></url>        <title><![CDATA[Social Computing Research at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182015"><![CDATA[cc-research; ic-ai-ml; ic-hcc; ic-social-computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  
<news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="624538">  <title><![CDATA[Researchers Use Social Media to Help Measure Outcomes of Psychiatric Medication]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Social media posts are becoming a vital tool for assessing the effects of psychiatric medication, according to a new study from researchers in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu" target="_blank">School of Interactive Computing</a> (IC). The approach offers clinicians a more effective method to measure mental health outcomes in a notoriously imprecise space.</p><p>In treating mental illness, clinicians are often forced into a trial-and-error approach to prescribing medication to patients. Each patient may react differently &ndash; oftentimes with negative outcomes &ndash; to drugs that have been matched with conditions based on incomplete and potentially biased data from clinical trials.</p><p>&ldquo;In most non-mental health treatment where particular symptoms like a fever or chronic pain might indicate a specific physical condition, there exists a more definitive matching approach to prescription,&rdquo; said <strong>Koustuv Saha</strong>, an IC Ph.D. student who led the study. &ldquo;In psychiatric care, that matching approach is unknown.&rdquo;</p><p>Patients taking the wrong medication could experience increased depression or anxiety, suicidal ideation, or other symptoms like fluctuations in sleep and weight. 
In many cases, they are forced to return to their clinician for a change in medication or, in worse cases, may lose trust in the medication entirely and stop using it.</p><p>&ldquo;Considering that five of the top 50 drugs sold in the United States are psychiatric medications, it&rsquo;s extremely important to understand how they actually work on individuals,&rdquo; Saha said.</p><p>In the past, clinical trials have taken a disease-centered approach that attempts to match specific medications to psychiatric symptoms, neglecting the psychoactive effects of the drug. Trials are conducted for smaller cohorts over shorter periods of time, eliminate some individuals who experience more extreme symptoms, and are often biased, being conducted by the drug companies themselves.</p><p>Adopting a &ldquo;patient-centered&rdquo; model that considers individual outcomes for patients using a specific medication, this study leveraged longitudinal and large-scale social media data to achieve a form of digital-based matching of patients to medications.</p><p>The researchers collected a list of medications approved by the Food and Drug Administration, then gathered Tweets that mentioned these medications between 2015 and 2016. From that, they collected over 600,000 Tweets that identified users of a specific medication. 
Interestingly enough, their data matched the top four prescription psychiatric medications in that period: Sertraline (Zoloft), Escitalopram (Lexapro), Fluoxetine (Prozac), and Duloxetine (Cymbalta).</p><p>Using a control group of random Twitter users who did not take the medication and building on prior work showing that language in social media posts can predict mental health conditions, researchers could match specific medications with their outcomes, positive or negative, after use.</p><p>The findings indicated that Selective Serotonin Reuptake Inhibitors (Sertraline, Escitalopram, Fluoxetine) &ndash; three of the most popular prescription medications &ndash; are actually associated with worsening symptoms. Tricyclic Antidepressants like Dosulepin, Imipramine, and Clomipramine, by comparison, were more associated with improving conditions.</p><p>&ldquo;Clinically, our findings reveal signals of the most common effects of the psychiatric medications over a large population, with the potential for improved characterization of their occurrence,&rdquo; Saha writes in the paper. &ldquo;Technologically, we show the potential of novel technologies in digital therapeutics.&rdquo;</p><p>This research, Saha said, exists as a proof of concept to show levels of a specific condition &ndash; before and after medication use &ndash; using digital data. 
He stressed it is not a replacement for clinical care, only a way to help augment treatment using additional available data.</p><p>The work was presented at the <a href="https://www.icwsm.org/2019/" target="_blank">13<sup>th</sup> International AAAI Conference on Web and Social Media</a> in a paper titled <em>A Social Media Study on the Effects of Psychiatric Medication Use</em> (Koustuv Saha, <strong>Benjamin Sugar</strong>, <strong>John Torous</strong>, <strong>Bruno Abrahao</strong>, <strong>Emre Kiciman</strong>, <strong>Munmun De Choudhury</strong>). It was awarded Outstanding Study Design Paper at the conference. The work was funded in part by a grant from the <a href="http://www.nih.gov" target="_blank">National Institutes of Health</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1565810173</created>  <gmt_created>2019-08-14 19:16:13</gmt_created>  <changed>1566248988</changed>  <gmt_changed>2019-08-19 21:09:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[This research exists as a proof of concept to show levels of a specific condition – before and after medication use – using digital data.]]></teaser>  <type>news</type>  <sentence><![CDATA[This research exists as a proof of concept to show levels of a specific condition – before and after medication use – using digital data.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-08-14T00:00:00-04:00</dateline>  <iso_dateline>2019-08-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-08-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  
<boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>624519</item>      </media>  <hg_media>          <item>          <nid>624519</nid>          <type>image</type>          <title><![CDATA[Social Media Logos]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Social Media logos.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Social%20Media%20logos.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Social%20Media%20logos.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Social%2520Media%2520logos.jpg?itok=9S-0Fb6s]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A keyboard featuring different social media logos]]></image_alt>                    <created>1565805908</created>          <gmt_created>2019-08-14 18:05:08</gmt_created>          <changed>1565805908</changed>          <gmt_changed>2019-08-14 18:05:08</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/content/social-computing-computational-journalism]]></url>        <title><![CDATA[Social Computing Research at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="182015"><![CDATA[cc-research; ic-ai-ml; ic-hcc; ic-social-computing]]></keyword>      </keywords>  <core_research_areas>          <term 
tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="624130">  <title><![CDATA['MacGyver'-like Robot Can Build Own Tools By Assessing Form, Function of Supplies]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Thanks to new technology that enables them to create simple tools, robots may be on the verge of their own version of the Stone Age.</p><p>Using a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.</p><p>The breakthrough comes from Georgia Tech&rsquo;s <a href="http://www.rail.gatech.edu/">Robot Autonomy and Interactive Learning</a> (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous &ndash; and potentially life-threatening &ndash; environments.</p><p>The concept may sound familiar. It&rsquo;s called &ldquo;MacGyvering,&rdquo; based on the name of a 1980s &mdash; and recently rebooted &mdash; television series. In the series, the title character is known for his unconventional problem-solving ability using whatever resources were available to him.</p><p>For years, computer scientists and others have been working to provide robots with similar capabilities. In their new robot-MacGyvering work, RAIL lab researchers led by Associate Professor <strong>Sonia Chernova</strong> took as their starting point a robotics technique previously developed by former Georgia Tech Professor <strong>Mike Stilman</strong>.</p><p>In this latest work, a robot trained using the team&rsquo;s novel approach is given a set of parts to choose from and told to make a specific tool. 
Much like its human counterparts, the robot first examines the shape of each part and how one might be attached to another.</p><p>Using machine learning, the robot is trained to match form to function &ndash; which object shapes facilitate a particular outcome &ndash; from numerous examples of everyday objects. For example, by learning that the concavity of bowls enables them to hold liquids, it makes use of this knowledge when constructing a spoon. Similarly, the robot was taught how to attach objects together from examples of materials that could be pierced or grasped.</p><p>In the study, researchers successfully created hammers, spatulas, scoops, squeegees, and screwdrivers.</p><p>&ldquo;The screwdriver was particularly interesting because the robot combined pliers and a coin,&rdquo; said <strong>Lakshmi Nair</strong>, a Ph.D. student in the <a href="http://www.ic.gatech.edu">School of Interactive Computing</a> and one of the researchers on the project. &ldquo;It reasoned that the pliers were able to grasp something and said that the coin sort of matched the head of a screwdriver. Put them together, and it creates an effective tool.&rdquo;</p><p>Currently, the robot&rsquo;s reasoning is limited to shape and attachment. It cannot yet effectively reason about particular material properties, a crucial step in advancing to a real-world scenario.</p><p><a href="https://www.ic.gatech.edu/news/623044/robot-able-instantly-identify-household-materials-using-near-infrared-light"><strong>[RELATED: Robot Able to Instantly Identify Household Materials Using Near-Infrared Light]</strong></a></p><p>&ldquo;People reason that hammers are sturdy and strong, so you wouldn&rsquo;t make a hammer out of foam blocks,&rdquo; Nair said. &ldquo;We want to reach that level of reasoning in our work, which is something we&rsquo;re working on now.&rdquo;</p><p>The inspiration for the work comes from the popular story of Apollo 13, the doomed seventh crewed flight of the Apollo space program. 
After an oxygen tank in the ship&rsquo;s service module exploded two days into the mission, crew members were forced to make makeshift modifications to the carbon dioxide removal system.</p><p>Despite a dangerously tight window of time and extremely high tension among all aboard and at mission control, the rescue proved successful. Nair and collaborators hope this research will prove foundational to future robotics technology that could reason faster and without the burden of stress.</p><p>&ldquo;They were able to make this filter, but the solution took a long time to come up with,&rdquo; Nair said. &ldquo;We want to make robots that can assist humans in these kinds of scenarios to take the pressure off of them to come up with innovative solutions and potentially save their lives.&rdquo;</p><p>This work was presented at the 2019 Robotics: Science and Systems conference in a paper titled <a href="http://www.roboticsproceedings.org/rss15/p09.pdf"><em>Autonomous Tool Construction Using Part Shape and Attachment Prediction </em></a>(Lakshmi Nair, <strong>Nithin Shrivatsav</strong>, <strong>Zackory Erickson</strong>, Sonia Chernova). 
It is supported in part by grants from the <a href="https://www.nsf.gov/">National Science Foundation</a> and the <a href="https://www.onr.navy.mil/">Office of Naval Research</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1565211849</created>  <gmt_created>2019-08-07 21:04:09</gmt_created>  <changed>1565640505</changed>  <gmt_changed>2019-08-12 20:08:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The breakthrough is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous and potentially life-threatening environments.]]></teaser>  <type>news</type>  <sentence><![CDATA[The breakthrough is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous and potentially life-threatening environments.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-08-07T00:00:00-04:00</dateline>  <iso_dateline>2019-08-07T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-08-07 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>624128</item>      </media>  <hg_media>          <item>          <nid>624128</nid>          <type>image</type>          <title><![CDATA[Robot MacGyvering - Lakshmi Nair 1]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Macgyvering MAIN.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Macgyvering%20MAIN.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Macgyvering%20MAIN.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Macgyvering%2520MAIN.jpg?itok=olNRLfjP]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Lakshmi Nair stands next to a robotic arm with tool parts on a table]]></image_alt>                    <created>1565210646</created>          <gmt_created>2019-08-07 20:44:06</gmt_created>          <changed>1565210646</changed>          <gmt_changed>2019-08-07 20:44:06</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://rail.gatech.edu]]></url>        <title><![CDATA[Robot Autonomy and Interactive Learning Lab]]></title>      </link>          <link>        <url><![CDATA[https://www.ic.gatech.edu/content/robotics-computational-perception]]></url>        <title><![CDATA[Robotics and Computational Perception Research at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181920"><![CDATA[cc-research; ic-ai-ml; ic-robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  
<userdata><![CDATA[]]></userdata></node><node id="624042">  <title><![CDATA[Civic Data Science Pairs with Smart Cities for Sixth Summer]]></title>  <uid>34541</uid>  <body><![CDATA[<p>Students presented data science solutions for problems like climate change and traffic at the <a href="https://civicdatascience.gatech.edu/" target="_blank">Civic Data Science</a> (CDS) finale on July 28. This was the first year the National Science Foundation&ndash;funded summer program partnered with the <a href="https://smartcities.gatech.edu/" target="_blank">Georgia Smart Communities Challenge</a>.</p><p>Since 2013, undergraduates from colleges across the country have come to campus for the 10-week program, where they learn how to use data science to tackle civic problems. This year, CDS paired with Smart Cities&rsquo; <a href="https://www.news.gatech.edu/2019/06/18/georgia-smart-communities-challenge-selects-four-new-community-projects" target="_blank">Smart Communities</a>, an initiative that integrates technology-based research with a community&rsquo;s goals.</p><p>This year&rsquo;s CDS projects were:</p><ul><li><a href="http://smartcities.ipat.gatech.edu/chatham-county" target="_blank">Smart Sea Level Tools for Emergency Planning and Response</a>, in which students found a better way to conduct maintenance for 30 smart sea level sensors that are part of a program run by School of Computer Science and <a href="http://ipat.gatech.edu/" target="_blank">Institute for People and Technology Senior Research Scientist</a> <a href="https://www.cc.gatech.edu/fac/Russell.Clark/" target="_blank"><strong>Russell Clark</strong></a> in Savannah, Georgia.</li><li><a href="http://smartcities.ipat.gatech.edu/city-albany" target="_blank">Albany Housing Data Initiative</a>, where students cleaned city data from disparate sources and created a database to help the city of Albany, Georgia, understand the effect of programs to reduce energy costs.</li><li><a 
href="http://smartcities.ipat.gatech.edu/gwinnett-county" target="_blank">Connected Vehicle Technology Master Plan</a>, in which students analyzed data to better handle the flow of traffic in Gwinnett County for emergency vehicles.</li></ul><p>The program&rsquo;s co-director and SCS Professor <a href="https://www.cc.gatech.edu/~ewz/Welcome.html" target="_blank"><strong>Ellen Zegura</strong></a> believes students connected to these projects more because of their real-world application.</p><p>&ldquo;It&rsquo;s a pleasure to watch the work progress from the early first days to getting to see how much you all have learned and how much you all understand the context of the projects you&rsquo;re doing,&rdquo; she said during the finale ceremony held in the Technology Square Research Building.</p><p>&ldquo;It&rsquo;s not just that you built a database, but here&rsquo;s what a sensor looks like and here&rsquo;s how it can go wrong.&rdquo;</p><p>The students agree. <strong><a href="https://www.linkedin.com/in/angelalau15/" target="_blank">Angela Lau</a></strong>, a rising sophomore at Cornell University, wanted an internship that could help the community.</p><p>&ldquo;I was really interested in this program because of the local applicability of the projects,&rdquo; she said. &ldquo;It surprised me how real it was and how we could help a community over a few weeks.&rdquo;</p><p>Working with real data also presented unique learning experiences that students wouldn&rsquo;t normally encounter in a classroom setting.</p><p>&ldquo;There were a lot of challenges working with real data,&rdquo; said <a href="https://www.linkedin.com/in/kutub-gandhi-83439514b/" target="_blank"><strong>Kutub Gandhi</strong></a>, a rising senior at Rice University. 
&ldquo;Our entire project was figuring out what was wrong with the data collected from sea level sensors.&rdquo;</p><p>For many students, this was their first time learning data science skills that they can now use throughout their career.</p><p>&ldquo;I had heard of data visualization but didn&rsquo;t know much about it,&rdquo; said <a href="https://www.linkedin.com/in/david-s-li/" target="_blank"><strong>David Li</strong></a>, a rising senior at Stony Brook University. &ldquo;But by the end I realized, &lsquo;Wow I learned this and I never knew I could do this before!&rsquo;&rdquo;</p>]]></body>  <author>Tess Malone</author>  <status>1</status>  <created>1565114258</created>  <gmt_created>2019-08-06 17:57:38</gmt_created>  <changed>1565632750</changed>  <gmt_changed>2019-08-12 17:59:10</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Students presented data science solutions for problems like climate change and traffic at the Civic Data Science (CDS) finale on July 28. ]]></teaser>  <type>news</type>  <sentence><![CDATA[Students presented data science solutions for problems like climate change and traffic at the Civic Data Science (CDS) finale on July 28. 
]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-08-06T00:00:00-04:00</dateline>  <iso_dateline>2019-08-06T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-08-06 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[tess.malone@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Tess Malone, Communications Officer</p><p><a href="mailto:tess.malone@cc.gatech.edu">tess.malone@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>624043</item>      </media>  <hg_media>          <item>          <nid>624043</nid>          <type>image</type>          <title><![CDATA[CDS 2019]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[IMG_8550.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/IMG_8550.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/IMG_8550.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/IMG_8550.jpg?itok=PWVpfpLF]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[CDS students]]></image_alt>                    <created>1565117881</created>          <gmt_created>2019-08-06 18:58:01</gmt_created>          <changed>1565117881</changed>          <gmt_changed>2019-08-06 18:58:01</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      
</core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="621721">  <title><![CDATA[AIs and Humans Become ‘Creative Equals’ with New Design Tool]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Tech researchers have created software with a built-in AI agent that works alongside human designers in real time to create game levels. The software, dubbed MorAI Maker in a nod to Nintendo&rsquo;s game Mario Maker, uses new machine learning techniques for game content generation that allow humans and an&nbsp;AI agent&nbsp;to work in a turn-based fashion on the same digital canvas. It is the first tool of its kind.</p><p>Through two studies with more than 100 game hobbyists and practicing game developers, the Georgia Tech team found that people varied significantly in how they used the AI.</p><p>&ldquo;We did not explicitly structure any roles into our machine learning models, but we still found that users naturally projected different roles onto the same AI and took corresponding roles,&rdquo; said <strong>Matthew Guzdial</strong>, Ph.D. student in computer science and lead researcher.</p><p>According to researchers, after refining the machine learning model, the AI agent was capable of picking up on users&rsquo; preferences for level structures. A majority of game developers reported that they would use the AI co-designer in the software, which was developed in Unity.</p><p>Researchers observed four major categories of roles that people assigned their virtual partners.</p><p>Some participants viewed the AI as a friend. 
One participant prompted the AI to begin the level design, forfeiting her own turn and stating, &ldquo;Let&rsquo;s see what my friend comes up with.&rdquo;</p><p>Some participants wanted an equal design partner (collaborator), others seemed to expect the AI to adhere to their specific design beliefs or instructions (student), and some designers followed the AI&rsquo;s lead or expected to be evaluated on their design (manager).</p><p>&ldquo;Human designers in the study demonstrated a willingness to adapt their own design practices to the AI, sometimes as a means of attempting to determine how best to interact with it,&rdquo; said Guzdial.</p><p>Conversely, every participant had at least one interaction where the AI adapted to the human designs. For some, this was the exception rather than the rule. &ldquo;The [AI] agent placed objects fairly arbitrarily, in places where it didn&rsquo;t really affect gameplay, just looked weird,&rdquo; said another participating professional designer.</p><p>The AI agent embedded in the game design software was trained on implicit feedback from the user. If a user kept the AI&rsquo;s game level additions, the AI received a &ldquo;reward,&rdquo; and if the user removed them a &ldquo;penalty&rdquo; was given to the AI. The AI was not allowed to remove human-generated elements.</p><p>One designer said, &ldquo;It was nice to be surprised by the AI partner. It prompted conversation/discussion in my head.&rdquo; Another said, &ldquo;I was running out of ideas, then prompted the AI for help, and I said, &lsquo;Oh yeah I forgot about these things!&rsquo;&rdquo;</p><p>Despite mostly positive feedback, not everyone found the tool to be consistently valuable. As one participant put it, &ldquo;I could see using this tool as a way to give myself inspiration. But, if I had more specific goals in mind... 
I would have found it more inhibiting than useful.&rdquo;</p><p>Guzdial says MorAI Maker is intended as a design aid, not as a replacement for designers.</p><p>&ldquo;The AI system is developed in favor of augmenting, not replacing, creative work,&rdquo; he said.</p><p>The full research,&nbsp;<a href="https://arxiv.org/pdf/1901.06417.pdf"><em>Friend, collaborator, student, manager: How design of an AI-driven game level editor affects creators</em></a>, is published in the 2019 Proceedings of the ACM Conference on Human Factors in Computing Systems.</p><p>The research is based upon work supported by the National Science Foundation under Grant No. IIS-1525967. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1558006658</created>  <gmt_created>2019-05-16 11:37:38</gmt_created>  <changed>1565621452</changed>  <gmt_changed>2019-08-12 14:50:52</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Will video game developers welcome AI assistance in their workflow? In short, yes, and in wildly different ways, based on research from Georgia Tech published this month. ]]></teaser>  <type>news</type>  <sentence><![CDATA[Will video game developers welcome AI assistance in their workflow? In short, yes, and in wildly different ways, based on research from Georgia Tech published this month. ]]></sentence>  <summary><![CDATA[<p>Will video game developers welcome AI assistance in their workflow? 
In short, yes, and in wildly different ways, based on research from Georgia Tech published this month.&nbsp;</p>]]></summary>  <dateline>2019-05-16T00:00:00-04:00</dateline>  <iso_dateline>2019-05-16T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-05-16 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[Video Game Developers Use an AI partner In Wildly Different Ways, From Friend to Boss]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Manager<br /><em>College of Computing and GVU Center</em></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>621722</item>      </media>  <hg_media>          <item>          <nid>621722</nid>          <type>image</type>          <title><![CDATA[MorAI Maker Game Design Tool]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[MorAI Maker creations.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/MorAI%20Maker%20creations.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/MorAI%20Maker%20creations.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/MorAI%2520Maker%2520creations.png?itok=OHfQh8Fc]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1558007459</created>          <gmt_created>2019-05-16 11:50:59</gmt_created>          <changed>1558007477</changed>          <gmt_changed>2019-05-16 11:51:17</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.youtube.com/watch?v=UkMeM5Ty1lA&amp;feature=youtu.be&amp;t=563]]></url>        <title><![CDATA[VIDEO: Early 
Interaction with AI Creative Partner]]></title>      </link>          <link>        <url><![CDATA[https://www.spreaker.com/user/10751784/tu-ep6-video-game-devs-react-to-ai]]></url>        <title><![CDATA[Tech Unbound Podcast EP6: Video Game Developers React in Wildly Different Ways to AI-Enabled Software]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="624291">  <title><![CDATA[AI 'Performers' Take Center Stage and Get Creative with People in Public Spaces]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Researchers at Georgia Tech are seeking to improve &ldquo;artificial intelligence literacy&rdquo; and give people opportunities to engage directly with AI systems in order to understand the potential and capabilities of the technology.</p><p>AI-assisted tech is increasingly common, but actions by these autonomous programs are often hard to spot in people&rsquo;s daily use of devices and online services.</p><p>Georgia Tech&rsquo;s Expressive Machinery Lab has developed exhibitions where the AI agents are front-and-center and people are able to create with them in public spaces. These AIs have included a dance partner, visual storyteller, music maker, and comedic improv performer.</p><p>&ldquo;There are common misconceptions about what AI is, what it is capable of, and how it works,&rdquo; said <strong>Brian Magerko</strong>, professor of digital media and director of the Expressive Machinery Lab. 
&ldquo;AI systems in public spaces that can engage as active participants in co-creative activities have the potential to serve as avenues for AI literacy. We believe this work pushes these efforts forward considerably.&rdquo;</p><p>The exhibitions&nbsp;involving live interactions between people and AIs &ndash; what the researchers call co-creative experiences &ndash; have taken place across the country since 2013 at academic conferences, art festivals, museums, and other venues. &nbsp;</p><p>The multi-year endeavor has resulted in a design blueprint developed by the researchers that shows how to build AI experiences for public spaces where audiences or performers can create with an AI partner.</p><p>&ldquo;Museums and other public spaces can serve as alternative venues for AI literacy initiatives, complementing formal education and broadening access to opportunities to interact with and learn about AI by both adults and children who may not have AI devices in their homes or schools,&rdquo; said <strong>Duri Long</strong>, human-centered computing Ph.D. student at Georgia Tech and a researcher involved in the work.</p><p>Researchers encountered challenges unique to making &ldquo;creative AIs&rdquo;, such as how to build systems that engage people with different tastes, AIs that perform over sustained periods of time, and AIs being able to adapt to unpredictable human behavior.</p><p>For example, the AI dance partner, known as LuminAI and the oldest of the group, doesn&rsquo;t have fingers so any naughty hand gestures aren&rsquo;t processed in the AI&rsquo;s dance routine.</p><p>&ldquo;Our AI agents are unlike many other AIs, which usually have a specific task to accomplish,&rdquo; Long said. &ldquo;Our work involves open-ended co-creative AI installations where there is not a single clear goal or other reward function to optimize the AI&rsquo;s behavior. 
Our AIs are meant to create or collaborate with a human counterpart, and that looks different every time.&rdquo;</p><p>While AIs in general often have large databases of sensor data (images, temperature readings, etc.) to improve their understanding of the world, in creative areas such as dance, theater, and other performing arts there is limited data from which AIs can pull.</p><p>The researchers overcame this in part by having their AIs learn from human partners in real time and decide what might be a suitable action. Professional performers, who want a greater degree of control, could take turns with the AI partner for a more structured performance. Conversely, an AI as part of a museum exhibit might guide participants on how to start an activity in order to engage people early on.</p><p>Social interaction was also important to consider and, counter to some technology trends, the researchers discovered that human-to-human interaction could increase as a result of AI involvement.</p><p>LuminAI, the dancing AI, prompted a couple to do the salsa, two friends to start a synchronized dance routine, and a group of teenagers to perform in a dance circle.</p><p>The comedic AI in the roster, called Robot Improv Circus, allows an audience to watch someone interacting in VR with the AI agent and provide feedback to the person by using voice prompts and gestures to trigger in-game reward systems. This led to several groups of friends encouraging each other to try different actions with the comedic AI.</p><p>The research was published in the Proceedings of the Creativity &amp; Cognition Conference 2019. 
The paper <em>Designing Co-Creative AI for Public Spaces</em> was co-authored by Duri Long, Mikhail Jacob, and Brian Magerko.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1565370815</created>  <gmt_created>2019-08-09 17:13:35</gmt_created>  <changed>1565371395</changed>  <gmt_changed>2019-08-09 17:23:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech’s Expressive Machinery Lab has developed exhibitions where the AI agents are front-and-center and people are able to create with them. These AIs have included a dance partner, visual storyteller, music maker, and improv comedian.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech’s Expressive Machinery Lab has developed exhibitions where the AI agents are front-and-center and people are able to create with them. These AIs have included a dance partner, visual storyteller, music maker, and improv comedian.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-08-09T00:00:00-04:00</dateline>  <iso_dateline>2019-08-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-08-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Manager<br />GVU Center and College of Computing<br />678.231.0787</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>624288</item>          <item>624289</item>          <item>624287</item>      </media>  <hg_media>          <item>          <nid>624288</nid>          <type>image</type>          <title><![CDATA[AI Performers]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Expressive Machinery Lab AIs.png]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/Expressive%20Machinery%20Lab%20AIs_0.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Expressive%20Machinery%20Lab%20AIs_0.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Expressive%2520Machinery%2520Lab%2520AIs_0.png?itok=k7XisiHV]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1565370405</created>          <gmt_created>2019-08-09 17:06:45</gmt_created>          <changed>1565370439</changed>          <gmt_changed>2019-08-09 17:07:19</gmt_changed>      </item>          <item>          <nid>624289</nid>          <type>image</type>          <title><![CDATA[Duri Long]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Duri Long.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Duri%20Long.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Duri%20Long.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Duri%2520Long.png?itok=zVKLIvcQ]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1565370460</created>          <gmt_created>2019-08-09 17:07:40</gmt_created>          <changed>1565370460</changed>          <gmt_changed>2019-08-09 17:07:40</gmt_changed>      </item>          <item>          <nid>624287</nid>          <type>image</type>          <title><![CDATA[Brian Magerko]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Brian Magerko.png]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/Brian%20Magerko.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Brian%20Magerko.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Brian%2520Magerko.png?itok=UI2ufSt5]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1565370308</created>          <gmt_created>2019-08-09 17:05:08</gmt_created>          <changed>1565370308</changed>          <gmt_changed>2019-08-09 17:05:08</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.youtube.com/watch?v=K1juBtnJjTk&amp;list=PLqbYO_bYE2ClHihmAEMrP2FtqE6qpXnSF]]></url>        <title><![CDATA[AI Dance Partner ]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="143"><![CDATA[Digital Media and Entertainment]]></category>          <category tid="148"><![CDATA[Music and Music Technology]]></category>          <category tid="151"><![CDATA[Policy, Social Sciences, and Liberal Arts]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="143"><![CDATA[Digital Media and Entertainment]]></term>          <term tid="148"><![CDATA[Music and Music Technology]]></term>          <term tid="151"><![CDATA[Policy, Social Sciences, and Liberal Arts]]></term>      
</news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="624192">  <title><![CDATA[ML@GT Announces Fall Seminar Series Speakers]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Each semester, hundreds of students, faculty, and external guests are treated to talks by some of the world&rsquo;s most renowned scientists. This fall, the <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a> will host five talks as a part of its Fall Seminar Series.</p><p>Speakers come from industry and academia, giving attendees exposure to problems being solved by both entities. Talks touch on current topics in machine learning and artificial intelligence, applications for technologies, and related insights and experiences. Past speakers have included the likes of <strong>Pieter Abbeel, </strong><a href="http://bit.ly/2MJtYbA">Magic Leap&rsquo;s</a><strong><a href="http://bit.ly/2MJtYbA"> Ashwin Swaminathan and Prateek Singhal</a>, Hugo Larochelle</strong>, and <a href="https://mlatgt.blog/2019/05/06/13-questions-with-manuela-veloso/"><strong>Manuela Veloso.</strong></a></p><p>&ldquo;We are proud to be able to bring world-class researchers to our campus to further explore different areas of machine learning and artificial intelligence. Talks like these are important for continuing to grow the ML community and broadening the public&rsquo;s awareness about where the field is headed. We&rsquo;re looking forward to another great semester of exciting talks,&rdquo; said <strong>Irfan Essa</strong>, director of ML@GT.</p><p>The series kicks off on Sept. 4 with <strong>Galen Reeves</strong>, an assistant professor from Duke University. 
Talks will be given every other Wednesday at 12:15 p.m. in the Marcus Nanotechnology Building unless otherwise noted. All talks are open to the public.</p><p><strong>Fall Seminar Series Schedule</strong></p><p>Sept. 4 &ndash; <a href="http://ml.gatech.edu/events/mlgt-fall-seminar-galen-reeves-duke-university">Galen Reeves, Duke University</a></p><p>Sept. 18 &ndash; <a href="http://ml.gatech.edu/events/mlgt-fall-seminar-chandrajit-bajaj-university-texas">Chandrajit Bajaj, University of Texas</a></p><p>Oct. 2 &ndash; <a href="http://ml.gatech.edu/events/mlgt-seminar-vijay-subramamian-university-michigan">Vijay Subramanian, University of Michigan</a></p><p>Oct. 23 &ndash; <a href="http://ml.gatech.edu/events/mlgt-fall-seminar-aleksandra-faust-google-brain-robotics">Aleksandra Faust, Google Brain Robotics</a></p><p>Nov. 20 &ndash; Speaker to be announced soon</p><p>For the most up-to-date information on the seminar series, visit <a href="http://ml.gatech.edu/seminars">http://ml.gatech.edu/seminars</a>.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1565286721</created>  <gmt_created>2019-08-08 17:52:01</gmt_created>  <changed>1565360707</changed>  <gmt_changed>2019-08-09 14:25:07</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The Machine Learning Center at Georgia Tech will host five speakers this fall for their fall seminar series.]]></teaser>  <type>news</type>  <sentence><![CDATA[The Machine Learning Center at Georgia Tech will host five speakers this fall for their fall seminar series.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-08-09T00:00:00-04:00</dateline>  <iso_dateline>2019-08-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-08-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[allie.mcfadden@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications 
Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>63128</item>      </media>  <hg_media>          <item>          <nid>63128</nid>          <type>image</type>          <title><![CDATA[Georgia Tech's Marcus Nanotechnology Building]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[thx89611.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/thx89611_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/thx89611_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/thx89611_0.jpg?itok=74ihbGYH]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech's Marcus Nanotechnology Building]]></image_alt>                    <created>1449176649</created>          <gmt_created>2015-12-03 21:04:09</gmt_created>          <changed>1475894552</changed>          <gmt_changed>2016-10-08 02:42:32</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="623821">  <title><![CDATA[Georgia Tech Faculty, Students, and Alumni Take Part in 41st Meeting of the Cognitive Science Society]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Members of the Georgia Tech research community were present last week at the 
<a href="https://cognitivesciencesociety.org/cogsci-2019/">2019 Annual Meeting of the Cognitive Science Society</a> in Montreal, Canada. This year, the conference highlighted research on the theme <em>Creativity+Cognition+Computation</em>, as well as the full breadth of research topics offered by the society&rsquo;s membership.</p><p>Many Georgia Tech faculty, students, and alumni served in leadership roles for the conference.</p><ul><li>Professor <strong>Ashok Goel</strong> served as the conference&rsquo;s co-chair;</li><li>Professor <strong>Keith McGreggor</strong> was the sponsorship chair;</li><li><strong>Wendy Newstetter</strong> of the <a href="http://www.coe.gatech.edu">College of Engineering</a> and <a href="https://c21u.gatech.edu/">Center for 21st Century Universities</a> served on the awards committee;</li><li>Georgia Tech alum <strong>Jim Davies</strong> was co-chair for publication-based talks;</li><li>Georgia Tech alum <strong>Maithilee Kunda</strong> was co-chair for member abstracts;</li><li>Georgia Tech alum <strong>Swaroop Vattam</strong> served on the workshops and tutorials committee.</li></ul><p><a href="http://ic.gatech.edu">School of Interactive Computing</a> adjunct professors <strong>Brian Magerko</strong> and <strong>Gil Weinberg</strong>, primarily of the <a href="https://www.iac.gatech.edu/">Ivan Allen College of Liberal Arts</a> and <a href="https://music.gatech.edu/">School of Music</a>, respectively, were also part of a panel on Creativity in the Arts.</p><p>Ph.D. 
student <strong>Sungeun An</strong> presented a poster paper at the conference titled <em>Learning by Doing: Supporting Experimentation in Inquiry-Driven Modeling</em> (Sungeun An, <strong>Robert Bates</strong>, <strong>Jennifer Hammock</strong>, <strong>Spencer Rugaber</strong>, <strong>Emily Weigel</strong>, Ashok Goel), and fellow Ph.D. student <strong>Marissa Gonzales</strong> presented another titled <em>Why are Some Online Education Programs Successful: Student Cognition and Success</em> (Marissa Gonzales, Ashok Goel).</p><p>For more information about this year&rsquo;s conference and to stay up-to-date on news about future conferences, visit <a href="https://cognitivesciencesociety.org/">https://cognitivesciencesociety.org/</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1564504493</created>  <gmt_created>2019-07-30 16:34:53</gmt_created>  <changed>1564504493</changed>  <gmt_changed>2019-07-30 16:34:53</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[This year, the conference highlighted research on the theme Creativity+Cognition+Computation.]]></teaser>  <type>news</type>  <sentence><![CDATA[This year, the conference highlighted research on the theme Creativity+Cognition+Computation.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-07-30T00:00:00-04:00</dateline>  <iso_dateline>2019-07-30T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-07-30 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>623820</item>      </media>  <hg_media>          <item>          <nid>623820</nid>          <type>image</type>          <title><![CDATA[CogSci 2019]]></title>    
      <body><![CDATA[]]></body>                      <image_name><![CDATA[MontrealSideBanner-sm.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/MontrealSideBanner-sm.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/MontrealSideBanner-sm.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/MontrealSideBanner-sm.jpg?itok=IsR2HB6c]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[CogSci 2019 banner]]></image_alt>                    <created>1564504438</created>          <gmt_created>2019-07-30 16:33:58</gmt_created>          <changed>1564504438</changed>          <gmt_changed>2019-07-30 16:33:58</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="623044">  <title><![CDATA[Robot Able to Instantly Identify Household Materials Using Near-Infrared Light ]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Robots aren&rsquo;t yet household fixtures, but Georgia Tech researchers have already come up with a way domestic bots might recognize materials around the home.</p><p>Using near-infrared light, similar to what&rsquo;s used in TV remotes, the robot can identify common materials used in household objects to 
better inform its actions. This might allow intelligent machines to understand, for example, the right bowl (paper versus metal) to put in a microwave or how hard to grasp a cup made of glass versus plastic.</p><p>To classify materials, the researchers first determined hundreds of light wavelengths reflected from five common materials &ndash; paper, wood, plastic, metal, and fabric. With this information, they trained a neural network on 10,000 examples in order to create a machine-learning (ML) model that could be used by a robot to quickly identify a material.</p><p>According to the researchers, a robot using their new ML model can identify materials without first having to touch an object, a useful function for handling potentially fragile items. To do so, the robot holds a small spectrometer near an object to get a quick light measurement, which is then processed to identify the material.</p><p>&ldquo;Robots currently use conventional cameras or haptic sensing &ndash; the sense of touch &ndash; to estimate a material type,&rdquo; said <strong>Zackory Erickson</strong>, a Georgia Tech robotics Ph.D. student and first author on the research paper detailing the new work.</p><p>&ldquo;This is the first time that we know of that spectroscopy and machine learning have been used for material classification in robotics research, and our accuracy is on par with existing methods.&rdquo;</p><p>The team&rsquo;s new ML model yielded the best results using spectrometer measurements from near-infrared light. In fact, the accuracy was 99.9 percent with the full dataset of 10,000 measurements from 50 objects that the model had been trained on.</p><p>&ldquo;While human eyes typically use three color receptors to see the world, our robot can be thought of as using hundreds of color receptors to recognize materials,&rdquo; said <strong>Charlie Kemp</strong>, associate professor in the Wallace H. 
Coulter Department of Biomedical Engineering at Georgia Tech and Emory University and part of the research team. &ldquo;Instead of a conventional color camera that measures red, green, and blue light, our robot uses a spectrometer that measures light at hundreds of different wavelengths, some outside of the range of human vision.&rdquo;</p><p>To see how results would compare using only a single light reading from each object, the team also trained the model on just 50 measurements, one from each object. Interestingly, accuracy in identifying the correct material only dropped to 95 percent. When using a spectrometer reading from objects the machine learning model had never seen, the robot still achieved an 81.6 percent success rate.</p><p>&ldquo;Spectroscopy presents a reliable and effective way for robots to estimate materials of household objects,&rdquo; Erickson said. &ldquo;We&rsquo;ve demonstrated how a robot can use near-infrared spectroscopy to infer the materials of everyday objects like cups, bowls, and garments.&rdquo;</p><p>The research is published in the Proceedings of the 2019 International Conference on Robotics and Automation (ICRA) in the paper titled <em>Classification of Household Materials via Spectroscopy</em> co-authored by <a href="http://zackory.com/" target="_blank"><strong>Zackory Erickson</strong></a>, <strong>Nathan Luskey</strong>, <a href="https://www.cc.gatech.edu/~chernova/" target="_blank"><strong>Sonia Chernova</strong></a>, and <a href="http://charliekemp.com" target="_blank"><strong>Charlie Kemp</strong></a>.</p><p>For more Georgia Tech research published at ICRA, as well as the entire conference program,&nbsp;explore this <a href="https://public.tableau.com/shared/J22YXRJXM?:display_count=yes&amp;:origin=viz_share_link&amp;:showVizHome=no" target="_blank">interactive visualization</a>&nbsp;from the GVU Center at Georgia Tech.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1562607131</created>  
<gmt_created>2019-07-08 17:32:11</gmt_created>  <changed>1563396936</changed>  <gmt_changed>2019-07-17 20:55:36</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Robots aren’t yet household fixtures, but Georgia Tech researchers have already come up with a way domestic bots might recognize materials around the home.]]></teaser>  <type>news</type>  <sentence><![CDATA[Robots aren’t yet household fixtures, but Georgia Tech researchers have already come up with a way domestic bots might recognize materials around the home.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-07-08T00:00:00-04:00</dateline>  <iso_dateline>2019-07-08T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-07-08 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[No Contact is Required with Objects by Using Inexpensive, Handheld 'Light-Reading' Device]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Manager, GVU Center<br />678.231.0787</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>623045</item>      </media>  <hg_media>          <item>          <nid>623045</nid>          <type>image</type>          <title><![CDATA[Robot Classifies Materials of Household Objects Using 'Light-Reading' Device]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Robot classifies materials of household objects.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Robot%20classifies%20materials%20of%20household%20objects.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Robot%20classifies%20materials%20of%20household%20objects.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Robot%2520classifies%2520materials%2520of%2520household%2520objects.png?itok=mzISojq0]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1562609057</created>          <gmt_created>2019-07-08 18:04:17</gmt_created>          <changed>1562609089</changed>          <gmt_changed>2019-07-08 18:04:49</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.youtube.com/watch?v=fBv_xEai2AU]]></url>        <title><![CDATA[VIDEO: Watch how GT researchers are bringing domestic bots one step closer to reality]]></title>      </link>          <link>        <url><![CDATA[https://www.spreaker.com/user/10751784/tu-ep5-robot-instantly-identifies-materials]]></url>        <title><![CDATA[Tech Unbound Podcast EP5: Robot Able to Instantly Identify Household Materials Without Touching Objects]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="667"><![CDATA[robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="582978">  <title><![CDATA[CIC Returns With Three New Categories for Fall Semester]]></title>  <uid>27980</uid>  <body><![CDATA[<p>The <a href="http://cic.gatech.edu" target="_blank">Convergence Innovation Competition (CIC)</a> is back for another semester, and we&rsquo;re looking for innovative 
student ideas in three new categories.<br /><br />The CIC, produced by IPaT and the <a href="http://rnoc.gatech.edu" target="_blank">Georgia Tech Research Network Operations Center (GT-RNOC)</a>, is a bi-annual competition dedicated to helping students create products and experiences with the support of campus resources and industry sponsors. The Fall competition is campus-focused, and categories are determined by our campus partners. Categories for the Fall 2016 competition, which are aligned with IPaT&rsquo;s research priorities, include:<br /><br /><strong>Lifelong Health and Wellbeing</strong><br />Entries should focus on new or reimagined solutions for patients, communities, and/or those involved in the continuum of care (caregivers, doctors, hospitals, insurers, employers).<br /><br /><strong>Smart Cities and Healthy Communities</strong><br />Entries should focus on solutions for individuals, communities, business and community stakeholders, and government service providers.<br /><br /><strong>Socio-Technical Systems and Human-Technology Frontier Innovation</strong><br />Entries will demonstrate new platforms, services, and devices, ranging from the Internet of Things (IoT) and Software Defined Networking (SDN) to automotive and wearable computing devices, mixed and augmented reality, data science and analytics, and collaboration and communication tools.<br /><br />While the CIC is not tied to any specific Georgia Tech course, students are often able to take advantage of class partnerships where lecture and lab content and projects are aligned with competition categories. 
GT-RNOC research assistants provide technical support and guide teams through the competition process.<br /><br />CIC entries are due on November 11th; teams will create a project name, logo and webpage, plus a supporting video that demonstrates their project in action.<br /><br />&quot;Finalists in the CIC are judged across multiple criteria, and winning projects showcase innovation, user experience and viability in the real world,&quot; said Siva Jayaraman,&nbsp;IPaT Strategic Partnerships Manager.<br /><br />Finalists will present their projects on November 16th at a demo and judging event held at IPaT. Past CIC winners have gone on to commercialization and other competitions, as well as internship and job opportunities strengthened by their competition experience.&nbsp;To learn more about the CIC, including how to submit your project or become a sponsor, visit the competition website at <a href="http://cic.gatech.edu" target="_blank">cic.gatech.edu</a>.</p>]]></body>  <author>Alyson Key</author>  <status>1</status>  <created>1477319380</created>  <gmt_created>2016-10-24 14:29:40</gmt_created>  <changed>1562850819</changed>  <gmt_changed>2019-07-11 13:13:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The Convergence Innovation Competition (CIC) is back for another semester, and we’re looking for innovative student ideas in three new categories.]]></teaser>  <type>news</type>  <sentence><![CDATA[The Convergence Innovation Competition (CIC) is back for another semester, and we’re looking for innovative student ideas in three new categories.]]></sentence>  <summary><![CDATA[<p>The Convergence Innovation Competition (CIC) is back for another semester, and we&rsquo;re looking for innovative student ideas in three new categories.</p>]]></summary>  <dateline>2016-10-24T00:00:00-04:00</dateline>  <iso_dateline>2016-10-24T00:00:00-04:00</iso_dateline>  <gmt_dateline>2016-10-24 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  
<sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Alyson Powell</p><p>Communications Officer, Institute for People and Technology</p><p>alyson.powell@ipat.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>582976</item>      </media>  <hg_media>          <item>          <nid>582976</nid>          <type>image</type>          <title><![CDATA[Fall 2016 Convergence Innovation Competition]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[cic-banner-ipat.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/cic-banner-ipat.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/cic-banner-ipat.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/cic-banner-ipat.jpg?itok=l8VFT6WU]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1477319200</created>          <gmt_created>2016-10-24 14:26:40</gmt_created>          <changed>1477319200</changed>          <gmt_changed>2016-10-24 14:26:40</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="69599"><![CDATA[IPaT]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>          <category tid="129"><![CDATA[Institute and Campus]]></category>          <category tid="42901"><![CDATA[Community]]></category>          <category tid="133"><![CDATA[Special Events and Guest Speakers]]></category>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="8862"><![CDATA[Student Research]]></category>          <category tid="135"><![CDATA[Research]]></category>      
</categories>  <news_terms>          <term tid="129"><![CDATA[Institute and Campus]]></term>          <term tid="42901"><![CDATA[Community]]></term>          <term tid="133"><![CDATA[Special Events and Guest Speakers]]></term>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="8862"><![CDATA[Student Research]]></term>          <term tid="135"><![CDATA[Research]]></term>      </news_terms>  <keywords>          <keyword tid="63931"><![CDATA[CIC]]></keyword>          <keyword tid="63951"><![CDATA[Convergence Innovation Competition]]></keyword>          <keyword tid="181703"><![CDATA[HTF]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="623011">  <title><![CDATA[IC's Dhruv Batra Named PECASE Winner, One of Three at Georgia Tech]]></title>  <uid>33939</uid>  <body><![CDATA[<p><a href="http://www.ic.gatech.edu/">School of Interactive Computing</a> Assistant Professor <strong>Dhruv Batra</strong> was awarded the prestigious Presidential Early Career Award for Scientists and Engineers (PECASE) on Wednesday in an announcement by President Donald Trump. The PECASE is the highest honor bestowed by the United States government to outstanding scientists and engineers beginning independent research careers.</p><p>Batra is one of three Georgia Tech faculty members this year to earn the award, giving the Institute a total of 18 in its history. 
The other two awardees in this class are Associate Professor Mark Davenport of the School of Electrical and Computer Engineering and Assistant Professor Matthew McDowell of the School of Materials Science and Engineering.</p><p>Along with the Department of Defense, the White House Office of Science and Technology Policy will provide $1 million over the course of five years to support Batra&rsquo;s research to make artificial intelligence (AI) systems more transparent, explainable, and trustworthy. The award comes as a result of Batra&rsquo;s selection for a similar early-career award by the Army Research Office Young Investigator Program in 2014.</p><p>The research Batra&rsquo;s lab will pursue with the funding addresses a fundamental challenge in the development of AI systems &ndash; their &ldquo;black-box&rdquo; nature, the consequent difficulty humans face in identifying why or how AI systems fail, and how to improve upon those technologies. When a self-driving car from a major tech company, for example, suffered its first fatality in 2016, legal and regulatory agencies understandably questioned what went wrong. The challenge at the time was providing a sufficient answer to that question.</p><p>&ldquo;Your response can&rsquo;t just be, &lsquo;Well, there was this machine learning box in there, and it just didn&rsquo;t detect the car. We don&rsquo;t know why,&rsquo;&rdquo; Batra said.</p><p>Batra&rsquo;s research aims to create AI systems that can more readily explain what they do and why. This could come in the form of natural language or visual explanations, both of which &ndash; computer vision and natural language processing &ndash; are central areas of focus in Batra&rsquo;s lab. The machine could, for example, identify regions in an image that provide support for its predictions, potentially assisting a user&rsquo;s understanding of what the machine can or cannot do.</p><p>It&rsquo;s an important area of study for a few reasons, Batra said. 
He classifies AI technology into three levels of maturity:</p><ul><li>Level 1 is technology that is in its infancy. It is not near deployment to everyday users, and the consumers of the technology are researchers. The goal for transparency and explanation is to help researchers and developers understand the failure modes and current limitations, and deduce how to improve the technology &ndash; &ldquo;actionable insight,&rdquo; as Batra called it.<br />&nbsp;</li><li>Level 2 is when things are working to a degree, enough so that the technology can be and has been deployed.<br /><br />&ldquo;The technology may be mature in a narrow range, and you can ship the product,&rdquo; Batra said. &ldquo;Like face detection or fingerprint technology. It&rsquo;s built into products and being used at agencies, airports, or other places.&rdquo;<br /><br />In such cases, you want explanations and interpretability that help build appropriate trust with users. Users can understand when the system reliably works and when it might not work &ndash; face detection in bad lighting, for example &ndash; and make efforts to use it in a more appropriate setting.<br />&nbsp;</li><li>Level 3 is typically a fairly narrow category where the AI is better &ndash; sometimes significantly so &ndash; than the human. Batra used chess-playing and Go-playing bots as an example. The best chess-playing bots convincingly outperform the best humans and reliably hand a resounding defeat to the average human player.<br /><br />&ldquo;We already know bots play much better than humans,&rdquo; he said. &ldquo;In such cases, you don&rsquo;t need to improve the machine and you already trust its skill level. You want the machine to give you explanations not so that you can improve the AI, but so that you can improve yourself.&rdquo;</li></ul><p>Batra envisions scenarios where the techniques his lab develops could assist at all three levels, but the experiments will take place between Levels 1 and 2.
They will work in Visual Question Answering &ndash; in which agents answer natural language questions about visual content &ndash; and other maturing areas that may reach the product level in five or more years.</p><p>Batra has served as an assistant professor at Georgia Tech since Fall 2016. <a href="https://www.cc.gatech.edu/~dbatra/">Visit his website for more information about his research.</a></p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1562343497</created>  <gmt_created>2019-07-05 16:18:17</gmt_created>  <changed>1562343497</changed>  <gmt_changed>2019-07-05 16:18:17</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The PECASE is the highest honor bestowed by the United States government to outstanding scientists and engineers beginning independent research careers.]]></teaser>  <type>news</type>  <sentence><![CDATA[The PECASE is the highest honor bestowed by the United States government to outstanding scientists and engineers beginning independent research careers.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-07-05T00:00:00-04:00</dateline>  <iso_dateline>2019-07-05T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-07-05 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>586461</item>      </media>  <hg_media>          <item>          <nid>586461</nid>          <type>image</type>          <title><![CDATA[Dhruv Batra]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[DhruvBatra.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/DhruvBatra.jpg]]></image_path>     
       <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/DhruvBatra.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/DhruvBatra.jpg?itok=ImTmEvl-]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1485377710</created>          <gmt_created>2017-01-25 20:55:10</gmt_created>          <changed>1485377710</changed>          <gmt_changed>2017-01-25 20:55:10</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181639"><![CDATA[cc-research; ic-ai-ml]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="622864">  <title><![CDATA[IC Researchers Earn 2018 IJRR Paper of the Year for Impactful Robotics Research]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A paper published in the <em><a href="http://www.ijrr.org/">International Journal of Robotics Research</a></em> (IJRR) by researchers in the <a href="http://ic.gatech.edu">School of Interactive Computing</a> (IC) was selected as the 2018 IJRR Paper of the Year.</p><p>Chosen from a shortlist considered by the IJRR Executive Committee, the paper, <a href="https://arxiv.org/abs/1707.07383"><em>Continuous-time Gaussian Process
Motion Planning via Probabilistic Inference</em></a>, was recognized for its technical rigor, relevance, and potential for impact in the robotics research community. The research comes from IC Ph.D. students <strong>Mustafa Mukadam</strong> and <strong>Jing Dong</strong>, master&rsquo;s student <strong>Xinyan Yan</strong>, and advisors Professor <strong>Frank Dellaert</strong> and Assistant Professor <strong>Byron Boots</strong>.</p><p>This paper introduces a novel formulation of motion planning that treats the problem of finding an efficient, feasible path between two points as probabilistic inference with Gaussian Processes. Motion planning is a hard problem, and state-of-the-art sampling-based and trajectory optimization algorithms have well-known drawbacks. The former can effectively find feasible trajectories but often exhibits jerky and redundant motion, and the latter requires a fine approximation of the trajectory to reason about thin obstacles or tight constraints.</p><p>In their paper, the team of researchers adopts a continuous-time representation of trajectories, viewing them as functions that map time to robot state. Combining this representation with fast approaches to probabilistic inference, they developed a computationally efficient gradient-based optimization algorithm called a Gaussian Process Motion Planner that can overcome large computational costs associated with fine discretization, while still maintaining smoothness of motion in the result.</p><p>With the award comes a $1,000 prize. Boots attended the <a href="http://www.roboticsconference.org/">Robotics: Science and Systems</a> (RSS) conference in Freiburg, Germany, this week, where he accepted the award on behalf of his team.</p><p>Another paper involving Boots was also awarded a Best Student Paper Award at RSS. Titled <a href="https://arxiv.org/abs/1902.08967"><em>An Online Learning Approach to Model Predictive Control</em></a>, the paper was written by Robotics Ph.D. 
students <strong>Nolan Wagener</strong>, <strong>Ching-An Cheng</strong>, and <strong>Jacob Sacks</strong>, along with Boots.</p><p>It shows that there exists a close connection between model predictive control (MPC), a popular technique for solving dynamic control tasks, and online learning, an abstract theoretical framework for analyzing online decision making. This new perspective provides a foundation for leveraging powerful online learning algorithms to design MPC algorithms. Toward this end, the researchers propose a generic framework for synthesizing new MPC algorithms called Dynamic Mirror Descent Model Predictive Control.</p><p>The framework exposes key design choices that can help practitioners easily develop new control algorithms tailored to the challenges of their specific task. The approach is validated by developing new MPC algorithms that consistently match or outperform the state-of-the-art on several tasks including an aggressive driving problem with the goal of racing an autonomous car around a dirt track under computational resource constraints.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1561758313</created>  <gmt_created>2019-06-28 21:45:13</gmt_created>  <changed>1561758313</changed>  <gmt_changed>2019-06-28 21:45:13</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[With the award comes a $1,000 prize. Boots attended the Robotics: Science and Systems (RSS) conference in Freiburg, Germany, this week, where he accepted the award on behalf of his team.]]></teaser>  <type>news</type>  <sentence><![CDATA[With the award comes a $1,000 prize. 
Boots attended the Robotics: Science and Systems (RSS) conference in Freiburg, Germany, this week, where he accepted the award on behalf of his team.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-06-28T00:00:00-04:00</dateline>  <iso_dateline>2019-06-28T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-06-28 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>622863</item>          <item>622862</item>      </media>  <hg_media>          <item>          <nid>622863</nid>          <type>image</type>          <title><![CDATA[IJRR Paper of the Year]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[IJRR Paper of the Year.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/IJRR%20Paper%20of%20the%20Year.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/IJRR%20Paper%20of%20the%20Year.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/IJRR%2520Paper%2520of%2520the%2520Year.jpeg?itok=eXIm5Yza]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Byron Boots accepts the IJRR Paper of the Year Award at RSS 2019]]></image_alt>                    <created>1561757769</created>          <gmt_created>2019-06-28 21:36:09</gmt_created>          <changed>1561757769</changed>          <gmt_changed>2019-06-28 21:36:09</gmt_changed>      </item>          <item>          <nid>622862</nid>          <type>image</type>          <title><![CDATA[RSS 
Best Student Paper]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[RSS Best Student Paper.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/RSS%20Best%20Student%20Paper.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/RSS%20Best%20Student%20Paper.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/RSS%2520Best%2520Student%2520Paper.jpeg?itok=Y_wnOJ51]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A team of researchers accepts the Best Student Paper award at RSS 2019]]></image_alt>                    <created>1561757679</created>          <gmt_created>2019-06-28 21:34:39</gmt_created>          <changed>1561757679</changed>          <gmt_changed>2019-06-28 21:34:39</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/content/robotics-computational-perception]]></url>        <title><![CDATA[Robotics and Computational Perception Research at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181602"><![CDATA[ic-robotics]]></keyword>          <keyword tid="181216"><![CDATA[cc-research]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  
<related></related>  <userdata><![CDATA[]]></userdata></node><node id="622859">  <title><![CDATA[Georgia Tech Team Wins New Fetch Robot at ICRA's FetchIt! Mobile Manipulation Challenge]]></title>  <uid>33939</uid>  <body><![CDATA[<p><a href="https://www.cc.gatech.edu/~chernova/"><strong>Sonia Chernova</strong></a>&rsquo;s <a href="http://www.rail.gatech.edu/">Robot Autonomy and Interactive Learning</a> (RAIL) lab is adding a new member this summer after a successful foray into the <a href="https://opensource.fetchrobotics.com/competition"><em>FetchIt!</em><em> Mobile Manipulation Challenge</em></a> at the <a href="https://www.icra2019.org/">International Conference on Robotics and Automation</a> (ICRA) last month.</p><p>A team of Georgia Tech master&rsquo;s and Ph.D. students, advised by Chernova, won the challenge by successfully assembling three kits with its robot in 39 minutes. It was the only team in the competition to complete the task, with the second-place finisher failing to score a point.</p><p>For its victory, the RAIL lab will receive a new mobile manipulation robot from Fetch Robotics, its second. Along with the other robots already in the lab&rsquo;s possession, the newcomer will provide RAIL researchers new opportunities to pursue multi-robot applications. The prize package also includes items from the event&rsquo;s co-sponsors EandM Robotics, Schunk, SICK Sensor Intelligence, and The Construct, to go with the $100,000 robot.</p><p>[VIDEO::https://youtu.be/G_ur71h4CNQ]</p><p>&ldquo;This is a long-term benefit,&rdquo; said Chernova, an associate professor in the <a href="http://ic.gatech.edu">School of Interactive Computing</a>. 
&ldquo;This is one of the most capable mobile manipulation platforms out there, and to now have two of them will enable us to enhance the capabilities of the robot and pursue new lines of research in our lab.&rdquo;</p><p>The allure of a new state-of-the-art robot would be enough to entice most teams to take part in the competition, but for Chernova and her participating students it was more about the opportunity to explore specific applications that aligned with their research initiatives, past and present.</p><p>The lab has done past work in grasping, semantic reasoning and mapping, and fault diagnosis, the latter of which has become a focus over the past six months. The competition, Ph.D. student <strong>David Kent</strong> said, came at a good time because the particular challenges it presented fell largely within this domain.</p><p>&ldquo;This particular setup was particularly challenging because there was just enough variability where it wasn&rsquo;t going to work every time,&rdquo; he said. &ldquo;There would always be something going wrong, so fault recovery ended up being very central.&rdquo;</p><p>To win the competition, not only did Georgia Tech&rsquo;s team have to come in first place, it had to do so by scoring at least 14 points. To put that into context, Georgia Tech was the only team in the competition to finish with any points. Teams scored points by successfully collecting items laid out at different stations to assemble three kits. They were awarded eight points for each completed kit. Any kit that was missing a piece, however, resulted in zero points awarded, and any kit with extra pieces would have points deducted.</p><p>&ldquo;If you drop one screw along the way and you don&rsquo;t notice &ndash; which is actually very easy to do &ndash; you go away with nothing,&rdquo; Chernova said. 
&ldquo;In the real world, a partial kit is useless.&rdquo;</p><p>After reaching 15 points, Georgia Tech elected to complete its third kit without official scoring to ensure it wouldn&rsquo;t drop below the threshold needed to win the robot. Officially the team scored 15, but the completed third kit gave it an unofficial 23 points after bonuses were added.</p><p>&ldquo;It was a lot of fun to be able to work with my lab on a single project and see it come together,&rdquo; said Ph.D. student <strong>Weiyu Liu</strong>, another member of the team. &ldquo;It was a really great opportunity to try out some of the code we had written and also to see others&rsquo; code and other research projects.&rdquo;</p><p>Already, the team has turned the experience into a submitted paper, which they hope to have accepted and published in the future. The focus is on mobile manipulation, which is a particularly challenging aspect of robotics because of what Chernova calls &ldquo;an explosion of uncertainty.&rdquo;</p><p>&ldquo;Manipulation in many ways is a solved problem,&rdquo; she said. &ldquo;Navigation in many ways is a solved problem. 
When you put those two solved problems together, though &ndash; when you take the wheels and put the arm on it &ndash; it becomes a much more challenging problem, one our research will continue to tackle with the aid of Fetch in the coming years.&rdquo;</p><p>Members of the team included: Chernova, Kent, Liu, <strong>Siddhartha Banerjee</strong>, <strong>Angel Daruna</strong>, <strong>Jonathan Balloch</strong>, <strong>Abhinav Jain</strong>, <strong>Akshay Krishnan</strong>, <strong>Muhammad Asif Rana</strong>, <strong>Harish Ravichandar</strong>, <strong>Binit Shah</strong>, and <strong>Nithin Shrivatsav</strong>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1561751714</created>  <gmt_created>2019-06-28 19:55:14</gmt_created>  <changed>1561751714</changed>  <gmt_changed>2019-06-28 19:55:14</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A team of Georgia Tech master’s and Ph.D. students, advised by Sonia Chernova, won the challenge by successfully assembling three kits with its robot in 39 minutes.]]></teaser>  <type>news</type>  <sentence><![CDATA[A team of Georgia Tech master’s and Ph.D. 
students, advised by Sonia Chernova, won the challenge by successfully assembling three kits with its robot in 39 minutes.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-06-28T00:00:00-04:00</dateline>  <iso_dateline>2019-06-28T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-06-28 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>622858</item>      </media>  <hg_media>          <item>          <nid>622858</nid>          <type>image</type>          <title><![CDATA[Georgia Tech FetchIt! Win]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Fetch.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Fetch.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Fetch.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Fetch.jpeg?itok=fyRhAYhh]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[The Georgia Tech RAIL lab celebrates a win in the FetchIt Mobile Manipulation Challenge at ICRA]]></image_alt>                    <created>1561750984</created>          <gmt_created>2019-06-28 19:43:04</gmt_created>          <changed>1561750984</changed>          <gmt_changed>2019-06-28 19:43:04</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://rail.gatech.edu]]></url>        <title><![CDATA[Robot Autonomy and Interactive Learning Lab]]></title>      </link>          <link>        
<url><![CDATA[https://www.ic.gatech.edu/content/robotics-computational-perception]]></url>        <title><![CDATA[Robotics and Computational Perception Research at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181602"><![CDATA[ic-robotics]]></keyword>          <keyword tid="181216"><![CDATA[cc-research]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="622523">  <title><![CDATA[IC Researchers Awarded Outstanding Study Design Paper Award at ICWSM-19]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A team of researchers that included individuals from Georgia Tech&rsquo;s <a href="http://ic.gatech.edu/">School of Interactive Computing</a> was awarded the Outstanding Study Design Paper award at the <a href="https://www.icwsm.org/2019/index.php">International AAAI Conference on Web and Social Media</a> (ICWSM 2019) this week in Munich, Germany.</p><p>The paper, titled <em><a href="http://www.munmund.net/pubs/ICWSM19_DrugEffects.pdf">A Social Media Study on the Effects of Psychiatric Medication Use</a></em>, was presented by IC Ph.D. student <strong>Koustuv Saha</strong> and included fellow IC Ph.D. student <strong>Benjamin Sugar</strong> and IC Assistant Professor <strong>Munmun De Choudhury</strong>. 
Collaborators from Microsoft Research, Harvard Medical School, and New York University-Shanghai were also involved with the research.</p><p>The research addresses a challenge in understanding the effects of psychiatric medications during mental health treatment. While clinical trials help to evaluate effects of the medication, there are challenges in generalizing trials to broader populations. Using a list of common approved and regulated psychiatric medications and a Twitter dataset of 300 million posts from 30,000 individuals, researchers developed machine learning models to first assess effects relating to mood, cognition, depression, anxiety, psychosis, and suicidal ideation and then, based on a score, observe how use of specific drugs is associated with characteristic changes in an individual&rsquo;s psychopathology.</p><p>The goal of this research is a deeper understanding of these effects and how to situate them within treatment outcomes.</p><p>ICWSM is a forum for researchers from multiple disciplines to come together to share knowledge, discuss ideas, exchange information, and learn about cutting-edge research in diverse fields with the common theme of online social media. This includes social theories, as well as computational algorithms for analyzing social media. In its 13<sup>th</sup> year of existence, the conference has become one of the premier venues for computational social science.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1560541875</created>  <gmt_created>2019-06-14 19:51:15</gmt_created>  <changed>1560541875</changed>  <gmt_changed>2019-06-14 19:51:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The paper, titled A Social Media Study on the Effects of Psychiatric Medication Use, was presented by IC Ph.D. student Koustuv Saha and included fellow IC Ph.D. 
student Benjamin Sugar and IC Assistant Professor Munmun De Choudhury.]]></teaser>  <type>news</type>  <sentence><![CDATA[The paper, titled A Social Media Study on the Effects of Psychiatric Medication Use, was presented by IC Ph.D. student Koustuv Saha and included fellow IC Ph.D. student Benjamin Sugar and IC Assistant Professor Munmun De Choudhury.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-06-14T00:00:00-04:00</dateline>  <iso_dateline>2019-06-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-06-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>622522</item>      </media>  <hg_media>          <item>          <nid>622522</nid>          <type>image</type>          <title><![CDATA[Koustuv Saha ICWSM]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2019-06-14 at 3.46.50 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202019-06-14%20at%203.46.50%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202019-06-14%20at%203.46.50%20PM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202019-06-14%2520at%25203.46.50%2520PM.png?itok=HfYfL-Ge]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Koustuv Saha presents a paper at ICWSM]]></image_alt>                    <created>1560541641</created>          <gmt_created>2019-06-14 19:47:21</gmt_created>          
<changed>1560541641</changed>          <gmt_changed>2019-06-14 19:47:21</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181216"><![CDATA[cc-research]]></keyword>          <keyword tid="181214"><![CDATA[ic-hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="622225">  <title><![CDATA[ICML 2019: Georgia Tech Researchers Present at Global Machine Learning Conference]]></title>  <uid>34773</uid>  <body><![CDATA[<p>This year, Long Beach, Calif. will host the <a href="https://icml.cc/Conferences/2019">Thirty-Sixth International Conference on Machine Learning (ICML)</a>. The conference is the premier gathering for artificial intelligence (AI) professionals who specialize in the branch of AI known as machine learning.</p><p>Georgia Tech researchers will present 18 research papers at this year&rsquo;s event. 
The papers touch on a variety of aspects of machine learning including <a href="https://mlatgt.blog/2019/05/29/mixing-frank-wolfe-and-gradient-descent/?utm_source=mailchimp&amp;utm_campaign=030010e6e1f0&amp;utm_medium=page">blended unconditional gradients</a>, <a href="https://scs.gatech.edu/news/622219/new-machine-learning-algorithms-keep-group-data-diverse">clustering with fairness constraints</a>, and <a href="http://ml.gatech.edu/hg/item/622215">observational agents</a>.</p><p><a href="https://ic.gatech.edu/">School of Interactive Computing</a> Assistant Professor <strong>Byron Boots</strong> is a 2019 area chair. Boots is also the co-organizer of the <em>Real-World Sequential Decision Making: Reinforcement Learning and Beyond</em> workshop and a guest speaker at the <em>Generative Modeling and Model-Based Reasoning for Robotics and AI</em> workshop.</p><p>&ldquo;ICML is globally renowned as one of the best conferences for machine learning research. Year after year, cutting-edge research is presented and published, and it&rsquo;s a sign of ML@GT&rsquo;s strength that Georgia Tech is consistently a top contributor in the accepted papers,&rdquo; said <strong>Justin Romberg</strong>, <a href="https://www.ece.gatech.edu/">School of Electrical and Computer Engineering</a> Schlumberger Professor and associate director of the <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a>.</p><p>Hosted June 9 through 15 at the Long Beach Convention and Entertainment Center, ICML is one of the fastest-growing conferences in the world. 
It will bring together over 8,000 participants including entrepreneurs, engineers, graduate students, postdocs, and academic and industrial researchers.</p><p>Along with Georgia Tech papers, other accepted papers will include work in closely related fields like statistics, data science, and artificial intelligence, and important application areas like speech recognition, robotics, and machine vision.</p><p>For a full list of Georgia Tech&rsquo;s research papers and more information about Georgia Tech&rsquo;s presence at the conference, please <a href="http://bit.ly/ICML2019">click here.</a></p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1559670447</created>  <gmt_created>2019-06-04 17:47:27</gmt_created>  <changed>1559670447</changed>  <gmt_changed>2019-06-04 17:47:27</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech will present 18 papers at the International Conference on Machine Learning.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech will present 18 papers at the International Conference on Machine Learning.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-06-04T00:00:00-04:00</dateline>  <iso_dateline>2019-06-04T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-06-04 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[allie.mcfadden@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>622050</item>      </media>  <hg_media>          <item>          <nid>622050</nid>          <type>image</type>          <title><![CDATA[ICML 2019]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[icml2019.jpg]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/icml2019.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/icml2019.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/icml2019.jpg?itok=DBaIfKTu]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[ICML 2019]]></image_alt>                    <created>1559143704</created>          <gmt_created>2019-05-29 15:28:24</gmt_created>          <changed>1559143704</changed>          <gmt_changed>2019-05-29 15:28:24</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="621531">  <title><![CDATA[Two Georgia Tech Alums Receive Prestigious Awards at CHI 2019]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Two former <a href="http://www.gatech.edu">Georgia Tech</a> students were recognized by the CHI community this week in Glasgow, U.K., one for her overall contributions in human-computer interaction at the conference and another for her long history of promoting social action within the community.</p><p><strong>Jennifer Mankoff</strong>, who in 2001 became one of the first of Professor <strong>Gregory Abowd</strong>&rsquo;s 30 Ph.D. graduates, was inducted into the prestigious CHI Academy this week, and <strong>Gillian Hayes</strong> (2007), also advised by Abowd, was
awarded the Social Impact award.</p><p>Mankoff, who was Abowd&rsquo;s third-ever Ph.D. graduate, joined an exclusive community that includes eight Georgia Tech faculty members. Most recently, Professor <strong>Amy Bruckman</strong> was <a href="https://www.cc.gatech.edu/news/602715/professor-amy-bruckman-joins-seven-other-ic-faculty-chi-academy">inducted a year ago</a>.</p><p>Mankoff acknowledged the mentors, like Abowd, who helped her along the way to that opportunity. Abowd provided the introduction for Mankoff at the awards ceremony for the CHI Academy. She credited her research community and the CHI community for giving her the freedom to pursue the kind of research that she was passionate about.</p><p>&ldquo;The openness to let people be able to work on whatever they&rsquo;re passionate about and see that has value is something that&rsquo;s been important to me over the years,&rdquo; Mankoff said. &ldquo;More than once, I&rsquo;ve shifted to another area that I wasn&rsquo;t working in before and maybe a lot of others weren&rsquo;t either. It&rsquo;s a sign of how open the community is.&rdquo;</p><p>Seated at a reunion&nbsp;party for the Abowd &ldquo;family&rdquo; &ndash; academics who were part of a lineage that began as doctoral students in Abowd&rsquo;s lab &ndash; she noted the importance of having a vibrant community like that.</p><p>&ldquo;We were very lucky to be there at the beginning, helping to form his group and to learn from him and all the energy he brings to this group,&rdquo; she said. &ldquo;It&rsquo;s one of the strongest networks I have at CHI.&rdquo;</p><p>Hayes received her Social Impact award just 12 years after Abowd received his own in 2007.
She said she was especially proud to follow in the footsteps of her advisor.</p><p>&ldquo;The way he has instilled in us an ethos of being able to give back, being able to bake in community outcomes with our research outcomes and define good, interesting research problems that also really solve real-world problems, and work in partnership with communities,&rdquo; Hayes said.</p><p>Hayes, whose 30-minute talk at the conference focused on ways in which the community needed to do better in thinking about issues of accessibility, access, racial and gender inequities, and much more, said she thought the CHI community was leading the way as a standard-bearer for diversity, inclusion, and service.</p><p>&ldquo;But we still have a long way to go,&rdquo; she said.</p><p>Her talk, she hoped, would be a call to action to the rest of the community.</p><p>&ldquo;This is our time, and we can control our destinies and we can create truly community-driven innovation,&rdquo; she said.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1557352984</created>  <gmt_created>2019-05-08 22:03:04</gmt_created>  <changed>1557352984</changed>  <gmt_changed>2019-05-08 22:03:04</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Jennifer Mankoff, who in 2001 became one of the first of Professor Gregory Abowd’s 30 Ph.D. graduates, was inducted into the prestigious CHI Academy this week, and Gillian Hayes (2007), also advised by Abowd, was awarded the Social Impact award.]]></teaser>  <type>news</type>  <sentence><![CDATA[Jennifer Mankoff, who in 2001 became one of the first of Professor Gregory Abowd’s 30 Ph.D. graduates, was inducted into the prestigious CHI Academy this week, and Gillian Hayes (2007), also advised by Abowd, was awarded the Social Impact award.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-05-08T00:00:00-04:00</dateline>  <iso_dateline>2019-05-08T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-05-08
00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>621530</item>      </media>  <hg_media>          <item>          <nid>621530</nid>          <type>image</type>          <title><![CDATA[CHI Awards 2019]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Awards CHI.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Awards%20CHI.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Awards%20CHI.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Awards%2520CHI.jpg?itok=QPWWQnfJ]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Jennifer Mankoff, Gregory Abowd, and Gillian Hayes smiling]]></image_alt>                    <created>1557352606</created>          <gmt_created>2019-05-08 21:56:46</gmt_created>          <changed>1557352606</changed>          <gmt_changed>2019-05-08 21:56:46</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term 
tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="621184">  <title><![CDATA[IC Researchers Seek to Improve Treatment for Schizophrenia Under New $2.7 Million NIMH Grant]]></title>  <uid>33939</uid>  <body><![CDATA[<p>For the past few years, Georgia Tech School of Interactive Computing Assistant Professor <strong>Munmun De Choudhury</strong> has pursued research that gathers insights about mental health through digital traces individuals leave behind on social media.</p><p>Under a new $2.7 million grant from the <a href="https://www.nimh.nih.gov/index.shtml">National Institute of Mental Health</a> (NIMH), she and a team of researchers at <a href="https://www.northwell.edu/">Northwell Health</a> will apply that new information in a clinical setting in hopes of improving treatment.</p><p>&ldquo;In our past research, we have gained a number of new insights, but I see an opportunity to influence real world people and outcomes,&rdquo; De Choudhury said. &ldquo;Going beyond just academic and empirical findings, how do you take that information and make a difference in people&rsquo;s lives? What research challenges do such translations pose to the computing domain?&rdquo;</p><p>This grant offers the researchers that opportunity. It will be one of the first in which computing researchers and leading experts in psychiatry research are coming together to influence how treatment can be delivered by harnessing patient-contributed data.
The grant is funded through a new NIMH program designed to inform and support delivery of high-quality mental health services.</p><p>The idea is to build machine learning algorithms based on data that mental health patients voluntarily share with the research team, including both clinicians at Northwell Health and researchers in De Choudhury&rsquo;s lab at <a href="http://www.gatech.edu">Georgia Tech</a>. With these algorithms, they hope to identify risk markers and symptom changes that appear in social media posts and to track changes and trends in an individual over time.</p><p>By combing through a number of different social media sources, primarily Facebook and Twitter, they will look at the use of words or patterns of words an individual uses. In mental illnesses like schizophrenia, the main population they will explore, that is important information to know.</p><p>&ldquo;If they are feeling delusional or experiencing paranoia, what is it that they are saying,&rdquo; De Choudhury said. &ldquo;We can look at social interactions and see whether they might be feeling isolation, which can have a negative impact on mental health. Nuances of language styles, like the way people use articles or pronouns, can say a lot about their psychological state, as well, which has been shown in our and co-investigator (University of Texas Professor) <strong>Jamie Pennebaker</strong>&rsquo;s prior work.&rdquo;</p><p>The population they will focus on comprises younger individuals, largely in their teens and early 20s, who have had a first episode of schizophrenia. Most will have only recently been diagnosed and admitted to a specialized treatment facility directed by the collaborators on the project in New York. The goal is to use the information gathered in their digital traces to identify risk markers that signal a potential relapse.</p><p>&ldquo;Schizophrenia is a challenging and debilitating illness,&rdquo; De Choudhury said.
&ldquo;Even people under treatment have a high chance of relapse with negative outcomes on quality of life, productivity, and functioning. Symptoms often come back, and most mental illnesses are only managed, not cured.&rdquo;</p><p>Better management means that the treatment is timely and highly adaptable to the patient&rsquo;s needs, De Choudhury said. Unfortunately, that&rsquo;s a challenge because, in clinical settings, there is very little knowledge about a patient&rsquo;s day-to-day life. Unlike a disease such as cancer, which has an objective screening that can identify its presence and severity, mental illnesses are based on what is reported. These self-reports are often skewed, based on what a patient wants to tell or remembers.</p><p>&ldquo;In some ways, the treatment paradigm right now is not very evidence based,&rdquo; she said. &ldquo;But to prevent relapse, it&rsquo;s important that we try to be as precise and proactive as possible.&rdquo;</p><p>The project will span four years and began on April 15.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1556739681</created>  <gmt_created>2019-05-01 19:41:21</gmt_created>  <changed>1556739681</changed>  <gmt_changed>2019-05-01 19:41:21</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[This grant offers researchers the opportunity to apply findings of past research to real-world clinical settings.]]></teaser>  <type>news</type>  <sentence><![CDATA[This grant offers researchers the opportunity to apply findings of past research to real-world clinical settings.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-05-01T00:00:00-04:00</dateline>  <iso_dateline>2019-05-01T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-05-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a 
href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>587685</item>      </media>  <hg_media>          <item>          <nid>587685</nid>          <type>image</type>          <title><![CDATA[Munmun De Choudhury]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[munmun portrait_horz.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/munmun%20portrait_horz.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/munmun%20portrait_horz.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/munmun%2520portrait_horz.jpg?itok=wrtogdb-]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech Assistant Professor Munmun De Choudhury]]></image_alt>                    <created>1487686001</created>          <gmt_created>2017-02-21 14:06:41</gmt_created>          <changed>1487783642</changed>          <gmt_changed>2017-02-22 17:14:02</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/podcasts/ep-3-social-media-and-mental-health]]></url>        <title><![CDATA[The Interaction Hour podcast: Social Media and Mental Health]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword 
tid="181214"><![CDATA[ic-hcc]]></keyword>          <keyword tid="181215"><![CDATA[ic-social-computing]]></keyword>          <keyword tid="181216"><![CDATA[cc-research]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="621151">  <title><![CDATA[IC’s Caitlyn Seim to Serve as Spring Ph.D. Commencement Speaker]]></title>  <uid>33939</uid>  <body><![CDATA[<p><a href="http://www.ic.gatech.edu/">School of Interactive Computing</a> Ph.D. student <strong>Caitlyn Seim</strong> will serve as commencement speaker for the Georgia Tech Ph.D. graduation ceremony on May 3.</p><p>Seim, who is advised by IC Professor <strong>Thad Starner</strong>, was chosen by a committee of leaders from across campus, including the Office of the Dean of Students, various faculty, and commencement officials. The process included an audition of a speech written by Seim.</p><p>Having recently defended her dissertation for her degree in Human-Centered Computing, Seim said that she is honored by her selection and opportunity to share the stage with Georgia Tech President <strong>Bud Peterson</strong> and Vice Provost for Graduate Education and Faculty Affairs <strong>Bonnie Ferri</strong>.</p><p>&ldquo;I am so thrilled to represent the graduating class, and I can&rsquo;t wait to share my message about the importance of research,&rdquo; Seim said. &ldquo;I love Georgia Tech so much. 
After all my time here, I still enjoy it as if it were my first day on campus.&rdquo;</p><p>Seim, whose research in wearable computing and passive haptic rehabilitation has been covered extensively by external media, said that in her speech she hopes to help graduates think about a recent realization that she had.</p><p>&ldquo;That is the significant role we have in society&rsquo;s progress,&rdquo; she said. &ldquo;It&rsquo;s about the formation of knowledge and how Ph.D. students are uniquely trained to evaluate fact and expand what society can achieve. My training in the Human-Centered Computing program actually helped me to begin recognizing this by introducing me to the concept of epistemology.&rdquo;</p><p>Looking back, Seim said she will remember Georgia Tech for its unique student body and beautiful campus.</p><p>&ldquo;For me, I have to put special emphasis on the academic community,&rdquo; she said. &ldquo;The faculty made learning a great experience, and as a graduate student I felt like I was really part of a community. The student researchers who I mentor continue to impress me and consistently show curiosity, respect, and dedication. It has been a pleasure working with everyone.&rdquo;</p><p>The <a href="http://commencement.gatech.edu/schedule">Ph.D. commencement ceremony</a> will take place from 9-10:30 a.m. Friday, May 3, at McCamish Pavilion. Ferri will also speak.
No tickets are required for the event.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1556672878</created>  <gmt_created>2019-05-01 01:07:58</gmt_created>  <changed>1556672878</changed>  <gmt_changed>2019-05-01 01:07:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Seim, who is advised by IC Professor Thad Starner, was chosen by a committee of leaders from across campus, including the Office of the Dean of Students, various faculty, and commencement officials.]]></teaser>  <type>news</type>  <sentence><![CDATA[Seim, who is advised by IC Professor Thad Starner, was chosen by a committee of leaders from across campus, including the Office of the Dean of Students, various faculty, and commencement officials.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-04-30T00:00:00-04:00</dateline>  <iso_dateline>2019-04-30T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-04-30 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>611755</item>      </media>  <hg_media>          <item>          <nid>611755</nid>          <type>image</type>          <title><![CDATA[Caitlyn Seim - PHL]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Seim Banner.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Seim%20Banner.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Seim%20Banner.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Seim%2520Banner.jpg?itok=EAolAA9Q]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Caitlyn Seim showing haptic glove]]></image_alt>                    <created>1537470856</created>          <gmt_created>2018-09-20 19:14:16</gmt_created>          <changed>1537470856</changed>          <gmt_changed>2018-09-20 19:14:16</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="181210"><![CDATA[ic-ubicomp-and-wearable]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="620987">  <title><![CDATA[Georgia Tech's Child Study Lab Sees Computer Science as New 'Microscope' for Autism Research]]></title>  <uid>33939</uid>  <body><![CDATA[<p>What if behavior could be mapped and analyzed in much the same way an MRI provides images of the brain or a microscope an up-close look at cells? 
Both proved to be paradigm shifts in detecting developmental anomalies or diseases like cancer, and <a href="http://www.gatech.edu">Georgia Tech</a> research at the intersection of computing and early childhood behavior could do the same for autism.</p><p>Building upon nearly a decade of research, <a href="http://www.childstudylab.gatech.edu/">Georgia Tech&rsquo;s Child Study Lab</a>, which was established in 2010 with a $10 million grant from the <a href="https://www.nsf.gov/">National Science Foundation</a>, and collaborators at <a href="https://weill.cornell.edu/">Weill Cornell Medical College</a> were awarded a $1.7 million grant last year from the <a href="https://www.nih.gov/">National Institutes of Health</a>. The grant will help researchers collect new data, using the datasets created over the past decade to develop automated tools that better and more efficiently characterize behaviors that are present and important in typical child development but are often considered to be core, early-emerging markers of autism spectrum disorder (ASD) when absent.</p><p>[VIDEO::https://youtu.be/jVldx01ENHM::aVideoStyle]</p><p><a href="https://www.youtube.com/watch?v=jVldx01ENHM" target="_blank">[RELATED:&nbsp;Using Computer Science to Augment Autism Research at Georgia Tech (VIDEO)]</a></p><p>Psychologists have long understood that there are links between early childhood development and the likelihood of typical language and behavior outcomes throughout life. What they weren&rsquo;t able to do, however, was to study childhood behavior at a granular level similar to that of a microscope.
Given the importance of early detection to inform proper interventions, the tedium of human coding and analysis poses a significant challenge.</p><p>&ldquo;That process is manual and driven by humans specifying what happens in a frame of a video,&rdquo; said <strong>Jim Rehg</strong>, a professor in the <a href="http://www.ic.gatech.edu">School of Interactive Computing</a> and the principal investigator on the NIH award. &ldquo;It takes hours upon hours of data collection and analysis.&rdquo;</p><p>Computing could alter that reality, and this work being done at Georgia Tech is a significant reason why.</p><p>&ldquo;Given enough video, we can model the details of behavior,&rdquo; Rehg said. &ldquo;Deep learning, married with the ability to collect the data, allows us to build out how our algorithms work in much the same way computer science has been applied to genetics and imaging to make those more powerful and scalable.&rdquo;</p><p>That has long been the mission of the Child Study Lab, and the latest grant will continue to move the needle forward in autism research at Georgia Tech and beyond. Unlike many other conditions, autism spectrum disorder can&rsquo;t be found by taking a blood test or viewing images of the brain. Doctors must analyze behavior through developmental screenings and comprehensive diagnostic evaluations.</p><p>During screenings, doctors might talk or play with a child to see how they learn, speak, or behave. Do they exhibit typical communicative skills like joint attention, in which two people use gestures or gaze to share their attention with respect to other objects or events? 
The skills a child demonstrates in these areas are known to be strong indicators of how they will develop throughout childhood and adolescence.</p><p>The challenge here is that, given how important it is to detect ASD at an early age and thus tailor interventions and education to meet the child&rsquo;s specific needs, the manual labor that comes with these screenings and evaluations makes it far less efficient than detection of other developmental challenges. Autism spectrum disorder affects one in 59 children in the United States alone, and not all who are screened ultimately receive a diagnosis.</p><p>The need for objective, automated measurements of behavior is clear, and Rehg &ndash; along with IC Research Scientist <strong>Agata Rozga</strong>, Child Study Lab coordinator <strong>Audrey Southerland</strong>, collaborators at Weill Cornell, and more &ndash; is taking steps in that direction.</p><p>&ldquo;For us, the goal is to use these computational capabilities to extract the important key moments and information to give clinicians or psychologists the ability to more easily examine a child&rsquo;s behavior,&rdquo; Southerland said. &ldquo;If we can provide additional details through technology about the quality or coordination of important social and communicative behaviors, we can hopefully provide behavioral experts with the capability of exploring these behaviors in much greater detail than currently possible.&rdquo;</p><p>The first grant from the NSF funded the creation of the Child Study Lab, which has over the years developed an extensive dataset of behaviors in typically developing children. At the time, it was the first large-scale investment in technology that would assist in modeling and sensing behaviors that underlie developmental conditions like autism spectrum disorder.
Additional grants have assisted in studies that use computer vision to measure and detect gaze shifts or wearable technology and machine learning to detect and differentiate between types of problem behaviors.</p><p>The NIH grant brings all the past research together to compare what the sensory data says in relation to human coding, and how that might ultimately serve to develop reliable, objective, automated tools for measuring early, nonverbal communication behaviors.</p><p>&ldquo;The important thing is for us to make sure that whatever we produce is good enough so that we can actually push it out into the field to people who are specializing in this area,&rdquo; Southerland said. &ldquo;We never want to get rid of the human expert in this field, but we want to build technology they can use to augment and streamline their analysis of behavior.&rdquo;</p><p>In addition to the National Institutes of Health and the National Science Foundation, the Child Study Lab has also received funding from the <a href="https://www.simonsfoundation.org/">Simons Foundation</a> and has partnered with external entities like the <a href="https://www.marcus.org/">Marcus Autism Center</a>.</p><p>Southerland and the Child Study Lab are actively seeking families with young children to participate in this study to further develop their automated tools. 
Anyone interested in playing a part in this exciting work can visit the lab&rsquo;s website to learn more.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1556494599</created>  <gmt_created>2019-04-28 23:36:39</gmt_created>  <changed>1556494599</changed>  <gmt_changed>2019-04-28 23:36:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech’s Child Study Lab, which was established in 2010 by a $10 million grant from the National Science Foundation, and collaborators at Weill Cornell Medical College were awarded last year with a $1.7 million grant from the NIH.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech’s Child Study Lab, which was established in 2010 by a $10 million grant from the National Science Foundation, and collaborators at Weill Cornell Medical College were awarded last year with a $1.7 million grant from the NIH.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-04-28T00:00:00-04:00</dateline>  <iso_dateline>2019-04-28T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-04-28 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="http://david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>620985</item>      </media>  <hg_media>          <item>          <nid>620985</nid>          <type>image</type>          <title><![CDATA[Autism and Computing Research at Georgia Tech]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Autism and Computing rotator EDIT2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Autism%20and%20Computing%20rotator%20EDIT2.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Autism%20and%20Computing%20rotator%20EDIT2.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Autism%2520and%2520Computing%2520rotator%2520EDIT2.jpg?itok=vfXPboIw]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Creating the Next in Autism and Computing Research at Georgia Tech's Child Study Lab]]></image_alt>                    <created>1556487692</created>          <gmt_created>2019-04-28 21:41:32</gmt_created>          <changed>1556487692</changed>          <gmt_changed>2019-04-28 21:41:32</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.childstudylab.gatech.edu/]]></url>        <title><![CDATA[Child Study Lab at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="620928">  <title><![CDATA[IC's Miranda Parker Uncovering Factors that Lead to CS Programs in Georgia]]></title>  <uid>33939</uid>  <body><![CDATA[<h3>Like the majority of research in IC, it comes down to the people</h3><p><strong>Miranda Parker</strong> was early on in her time as a Ph.D. 
student in the <a href="http://www.ic.gatech.edu">School of Interactive Computing</a> (IC) when she began her first quantitative study. She wanted to see whether she could model the variables that influence whether a school would or would not adopt computer science (CS) as a class for its students.</p><p>Prior to the study, the hypothesis was that variables such as median income, enrollment numbers, or the share of students who qualify for free and reduced-price lunch programs could indicate whether or not computer science was implemented. Lower income levels, for example, might correlate with schools that just didn&rsquo;t have the resources to deploy such programs.</p><p>Somewhat to Parker&rsquo;s surprise, the short answer to that question was &ndash; no. No, a higher median income didn&rsquo;t mean more computer science; no, schools with lower free and reduced-price lunch numbers didn&rsquo;t teach computer science at a higher rate; no, higher enrollment didn&rsquo;t necessarily mean more young students yearning to learn how to code.</p><p>On the surface, that first study might have felt like a failure. If the goal was to prove that income disparity equated to a disparity in who was gaining exposure to a key part of their education, then it may be fair to describe it as such. However, Parker looks back on that study as a key component of what has guided her research at <a href="http://www.gatech.edu">Georgia Tech</a> ever since.</p><p>It wasn&rsquo;t a failure, she said. It just helped open her eyes to some realities she may not have noticed otherwise.</p><p>&ldquo;Part of me wanted my first study to fail because part of me didn&rsquo;t want to be able to say, &lsquo;Oh, yes, these three things mean more computer science,&rsquo;&rdquo; she said. &ldquo;Sure, it&rsquo;s snazzy. It&rsquo;s easy to put on a Facebook post. But it&rsquo;s so much more complicated than that. 
And I&rsquo;m glad that it&rsquo;s more complicated than that.&rdquo;</p><p>Over the years, Parker, who studies <a href="https://www.ic.gatech.edu/academics/human-centered-computing-phd-program">human-centered computing</a> with a focus on computer science education, has gained a deeper understanding of what might influence a public high school in Georgia to offer computer science education. None of the demographic variables above is among those influences. What has shown some correlation, she said, is a bit more complex.</p><p>&ldquo;If a school had computer science in 2016, the correlation was that it also had computer science in 2015, 2014, and 2013,&rdquo; she said.</p><p>Okay, but how did it get started in 2013? That&rsquo;s part of the question her research is trying to uncover.</p><p>&ldquo;That&rsquo;s an endless cycle,&rdquo; she explained. &ldquo;You had it before, now you still have it. But how did you get it to begin with?&rdquo;</p><p>One thing she&rsquo;s learned, which can be said for a majority of research in IC, is that it comes down to the people. Who is involved with a school, and what connections do they have to a particular subject? If one of those people has worked in CS or is passionate about bringing it to the school, the results indicate the school is much more likely to offer the subject.</p><p>Makes sense, right?</p><p>&ldquo;If a school has someone who can teach computer science and there are parents saying we need to teach computer science, then whether it&rsquo;s rural or urban or high or low income, it doesn&rsquo;t matter,&rdquo; Parker said. &ldquo;They will have computer science. But if there&rsquo;s no one there to push them, it&rsquo;s much less likely.&rdquo;</p><p>It&rsquo;s not just a person, either. 
Organizations like Georgia Tech&rsquo;s <a href="http://constellations.gatech.edu/">Constellations Center for Equity in Computing</a> and the <a href="https://www.ceismc.gatech.edu/">Center for Education Integrating Science, Math, and Computing</a> are also championing K-12 CS educational opportunities.</p><p>But, Parker said, being successful is a bit more complicated than just serving CS up to the masses in communities that are unfamiliar with these and other organizations.</p><p>&ldquo;Computer science isn&rsquo;t the end-all, be-all,&rdquo; she said. &ldquo;If a school is in a more agriculture-based county, that may benefit the school more than a heavy computer science program would. It&rsquo;s about finding how computer science can most benefit students in different ways for different areas.&rdquo;</p><p>The most encouraging thing about that research, Parker said, was that the failure of her original study showed her one important piece of information.</p><p>&ldquo;You don&rsquo;t need high income to have computer science,&rdquo; she said. &ldquo;It really can be for everyone. That&rsquo;s an important piece of information to know.&rdquo;</p><p>Parker is aiming to finish her Ph.D. work in the fall and will decide between pursuing a faculty position, which she is leaning toward now, or other opportunities that may present themselves down the road. Former Georgia Tech Professor <strong>Mark Guzdial</strong>, now a faculty member at the University of Michigan, is Parker&rsquo;s advisor.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1556229916</created>  <gmt_created>2019-04-25 22:05:16</gmt_created>  <changed>1556229916</changed>  <gmt_changed>2019-04-25 22:05:16</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Miranda Parker is investigating the qualities in high schools that lead to having a CS program in Georgia. 
One thing she’s learned, which can be said for a majority of research in IC, is that it comes down to the people.]]></teaser>  <type>news</type>  <sentence><![CDATA[Miranda Parker is investigating the qualities in high schools that lead to having a CS program in Georgia. One thing she’s learned, which can be said for a majority of research in IC, is that it comes down to the people.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-04-25T00:00:00-04:00</dateline>  <iso_dateline>2019-04-25T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-04-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>620927</item>      </media>  <hg_media>          <item>          <nid>620927</nid>          <type>image</type>          <title><![CDATA[Miranda Parker]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Parker rotator.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Parker%20rotator.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Parker%20rotator.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Parker%2520rotator.jpg?itok=1Bc2dfb9]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Miranda Parker stands by the street]]></image_alt>                    <created>1556228807</created>          <gmt_created>2019-04-25 21:46:47</gmt_created>          <changed>1556228807</changed>          <gmt_changed>2019-04-25 21:46:47</gmt_changed>      
</item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/academics/human-centered-computing-phd-program]]></url>        <title><![CDATA[Human-Centered Computing at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="620364">  <title><![CDATA[People May Be Able to Find Images on a Computer Based Solely on Their Eye Movements]]></title>  <uid>34773</uid>  <body><![CDATA[<p>When humans try to recall images from memory, they involuntarily move their eyes in a pattern that is similar to when they are actually looking at the image.</p><p><strong>James Hays</strong>, an associate professor in the <a href="https://www.ic.gatech.edu/">School of Interactive Computing</a> and the <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech</a>, and researchers from TU Berlin and Universit&auml;t Regensburg, are looking at how these patterns, known as gaze patterns, can be used to retrieve images from memory so that it&rsquo;s easier to find that same image &ndash; like an adorable dog photo &ndash; stashed away in the digital cloud.</p><p>Through a controlled lab experiment and a real-world scenario, Hays and his co-authors have developed a matching technique using machine learning to help computers understand what image someone is thinking of, and accurately retrieve it from a 
computer folder &ndash; based solely on eye movements.</p><p>Using eye-tracking software in the lab, the researchers recorded the eye movements of 30 participants as they looked at 100 different indoor and outdoor images, ranging from picturesque lighthouse scenes to cozy living rooms. Participants were then asked to look at a blank screen and recall any of the 100 images they just saw.</p><p>The researchers also staged a more realistic scenario by putting together a mock museum with 20 posters of various sizes and orientations spread throughout the &ldquo;museum.&rdquo; They outfitted each participant with a headset featuring a <a href="https://pupil-labs.com/pupil/">Pupil mobile eye tracker</a> with two eye cameras and one front-facing camera. Participants were asked to walk around the museum and look at all of the images, taking however long they liked, and in whatever order they preferred. They took anywhere from a few seconds to over a minute looking at each poster.</p><p>After looking at all of the images, participants were asked to look at a blank whiteboard and recall as many of their favorite images as possible, in any order. Participants remembered between 5 and 10 of the total 20 poster images.</p><p>The results from both experiments indicated that the gaze patterns of people looking at a photograph contain a unique signature that computers can use to accurately determine the corresponding photo.</p><p>Using the data collected from the experiments, researchers created spatial histograms, or heat maps, that could be analyzed by their new machine learning technique to determine which photo someone was thinking about. The team 
also used a <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network">Convolutional Neural Network (CNN)</a> to look at the 2,700 collected heat maps.</p><p>&ldquo;The ability to retrieve images using eye movements would be beneficial to those who are disabled or unable to search for images using their hands or voice,&rdquo; said Hays. &ldquo;Also, wearable technology is a huge industry right now, and we believe that tracking motion with the eyes would be a natural by-product of that boom.&rdquo;</p><p>In Hays&rsquo; previous research, <a href="https://arxiv.org/abs/1801.02753">SketchyGAN</a>, people are able to draw (rather than type) what they are looking for to get image search results. But, if images are mislabeled or people can&rsquo;t draw that well, search results are not useful. Other attempts at image retrieval have included various types of brain scans, but those are often too expensive and impractical for everyday use.</p><p>While this new research may prove helpful to people, it does not come without limitations, researchers note. The scalability of the model in part depends on image content and how many images are in the database. The more images the database holds, the more likely it is that several different photos will produce extremely similar gaze patterns.</p><p>One proposed workaround to this potential issue is asking people to make more extensive eye movements than they normally would. At the moment, participants are not asked to do anything more intentional or out of the norm when looking at the images. Researchers think that by putting a small amount of effort back on the user, this would help the computer find the correct image.</p><p>Another foreseen problem is working with people&rsquo;s memories. As people&rsquo;s memories grow weaker with time or age, it will be harder to get a crisp gaze pattern and accurately return the right image. 
The team plans to explore these potential issues in the future by looking into memory decay and how it affects image retrieval from long-term memory.</p><p>The authors are also looking into combining gaze tracking with a speech interface, as that could be a rich resource for information. No matter which direction they go, the team believes that eye-movement image retrieval is not only possible but also a significant next step to improving human and computer interaction.</p><p>One might even say that before long, people will be able to find that favorite dog photo in the blink of an eye.</p><p>Further details on this approach to image retrieval can be found in the paper, <a href="http://cybertron.cg.tu-berlin.de/xiwang/files/mi.pdf">&ldquo;The Mental Image Revealed by Gaze Tracking,&rdquo;</a> which has been accepted at the ACM Conference on Human Factors in Computing Systems (CHI 2019), May 4-9.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1555080141</created>  <gmt_created>2019-04-12 14:42:21</gmt_created>  <changed>1555102263</changed>  <gmt_changed>2019-04-12 20:51:03</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[What if we could find images on our computer just by tracking our eye movements? ML@GT associate professor James Hays explores this idea in new research that will be presented next month at CHI 2019.]]></teaser>  <type>news</type>  <sentence><![CDATA[What if we could find images on our computer just by tracking our eye movements? 
ML@GT associate professor James Hays explores this idea in new research that will be presented next month at CHI 2019.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-04-12T00:00:00-04:00</dateline>  <iso_dateline>2019-04-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-04-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>620361</item>          <item>620363</item>      </media>  <hg_media>          <item>          <nid>620361</nid>          <type>image</type>          <title><![CDATA[Machine Learning at Georgia Tech and School of Interactive Computing associate professor James Hays collaborated with researchers from TU Berlin and Universität Regensburg to create new eye-tracking software.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2019-04-12 at 10.33.09 AM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202019-04-12%2520at%252010.33.09%2520AM.png?itok=XWLARbAt]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1555079754</created>          <gmt_created>2019-04-12 14:35:54</gmt_created>          <changed>1555102299</changed>          <gmt_changed>2019-04-12 20:51:39</gmt_changed>      </item>    
      <item>          <nid>620363</nid>          <type>image</type>          <title><![CDATA[In one experiment, participants were outfitted with a Pupil mobile eye tracker and asked to observe art in a fake museum.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2019-04-12 at 10.33.34 AM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202019-04-12%2520at%252010.33.34%2520AM.png?itok=UXLQxpbT]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1555079859</created>          <gmt_created>2019-04-12 14:37:39</gmt_created>          <changed>1555079859</changed>          <gmt_changed>2019-04-12 14:37:39</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and 
Security]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="620328">  <title><![CDATA[IC Student Brianna Tomlinson Earns Campus Life Scholarship]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Ph.D. student <strong><a href="https://www.ic.gatech.edu/content/brianna-tomlinson">Brianna Tomlinson</a></strong> was awarded the <a href="https://campusservices.gatech.edu/scholarships">Campus Life Scholarship</a> in recognition of her leadership, scholarship, and service to Georgia Tech. The scholarship provides $5,000 from Campus Services and offers a lunch to honor recipients on April 18.</p><p>Tomlinson is involved in the <a href="http://women.cc.gatech.edu/grad.html">Graduate Women@CC</a> group, helping to organize events. She has been involved in some capacity with the group since she came to Georgia Tech six years ago. The group is a collection of female graduate students that supports the professional success of its members. The members meet for coffee once each month to discuss current projects, and the group also organizes various workshops throughout the year.</p><p>&ldquo;It&rsquo;s great to hear that people think my impact on GradWomen has been a good one, and the work to keep it going has been useful for the greater campus community,&rdquo; Tomlinson said. &ldquo;I&rsquo;m hoping that it will actually help others learn about GradWomen and encourage them to get involved.&rdquo;</p><p>Tomlinson is working toward her Ph.D. in human-centered computing. Her current work focuses on evaluating effective methods for studying engagement, learning, and transfer for multimodal interactive systems. 
This includes collaboration on a grant to develop and evaluate accessible auditory displays for PhET Interactive Simulations, a non-profit open educational resource project at the University of Colorado that creates and hosts explorable explanations.</p><p>She is advised by Professor <strong><a href="https://www.cc.gatech.edu/people/bruce-walker">Bruce Walker</a></strong>, who is jointly appointed in the School of Interactive Computing and the School of Psychology.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1555001831</created>  <gmt_created>2019-04-11 16:57:11</gmt_created>  <changed>1555001831</changed>  <gmt_changed>2019-04-11 16:57:11</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The scholarship provides $5,000 from Campus Services and offers a lunch to honor recipients on April 18.]]></teaser>  <type>news</type>  <sentence><![CDATA[The scholarship provides $5,000 from Campus Services and offers a lunch to honor recipients on April 18.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-04-11T00:00:00-04:00</dateline>  <iso_dateline>2019-04-11T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-04-11 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>620327</item>      </media>  <hg_media>          <item>          <nid>620327</nid>          <type>image</type>          <title><![CDATA[Brianna Tomlinson]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[brianna_tomlinson_headshot.jpg]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/brianna_tomlinson_headshot.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/brianna_tomlinson_headshot.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/brianna_tomlinson_headshot.jpg?itok=n-Y2E5u_]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Brianna Tomlinson]]></image_alt>                    <created>1555001765</created>          <gmt_created>2019-04-11 16:56:05</gmt_created>          <changed>1555001765</changed>          <gmt_changed>2019-04-11 16:56:05</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="620251">  <title><![CDATA[Georgia Tech’s Newest AI System Explains Its Decisions to People in Real-Time to Understand User Preferences]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Institute of Technology researchers, in collaboration with Cornell and University of Kentucky, have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. 
The work is designed to give humans engaging with AI agents or robots confidence that the agent is performing the task correctly and can explain a mistake or errant behavior.</p><p>The agent also uses everyday language that non-experts can understand. The explanations, or &ldquo;rationales&rdquo; as the researchers call them, are designed to be relatable and inspire trust in those who might be in the workplace with AI machines or interact with them in social situations.</p><p>&ldquo;If the power of AI is to be democratized, it needs to be accessible to anyone regardless of their technical abilities,&rdquo; said <strong>Upol Ehsan</strong>, Ph.D. student in the School of Interactive Computing at Georgia Tech and lead researcher.</p><p>&ldquo;As AI pervades all aspects of our lives, there is a distinct need for human-centered AI design that makes black-boxed AI systems explainable to everyday users. Our work takes a formative step toward understanding the role of language-based explanations and how humans perceive them.&rdquo;</p><p>Researchers developed a participant study to determine if their AI agent could offer rationales that mimicked human responses. Spectators watched the AI agent play the videogame Frogger and then ranked three on-screen rationales in order of how well each described the AI&rsquo;s game move.</p><p>Of the three anonymized justifications for each move &ndash; a human-generated response, the AI-agent response, and a randomly generated response &ndash; the participants preferred the human-generated rationales first, but the AI-generated responses were a close second.</p><p>Frogger offered the researchers the chance to train an AI in a &ldquo;sequential decision-making environment,&rdquo; which is a significant research challenge because decisions that the agent has already made influence future decisions. 
Therefore, explaining the chain of reasoning to experts is difficult, and even more so when communicating with non-experts, according to researchers.</p><p>The human spectators understood the goal of Frogger: getting the frog safely home without being hit by moving vehicles or drowned in the river. The simple game mechanics of moving up, down, left, or right allowed the participants to see what the AI was doing, and to evaluate whether the rationales on the screen clearly justified the move.&nbsp;</p><p>The spectators judged the rationales based on:</p><ul><li><strong>Confidence</strong> &ndash; the person is confident in the AI&rsquo;s ability to perform its task&nbsp;</li><li><strong>Human-likeness</strong> &ndash; looks like it was made by a human</li><li><strong>Adequate justification</strong> &ndash; adequately justifies the action taken</li><li><strong>Understandability</strong> &ndash; helps the person understand the AI&rsquo;s behavior</li></ul><p>AI-generated rationales that were ranked higher by participants were those that showed recognition of environmental conditions and adaptability, as well as those that communicated awareness of upcoming dangers and planned for them. Redundant information that just stated the obvious or mischaracterized the environment was found to have a negative impact.</p><p>&ldquo;This project is more about understanding human perceptions and preferences of these AI systems than it is about building new technologies,&rdquo; said Ehsan. &ldquo;At the heart of explainability is sensemaking. 
We are trying to understand that human factor.&rdquo;</p><p>A second related study validated the researchers&rsquo; decision to design their AI agent to be able to offer one of two distinct types of rationales:</p><ul><li><strong>Concise, &ldquo;focused&rdquo; rationales </strong>or</li><li><strong>Holistic, &ldquo;complete picture&rdquo; rationales</strong></li></ul><p>In this second study, participants were offered only AI-generated rationales after watching the AI play Frogger. They were asked to select the answer that they preferred in a scenario where an AI made a mistake or behaved unexpectedly. They did not know the rationales were grouped into the two categories.</p><p>By a 3-to-1 margin, participants favored answers that were classified in the &ldquo;complete picture&rdquo; category. Responses showed that people appreciated the AI thinking about future steps rather than just the current moment, reasoning that an AI focused only on the present might be more prone to making another mistake. People also wanted to know more so that they might directly help the AI fix the errant behavior.</p><p>&ldquo;The situated understanding of the perceptions and preferences of people working with AI machines gives us a powerful set of actionable insights that can help us design better human-centered, rationale-generating, autonomous agents,&rdquo; said <strong>Mark Riedl</strong>, professor of Interactive Computing and lead faculty member on the project.</p><p>A possible future direction for the research is to apply the findings to autonomous agents of various types, such as companion agents, and to examine how they might respond based on the task at hand. 
Researchers will also look at how agents might respond in different scenarios, such as during an emergency response or when aiding teachers in the classroom.</p><p>The research was <a href="https://www.youtube.com/watch?v=9L4CZ5n7rQY">presented in March</a>&nbsp;at the Association for Computing Machinery&rsquo;s Intelligent User Interfaces 2019 Conference. The paper is titled <em>Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions</em>. Ehsan will present a position paper highlighting the design and evaluation challenges of human-centered Explainable AI systems at the upcoming <em>Emerging Perspectives in Human-Centered Machine Learning</em> workshop at the ACM CHI 2019 conference, May 4-9, in Glasgow, Scotland.</p><div>&nbsp;</div>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1554838973</created>  <gmt_created>2019-04-09 19:42:53</gmt_created>  <changed>1554840417</changed>  <gmt_changed>2019-04-09 20:06:57</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Institute of Technology researchers have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Institute of Technology researchers have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions.]]></sentence>  <summary><![CDATA[<p>Georgia Institute of Technology researchers, in collaboration with Cornell and University of Kentucky, have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. 
The work is designed to give humans engaging with AI agents or robots confidence that the agent is performing the task correctly and can explain a mistake or errant behavior.</p>]]></summary>  <dateline>2019-04-09T00:00:00-04:00</dateline>  <iso_dateline>2019-04-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-04-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a></p><p>GVU Center, College of Computing</p><p>678.231.0787</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>620255</item>      </media>  <hg_media>          <item>          <nid>620255</nid>          <type>image</type>          <title><![CDATA[Explainable AI for Frogger]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Explainable AI.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Explainable%20AI.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Explainable%20AI.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Explainable%2520AI.png?itok=JvtBz-V4]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[AI study with Frogger]]></image_alt>                    <created>1554840392</created>          <gmt_created>2019-04-09 20:06:32</gmt_created>          <changed>1554840392</changed>          <gmt_changed>2019-04-09 20:06:32</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group 
id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="620129">  <title><![CDATA[HackGT Hopes to be a ‘Catalyst’ for Underserved Atlanta Students]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Known for its <a href="https://www.cc.gatech.edu/news/613449/students-across-country-participate-hackgt-5">wildly successful hackathons</a> for college students, <a href="https://hack.gt/">HackGT</a> is bringing some of that magic to high school students from across Atlanta with the third annual Catalyst event. Set for April 13 on the Georgia Tech campus, <a href="https://catalyst.hack.gt/#home">Catalyst</a> is a one-day workshop, blended with traditional hackathon challenges.</p><p>The free event will bring more than 400 high school students from 60 schools across the metro Atlanta area. Catalyst aims to expose underserved students to various branches of science, technology, engineering, art and math (STEAM) education and ignite a spark to pursue such interests in the future.</p><p>&ldquo;HackGT 5, BuildGT, Horizons, and many other hackathon-related events are built for college students. Given the educational disparities that exist within certain parts of Atlanta, HackGT understands the importance of reaching out to communities beyond Georgia Tech and other collegiate environments,&rdquo; said <strong>Jordan Madison</strong>, computer science (CS) major and HackGT&rsquo;s director of communications.</p><p>In 2017, less than one percent of students in Georgia public schools took the Advanced Placement Computer Science exam. Only two schools in the Atlanta Public School system offered the course. 
This lack of access motivated the organizers to keep the event completely free, from registration to swag, allowing students from any background the opportunity to participate in Catalyst.</p><p>Sponsored by Amazon and Facebook, with in-kind donations from Disney and Pixar Animation Studios, Catalyst offers four tracks for participants to choose from: software, hardware, gaming, and design.</p><p>Catalyst welcomes participants with no prior STEAM experience, and each track offers a workshop to help participants develop the basic foundations and skills needed to complete the track&rsquo;s tasks. Participants will create technology pieces in the workshop that they can take home.</p><p>Within each track, students are divided into small groups and mentored by college students. The mentors provide hands-on support to help students better grasp concepts. The students will also hear from industry professionals about pursuing an education or career in STEAM during panel discussions scheduled for the event.</p><p>&ldquo;Underserved students&rsquo; success in computer science or other STEAM-related fields is mainly linked to their lack of access to resources and opportunities. They have plenty of talent, but no idea about the options that are waiting for them. 
Events like Catalyst are crucial for exposing more kids to STEAM who might not otherwise have the opportunity to do so,&rdquo; said <strong>PK Graff</strong>, a fellow at the <a href="http://constellations.gatech.edu/">Constellations Center for Equity in Computing at Georgia Tech</a> who teaches computer science at schools in Atlanta Public Schools and serves as an advisory member for Catalyst.</p><p>Registration closes April 5 at <a href="https://catalyst.hack.gt/#registration">https://catalyst.hack.gt/#registration</a></p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1554490830</created>  <gmt_created>2019-04-05 19:00:30</gmt_created>  <changed>1554490830</changed>  <gmt_changed>2019-04-05 19:00:30</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Set for April 13 on the Georgia Tech campus, Catalyst is a one-day workshop, blended with traditional hackathon challenges. ]]></teaser>  <type>news</type>  <sentence><![CDATA[Set for April 13 on the Georgia Tech campus, Catalyst is a one-day workshop, blended with traditional hackathon challenges. ]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-04-05T00:00:00-04:00</dateline>  <iso_dateline>2019-04-05T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-04-05 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>620128</item>      </media>  <hg_media>          <item>          <nid>620128</nid>          <type>image</type>          <title><![CDATA[Set for April 13 on the Georgia Tech campus, Catalyst is a one-day workshop, blended with traditional hackathon challenges. 
]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2019-04-05 at 2.57.57 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202019-04-05%20at%202.57.57%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202019-04-05%20at%202.57.57%20PM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202019-04-05%2520at%25202.57.57%2520PM.png?itok=mHr_JNl2]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1554490714</created>          <gmt_created>2019-04-05 18:58:34</gmt_created>          <changed>1554490714</changed>          <gmt_changed>2019-04-05 18:58:34</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="606703"><![CDATA[Constellations Center]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="620110">  <title><![CDATA[Six Members of GT Computing Awarded Prestigious Fellowships]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Each year, Georgia Tech&rsquo;s College of Computing is home to a number of students and faculty who are recognized by the computing community with fellowships from industry across the field.</p><p>This year is no different as six GT Computing individuals have been awarded 
fellowships with four different companies, including J.P. Morgan, IBM, Snap, and Facebook. Only those who accepted their awards are listed below.</p><p><strong>J.P. Morgan Chase &amp; Co.</strong></p><p><a href="https://www.jpmorgan.com/global/technology/ai/awards">J.P. Morgan Chase &amp; Co.</a> awarded <strong>Charles David Byrd</strong> (Research Scientist and Ph.D. student advised by Professor <strong>Tucker Balch</strong>) and Assistant Professor <strong>Xu Chu</strong> for their efforts in artificial intelligence research. These are the company&rsquo;s first AI Research Awards, which are aimed at studying the use of AI and machine learning in areas including investment advice, risk management, digital assistants, and trading behavior. The company awarded only 47 fellowships.</p><p>Byrd&rsquo;s work with Balch focuses on machine learning for financial applications, investigating mutual fund portfolio inference, intraday equity market forecasting, stock market simulation, and machine learning approaches to the evaluation of market efficiency. Byrd previously received the 2018 Graduate Student Instructor of the Year Award in the School of Interactive Computing.</p><p>Chu&rsquo;s research interests revolve around two themes: using data management technologies to make machine learning more usable and using machine learning to tackle hard data management problems like data integration. Chu also earned the Microsoft Research Ph.D. Fellowship in 2015.</p><p><strong>IBM</strong></p><p>Ph.D. student <strong>Stacey Truex</strong> of the School of Computer Science was named a <a href="https://www.research.ibm.com/university/awards/2019_phd_fellowship_awards.shtml">2019 IBM Ph.D. Fellow</a>. The fellowship, which has been awarded since the 1950s, recognizes and supports outstanding graduate students focused on solving problems that are fundamental to innovation. 
This includes pioneering work in areas like cognitive computing and augmented intelligence, quantum computing, blockchain, data-centric systems, advanced analytics, security, radical cloud innovation, and more. This highly competitive award was given to only 16 Ph.D. students worldwide.</p><p>Truex (advised by Professor <strong>Ling Liu</strong>) focuses on research from two complementary perspectives: 1) privacy, security, and trust in machine learning models and algorithmic decision making, and 2) secure, privacy-preserving artificial intelligence systems, services, and applications.</p><p><strong>Snap, Inc.</strong></p><p><a href="https://snapresearchfs.splashthat.com/">Snap, Inc., recognized</a> Ph.D. student <strong>Harsh Agrawal</strong> of the School of Interactive Computing with the 2019 Snap Research Fellowship and Scholarship. This fellowship recognizes students carrying out research in areas of computer science relevant to the company, including computer graphics, computer vision, machine learning, data mining, computational imaging, human-computer interaction, and other related fields. Each awardee receives a $10,000 award and an offer of a full-time paid internship with the company.</p><p>Agrawal (advised by Assistant Professor <strong>Dhruv Batra</strong>) does research at the intersection of computer vision and natural language processing. Prior to joining Georgia Tech, he spent time as a research engineer at Snap Research, where he built large-scale infrastructure for visual recognition and search and developed algorithms for low-shot instance detection.</p><p><strong>Facebook</strong></p><p><a href="https://research.fb.com/announcing-the-2019-facebook-fellows-and-emerging-scholars/">Facebook Research announced the selection of 21 Fellows and seven Emerging Scholars</a> this year out of more than 900 submitted applications from Ph.D. students all over the world. 
Among the awardees were <strong>Abhishek Das</strong> with the Facebook Fellowship and <strong>Vanessa Oguamanam </strong>with the Emerging Scholar Award. The Facebook Fellowship program, now in its eighth year, is designed to encourage and support doctoral students engaged in innovative research in computer science and engineering.</p><p>Das (advised by Dhruv Batra) does research in deep learning and its applications in building agents that can see, think, talk, and act. His research has been supported by fellowships from Facebook, Adobe, and Snap, Inc., over the years. Oguamanam, who is in the School of Interactive Computing, pursues research in educational technology, human-computer interaction for development, diversity in STEM, and entrepreneurship. She is co-advised by Associate Professor <strong>Betsy DiSalvo</strong> and Assistant Professor <strong>Neha Kumar</strong>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1554416628</created>  <gmt_created>2019-04-04 22:23:48</gmt_created>  <changed>1554416628</changed>  <gmt_changed>2019-04-04 22:23:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[J.P. Morgan, IBM, Snap, and Facebook awarded six College of Computing faculty and students.]]></teaser>  <type>news</type>  <sentence><![CDATA[J.P. 
Morgan, IBM, Snap, and Facebook awarded six College of Computing faculty and students.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-04-04T00:00:00-04:00</dateline>  <iso_dateline>2019-04-04T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-04-04 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>620109</item>      </media>  <hg_media>          <item>          <nid>620109</nid>          <type>image</type>          <title><![CDATA[2019 College of Computing Fellowships]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[CoC Fellowships.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/CoC%20Fellowships.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/CoC%20Fellowships.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/CoC%2520Fellowships.png?itok=gPPPfgIJ]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Harsh Agrawal, Xu Chu, Abhishek Das, Vanessa Oguamanam, Charles David Byrd, and Stacey Truex]]></image_alt>                    <created>1554416151</created>          <gmt_created>2019-04-04 22:15:51</gmt_created>          <changed>1554416151</changed>          <gmt_changed>2019-04-04 22:15:51</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          
<group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="145171"><![CDATA[Cybersecurity]]></term>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="619749">  <title><![CDATA[Coda, Georgia Tech’s newest and largest home in Tech Square, was envisioned in a digital world years before it became a part of Midtown’s skyline]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Tech&rsquo;s vision for Tech Square&rsquo;s newest structure, the <strong><a href="https://codatechsquare.com/">Coda</a></strong> building, was only an idea in 2015 when initial development talks began. The first tenants started moving in this month after more than two years of construction and much anticipation.</p><p>But researchers in the Georgia Tech IMAGINE Lab didn&rsquo;t have to wait for brick and steel to start being laid or watch a &ldquo;construction cam&rdquo; on a website to envision the possibilities for the new building. 
They were able to use their expertise in digital imaging, 3D modeling, and augmented reality technologies to create Tech Square in a digital model that included Coda in its earliest concept.</p><p>In 2015, the IMAGINE Lab, part of the Center for Spatial Planning Analytics and Visualization at Georgia Tech, was tasked by stakeholders at the institute to create a pilot project for a quick visual tool for planning the future Coda building.</p><p>&ldquo;The main goal of the digital application was to quickly visualize a few possible options with building concepts that included 20, 30 and 40 stories, and allow people to interact with the models and see how the cityscape in midtown would be altered,&rdquo; said <strong>Miro Malesevic</strong>, digital designer at the IMAGINE Lab.</p><blockquote><p><strong>In essence, the researchers gave decision makers a virtual time machine to the future that brought the building to life and showed how it might be situated in Tech Square and impact the area.</strong></p></blockquote><p>The visualization tool came in the form of an augmented reality app on mobile devices that allowed users to point the screens at a 2D physical map of Tech Square and watch a 3D model of the space come to life on the screen. Users could tap the screen to start with a 20-story building and tap twice more to end up with a structure twice the height (Coda eventually ended up with 21 levels).</p><p>Users could also understand how the length of shadows cast by the building or the structure itself might occlude views at the street level or other buildings. 
The digital AR application even provided a glimpse of the possibility for traffic simulations.</p><p>&ldquo;Use of the 3D AR application has an advantage over traditional 2D blueprints as it provides an individual user with 3D perspective of the design, interaction with the environment, and the ability to use simulations to help in decision-making,&rdquo; said Malesevic, who worked on the project.</p><p>The powerful tool was built within a week, thanks to the IMAGINE Lab&rsquo;s 3D modeling library, compiled over a 20-year period.</p><p>Over the years the IMAGINE Lab has produced numerous architectural visualizations for Georgia Tech, non-profit, and local private organizations supporting economic development efforts at the city and state level.</p><p>The third phase of Tech Square was announced in September. It includes preliminary plans for a two-tower complex at the northwest corner of West Peachtree and Fifth streets and possibly a retail plaza as well as an underground parking deck.</p><p>The design team in the IMAGINE Lab is already building this next version of Tech Square inside their digital world. 
The rest of us will have to wait and see how it turns out sometime in 2022 or later.</p><p>&nbsp;</p><p><strong>Story</strong>: Joshua Preston</p><p><strong>Video</strong>: Noah Posner</p><p><strong>Video Editing</strong>:&nbsp;Terence Rushin</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1553709240</created>  <gmt_created>2019-03-27 17:54:00</gmt_created>  <changed>1553777995</changed>  <gmt_changed>2019-03-28 12:59:55</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[In 2015, the IMAGINE Lab, part of the Center for Spatial Planning Analytics and Visualization at Georgia Tech, was tasked by stakeholders at the institute to create a pilot project for a quick visual tool for planning the future Coda building.]]></teaser>  <type>news</type>  <sentence><![CDATA[In 2015, the IMAGINE Lab, part of the Center for Spatial Planning Analytics and Visualization at Georgia Tech, was tasked by stakeholders at the institute to create a pilot project for a quick visual tool for planning the future Coda building.]]></sentence>  <summary><![CDATA[<p>In 2015, the IMAGINE Lab, part of the Center for Spatial Planning Analytics and Visualization at Georgia Tech, was tasked by stakeholders at the institute to create a pilot project for a quick visual tool for planning the future Coda building.</p>]]></summary>  <dateline>2019-03-27T00:00:00-04:00</dateline>  <iso_dateline>2019-03-27T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-03-27 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>GVU Center at Georgia Tech</p><p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a></p><p>678.231.0787</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>619759</item>      </media>  <hg_media>          <item>          
<nid>619759</nid>          <type>image</type>          <title><![CDATA[Early Coda Concept in Augmented Reality]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Coda Concept 2015.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Coda%20Concept%202015.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Coda%20Concept%202015.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Coda%2520Concept%25202015.png?itok=3b81MPsW]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1553710231</created>          <gmt_created>2019-03-27 18:10:31</gmt_created>          <changed>1553710231</changed>          <gmt_changed>2019-03-27 18:10:31</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://youtu.be/ThoGpLmBJ2o]]></url>        <title><![CDATA[VIDEO: Early Coda Concept in Augmented Reality]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>      </groups>  <categories>          <category tid="131"><![CDATA[Economic Development and Policy]]></category>          <category tid="179355"><![CDATA[Building Construction]]></category>          <category tid="142"><![CDATA[City Planning, Transportation, and Urban Growth]]></category>          <category tid="143"><![CDATA[Digital Media and Entertainment]]></category>      </categories>  <news_terms>          <term tid="131"><![CDATA[Economic Development and Policy]]></term>          <term tid="179355"><![CDATA[Building Construction]]></term>          <term 
tid="142"><![CDATA[City Planning, Transportation, and Urban Growth]]></term>          <term tid="143"><![CDATA[Digital Media and Entertainment]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39531"><![CDATA[Energy and Sustainable Infrastructure]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="619508">  <title><![CDATA[3 IC Faculty Members Awarded Promotions]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Three tenure awards and promotions for faculty in Georgia Tech&rsquo;s School of Interactive Computing (IC) were announced this week. These appointments will become effective Aug. 15, 2019 after Board of Regents approval.</p><p><strong>Devi Parikh</strong> received tenure and was elevated to the position of associate professor. Parikh joined IC in 2016 and also currently works as a research scientist at Facebook AI Research (FAIR). She earned her Ph.D. from Carnegie Mellon University in 2009. Her research focus is in artificial intelligence (AI) at the intersection of machine learning and computer vision, and she has been recently recognized as one of the top women in AI in publications like <em><a href="https://www.vogue.com/projects/13548844/women-in-ai/">Vogue</a></em> and <em><a href="https://www.forbes.com/sites/mariyayao/2017/05/18/meet-20-incredible-women-advancing-a-i-research/2/#3b0a91c84ede">Forbes</a></em>.</p><p><strong>Dhruv Batra</strong>, who joined IC in 2016, received tenure and an elevation to associate professor. 
His machine learning and computer vision research has been featured in the <a href="https://www.bostonglobe.com/ideas/2015/04/01/how-automatically-detect-most-important-people-photograph/tZND3z3epWTJu4Gvf9FSRN/story.html"><em>Boston Globe</em></a>, <a href="https://www.newsweek.com/artificial-intelligence-algorithm-taught-recognise-humor-413832"><em>Newsweek</em></a>, and other media outlets. Batra earned his Ph.D. from Carnegie Mellon in 2010 and is also a FAIR research scientist.</p><p><strong>Karen Liu</strong>, who already has tenure, was promoted to the position of full professor. Liu received her Ph.D. from the University of Washington in 2005. Her research focus is in computer graphics and robotics, including physics-based animation, character animation, optimal control, reinforcement learning, and computational biomechanics. Along with her students, she founded the physics simulator <a href="http://dartsim.github.io/">DART</a>, which won the Grand Prize of Open Source Software World Challenge in 2016. Additional research has been featured in places like <em><a href="https://www.nbcnews.com/mach/science/these-new-gadgets-could-be-game-changers-senior-living-ncna791841">NBC News</a></em>, among others.</p><p>All three faculty are also members of Georgia Tech&#39;s <a href="http://ml.gatech.edu/">Center for Machine Learning</a>, <a href="http://gvu.gatech.edu">GVU Center</a>, and <a href="http://robotics.gatech.edu">Institute for Robotics and Intelligent Machines</a>.</p><p>&ldquo;Each of these faculty members has done tremendous work in Georgia Tech&rsquo;s School of Interactive Computing,&rdquo; IC Chair <strong>Ayanna Howard</strong> said. &ldquo;They are recognized not only for their incredible research but also for their teaching and leadership service in the community. 
This is a well-deserved recognition of their hard work and accomplishments.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1553267946</created>  <gmt_created>2019-03-22 15:19:06</gmt_created>  <changed>1553613200</changed>  <gmt_changed>2019-03-26 15:13:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Devi Parikh and Dhruv Batra were awarded tenure and elevated to associate professor, while Karen Liu was elevated to full professor.]]></teaser>  <type>news</type>  <sentence><![CDATA[Devi Parikh and Dhruv Batra were awarded tenure and elevated to associate professor, while Karen Liu was elevated to full professor.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-03-22T00:00:00-04:00</dateline>  <iso_dateline>2019-03-22T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-03-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>619507</item>      </media>  <hg_media>          <item>          <nid>619507</nid>          <type>image</type>          <title><![CDATA[IC Promotions 2019]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[devidhruvkaren.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/devidhruvkaren.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/devidhruvkaren.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/devidhruvkaren.png?itok=jrffm_wD]]></image_740>            
<image_mime>image/png</image_mime>            <image_alt><![CDATA[Devi Parikh, Karen Liu, Dhruv Batra]]></image_alt>                    <created>1553267589</created>          <gmt_created>2019-03-22 15:13:09</gmt_created>          <changed>1553267589</changed>          <gmt_changed>2019-03-22 15:13:09</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="619523">  <title><![CDATA[Meet IC: Atlanta Native Matthew Guzdial Merges Passions for Machine Learning and Creativity]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The School of Interactive Computing (IC) is the unique home to one of the widest varieties of computing researchers in the country. Part iSchool and part computer science, IC merges disciplines to address problems at the center of humans and computing.</p><p>In this collaborative environment, IC student researchers are impacting domains including artificial intelligence, robotics, health care, social computing, data visualization and analytics, and more. As a result, their backgrounds are as varied as their research areas.</p><p>In this series of Q&amp;As, we&rsquo;d like you to meet some of our talented graduate students. 
Today, meet <strong>Matthew Guzdial</strong>, a machine learning and creativity researcher under advisor <strong>Mark Riedl</strong> who has already gained plenty of attention for his work on artificial intelligence in video games.</p><p><strong>Advisor:</strong> IC Associate Professor Mark Riedl</p><p><strong>Research Focus Areas:</strong> Creativity and machine learning</p><p><strong>Hometown:</strong> Atlanta, Ga.</p><p><strong>High school:</strong> Lakeside High School</p><p><strong>Undergraduate Degree:</strong> B.S. in Computational Media at Georgia Tech</p><p><strong>Current Degree Program:</strong> Ph.D. in Computer Science</p><p><strong>Tell us a little bit about your research:</strong></p><p>In creativity and machine learning, I&rsquo;m interested both in how we get machine learning approaches to work in creative domains, as well as how we take cognitive models of creativity to improve machine learning approaches.</p><p><strong>Given the focus on creativity, is there something in your research you have created or would like to create?</strong></p><p>In terms of a &ldquo;thing&rdquo; I am building, I&rsquo;ve been collaborating with a team of undergrads and a master&rsquo;s student on an intelligent level editor for the past few years (<a href="https://youtu.be/UkMeM5Ty1lA">video</a> and <a href="https://arxiv.org/abs/1901.06417">paper</a>). What I&rsquo;d really like, though, is to combine this with our work on automated game generation to create a new, intelligent automated game generation tool that would allow anyone to make a 2-D game in, hopefully, an intuitive way.</p><p><strong>Do you have a favorite place to hang out on campus or in the city?</strong></p><p>On campus, I mostly hang out at my desk or on an armchair in my lab. In the city, I&rsquo;m a fan of Bookhouse Pub and the restaurant One Eared Stag.</p><p><strong>It sounds like you spend a lot of time with research. 
Do you have any other hobbies you like to do in your spare time?</strong></p><p>I&rsquo;m an avid runner, so I squeeze in a minimum of one 3-plus-mile run each week and aim for 2-3. I&rsquo;m also hugely into podcasts, with my favorites currently being My Brother, My Brother and Me, Punch Up the Jam, and Friends at the Table.</p><p><strong>What is your favorite memory from your years at Georgia Tech?</strong></p><p>I was part of this kind of wild production of After the Quake at DramaTech, where we built a visualization for one of the actors that he could &ldquo;conduct&rdquo; with his hands through a Kinect, Processing, and a Projector. That was hugely fun to go see.</p><p><strong>What is your proudest accomplishment?</strong></p><p>Ask me in, like, six months, and I&rsquo;ll say this Ph.D.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1553284195</created>  <gmt_created>2019-03-22 19:49:55</gmt_created>  <changed>1553284195</changed>  <gmt_changed>2019-03-22 19:49:55</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Meet Matthew Guzdial, a machine learning and creativity researcher under advisor Mark Riedl who has already gained plenty of attention for his work on artificial intelligence in video games.]]></teaser>  <type>news</type>  <sentence><![CDATA[Meet Matthew Guzdial, a machine learning and creativity researcher under advisor Mark Riedl who has already gained plenty of attention for his work on artificial intelligence in video games.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-03-22T00:00:00-04:00</dateline>  <iso_dateline>2019-03-22T00:00:00-04:00</iso_dateline>  <gmt_dateline>2019-03-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a 
href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>619522</item>      </media>  <hg_media>          <item>          <nid>619522</nid>          <type>image</type>          <title><![CDATA[Matthew Guzdial]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Guzdial_Matthew_thumb.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Guzdial_Matthew_thumb.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Guzdial_Matthew_thumb.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Guzdial_Matthew_thumb.jpg?itok=aSCLu3R7]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Matthew Guzdial]]></image_alt>                    <created>1553283810</created>          <gmt_created>2019-03-22 19:43:30</gmt_created>          <changed>1553283810</changed>          <gmt_changed>2019-03-22 19:43:30</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://guzdial.com/]]></url>        <title><![CDATA[Learn More About Matthew Guzdial's Research]]></title>      </link>          <link>        <url><![CDATA[https://www.ic.gatech.edu/news/612183/georgia-tech-researchers-develop-ai-can-create-entirely-new-games]]></url>        <title><![CDATA[Georgia Tech Researchers Develop AI That Can Create Entirely New Games]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  
<categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="618999">  <title><![CDATA[Michael Best to Speak at U.N. for Release of Report on Digital Gender Equality]]></title>  <uid>33939</uid>  <body><![CDATA[<p>As many around the world celebrate International Women&rsquo;s Day on Friday, March 8, a number of <a href="http://www.gatech.edu">Georgia Tech</a> faculty and students are making their own contributions to promoting women&rsquo;s rights and gender equality. Among them is Georgia Tech Associate Professor <strong>Michael Best</strong>, who will speak at the United Nations next week during the formal release of a <a href="https://docs.wixstatic.com/ugd/04bfff_e53606000c594423af291b33e47b7277.pdf">research report</a> by the <a href="https://www.equals.org/">EQUALS Global Partnership</a>, a coalition of more than 90 partners from government, industry, and academia that he helped found in 2015.</p><p>Best, who holds appointments in the <a href="https://inta.gatech.edu/">Sam Nunn School of International Affairs</a> and the <a href="http://www.ic.gatech.edu">School of Interactive Computing</a>, will offer closing remarks on the report&rsquo;s launch during the 63<sup>rd</sup> session of the <a href="http://www.unwomen.org/en/csw">Commission on the Status of Women</a>, the principal global intergovernmental body dedicated to the promotion of gender equality and empowerment of women.</p><p>The report, titled <em>Taking Stock: Data and Evidence on Gender Equality in Digital Access, Skills and Leadership</em>, highlights the impacts of technology on women in various contexts like jobs and wages, security and privacy, cyber threats, and new technologies such as artificial 
intelligence.</p><p>Among the report&rsquo;s results are a few key findings:</p><ul><li>While gender gaps are observable in most aspects of information and communications technology (ICT) access, skills, and leadership, the picture is still complex with large regional variations.</li><li>There is no one final strategy for eliminating gender digital inequalities.</li><li>The dominant approaches to gender equality in ICT access, skills and leadership mostly frame issues in binary (male/female) terms, thereby masking the relevance of other pertinent identities.</li><li>To ensure privacy and safety as well as full participation in the digital economy, women and girls should have equal opportunities to develop adequate basic and advanced digital skills.</li><li>Developments in digital technologies open new pathways to gender diversity and inclusion; however, lack of attention to gender dynamics hampers the potential for true progress.</li></ul><p>&ldquo;This report offers a comprehensive look at the issues affecting women and girls equality in a digital age,&rdquo; Best said. &ldquo;While surfacing many current challenges, it also sketches pathways forward towards achieving digital gender equality.&rdquo;</p><p>Best joined Georgia Tech&rsquo;s faculty in 2004 and has directed the <a href="https://cs.unu.edu/">United Nations University Institute on Computing and Society </a>(UNU-CS) in China since 2015. He co-founded the EQUALS Global Partnership and the <a href="https://www.equals.org/research">EQUALS Research Group</a>, the latter of which is responsible for the report.</p><p>EQUALS works to reverse the increasing digital gender divide, aiming to close the gap by 2030 and supporting the U.N. Sustainable Development Goals by empowering women through their use of information and communication technologies.</p><p>&ldquo;From its inception, the EQUALS Global Partnership has focused on evidence and data to illuminate the intersections of gender with ICTs,&rdquo; Best said. 
&ldquo;We have positioned the EQUALS Research Group at the forefront of this investigation, shining a light onto the realities and possibilities for women and girls in a digital age.&rdquo;</p><p>The report will be released on March 15 at the United Nations Headquarters in New York.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1552070180</created>  <gmt_created>2019-03-08 18:36:20</gmt_created>  <changed>1552070180</changed>  <gmt_changed>2019-03-08 18:36:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Michael Best will speak at the United Nations next week during the formal release of a research report by the EQUALS Global Partnership, a coalition of more than 90 partners from government, industry, and academia that he helped found in 2015.]]></teaser>  <type>news</type>  <sentence><![CDATA[Michael Best will speak at the United Nations next week during the formal release of a research report by the EQUALS Global Partnership, a coalition of more than 90 partners from government, industry, and academia that he helped found in 2015.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-03-08T00:00:00-05:00</dateline>  <iso_dateline>2019-03-08T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-03-08 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>618997</item>      </media>  <hg_media>          <item>          <nid>618997</nid>          <type>image</type>          <title><![CDATA[EQUALS graphic]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[EQUALS.png]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/EQUALS.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/EQUALS.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/EQUALS.png?itok=n9VsHGfb]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[EQUALS - International Women's Day 2019]]></image_alt>                    <created>1552069697</created>          <gmt_created>2019-03-08 18:28:17</gmt_created>          <changed>1552069697</changed>          <gmt_changed>2019-03-08 18:28:17</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.iac.gatech.edu/news-events/features/michael-best-united-nations]]></url>        <title><![CDATA['They Need Us. And We Need Them.']]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="132"><![CDATA[Institute Leadership]]></category>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="151"><![CDATA[Policy, Social Sciences, and Liberal Arts]]></category>      </categories>  <news_terms>          <term tid="132"><![CDATA[Institute Leadership]]></term>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="151"><![CDATA[Policy, Social Sciences, and Liberal Arts]]></term>      </news_terms>  <keywords>          <keyword tid="907"><![CDATA[Michael Best]]></keyword> 
         <keyword tid="167256"><![CDATA[Sam Nunn School of International Affairs]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="180738"><![CDATA[digital gender equality]]></keyword>          <keyword tid="86981"><![CDATA[gender equality]]></keyword>          <keyword tid="23281"><![CDATA[international women&#039;s day]]></keyword>          <keyword tid="2628"><![CDATA[united nations]]></keyword>          <keyword tid="10230"><![CDATA[equality]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39511"><![CDATA[Public Service, Leadership, and Policy]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="618690">  <title><![CDATA[Ph.D. Candidate Caitlyn Seim Earns Prestigious Neuroscience:Translate Grant From Stanford]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Ph.D. candidate <strong>Caitlyn Seim</strong> was awarded one of Stanford University&rsquo;s exclusive <a href="https://neuroscience.stanford.edu/research/programs/neurosciencetranslate">Neuroscience:Translate grants</a>. The grant is part of a new program to support translational neuroscience research to find practical solutions for unmet patient needs.</p><p>Seim earned the grant based on her research into passive haptic stimulation, which resulted in a glove that could be used to assist in stroke rehabilitation. 
Stroke is one of the leading causes of disability around the globe, impacting millions of survivors each year. Survivors often lose function in their arms or hands, making it difficult to perform everyday tasks like dressing or eating. Spasticity and tone can also cause hands to be involuntarily clenched in a rigid position &ndash; a problem for which there are few effective treatments.</p><p>Survivors lack options when it comes to rehabilitation, and existing methods can be strenuous, costly, or painful. Building on previous work, Seim is using the funding to investigate a novel stimulation method using a wireless, wearable device that may provide therapy on the go and to patients who do not have access to high-intensity rehabilitation.</p><p>Seim started working on this research during her time at Georgia Tech with Professor <strong>Thad Starner</strong>. She&rsquo;s now collaborating with <strong>Maarten Lansberg</strong> of the Stanford University Medical Center and <strong>Allison Okamura</strong>, professor of Mechanical Engineering at Stanford. 
Seim will join Stanford as a postdoctoral researcher this summer, where she will continue to work on this project.</p><p>Seim said that she and Starner are launching a company to put the device on the market &ndash; with the goal of translating research outcomes to clinical solutions.&nbsp; &ldquo;If we can help, then it&#39;s all worth it,&rdquo; Seim said.</p><p>To connect with Seim about this project, you can email her at <a href="mailto:ceseim@gatech.edu">ceseim@gatech.edu</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1551488288</created>  <gmt_created>2019-03-02 00:58:08</gmt_created>  <changed>1551488288</changed>  <gmt_changed>2019-03-02 00:58:08</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Seim was awarded for her research into passive haptic stimulation that could assist in stroke recovery.]]></teaser>  <type>news</type>  <sentence><![CDATA[Seim was awarded for her research into passive haptic stimulation that could assist in stroke recovery.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-03-01T00:00:00-05:00</dateline>  <iso_dateline>2019-03-01T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-03-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>611755</item>      </media>  <hg_media>          <item>          <nid>611755</nid>          <type>image</type>          <title><![CDATA[Caitlyn Seim - PHL]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Seim Banner.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Seim%20Banner.jpg]]></image_path>  
          <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Seim%20Banner.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Seim%2520Banner.jpg?itok=EAolAA9Q]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Caitlyn Seim showing haptic glove]]></image_alt>                    <created>1537470856</created>          <gmt_created>2018-09-20 19:14:16</gmt_created>          <changed>1537470856</changed>          <gmt_changed>2018-09-20 19:14:16</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/news/611757/good-vibrations-passive-haptic-learning-could-be-key-rehabilitation]]></url>        <title><![CDATA[Good Vibrations: Passive Haptic Learning Could Be a Key to Rehabilitation]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="104221"><![CDATA[passive haptic learning]]></keyword>          <keyword tid="180696"><![CDATA[PHL]]></keyword>          <keyword tid="167732"><![CDATA[Stroke]]></keyword>          <keyword tid="179165"><![CDATA[stroke recovery]]></keyword>          <keyword tid="170072"><![CDATA[Caitlyn Seim]]></keyword>          <keyword tid="1944"><![CDATA[Thad Starner]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="77691"><![CDATA[wearable technology]]></keyword>          <keyword tid="167386"><![CDATA[Stanford]]></keyword>          <keyword 
tid="1304"><![CDATA[neuroscience]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="618567">  <title><![CDATA[Researchers Use Social Media to Model Stress Following Incidents of Gun Violence on Campus]]></title>  <uid>33939</uid>  <body><![CDATA[<p>An algorithm developed by researchers at <a href="http://gatech.edu">Georgia Tech</a> can quantify periods of high stress on college campuses and could better inform appropriate responses by counselors, deans of students, faculty, and student populations themselves.</p><p>Using Reddit posts following incidents of gun violence on 12 American campuses as a test bed for their algorithm, researchers were able to identify sharp upticks in stress levels in the weeks immediately following these events and also the common words or phrases that increased or decreased during that period.</p><p>&ldquo;You can always make the indirect inference that you&rsquo;re seeing higher stress levels due to a specific event, like on-campus violence,&rdquo; said <a href="http://www.munmund.net/"><strong>Munmun De Choudhury</strong></a>, an assistant professor in Georgia Tech&rsquo;s <a href="http://www.ic.gatech.edu">School of Interactive Computing</a>. &ldquo;But what does that mean? Currently, we work on our hunches about the level of impact. This work can provide insight into these types of events by quantifying stress levels. What is the impact, and how?&rdquo;</p><p>To study the impact and the algorithm&rsquo;s ability to detect it, De Choudhury and Georgia Tech Ph.D. student <a href="http://koustuv.com/"><strong>Koustuv Saha</strong></a> brainstormed the types of events that could impact students&rsquo; lives the most. 
They determined that something unique and local to their specific campus, like incidents of violence, would offer an abundance of interaction between students on social media.</p><p>To measure stress levels in the periods immediately following these incidents, they built a classifier trained on separate control data &ndash; unrelated posts of high stress (class, crises, etc.) and low stress (general news, frivolous posts about cats, etc.).</p><p>Applying the algorithm to the 12 campus groups, they found that there was, not surprisingly, higher stress surrounding those events. More importantly, though, they were able to identify aspects of that stress that weren&rsquo;t readily apparent from the simple knowledge that on-campus violence induces a negative response from students. For example, while &ldquo;class&rdquo; was a word that commonly came up in high-stress posts before an incident, in the short period that followed, discussion of academics dropped significantly. On the other hand, words that were rarely seen throughout the year &ndash; social words, like &ldquo;friends&rdquo; and &ldquo;people&rdquo; &ndash; suddenly appeared.</p><p>&ldquo;They were words that indicated concern or solidarity, bonding words,&rdquo; De Choudhury said. &ldquo;We can see that there is a different sense of community. All of this is actionable, because if class is not a concern at that time, perhaps we need to adapt things at the campus level that can better meet the students&rsquo; needs, like peer support groups or things like that.&rdquo;</p><p>While the approach was tested only on college campuses encountering gun violence, Saha said that he could imagine a similar approach transferring to other settings. The challenge would be adjusting it to account for the size and makeup of the community.</p><p>&ldquo;On campus, they are younger students who already interact on Reddit with each other,&rdquo; he said. 
&ldquo;If you&rsquo;re talking larger-scale incidents, perhaps nationally, you have a much more diverse community which doesn&rsquo;t all communicate via the same medium or in the same way.&rdquo;</p><p>This research was published in the paper <a href="http://koustuv.com/papers/PACM_HCI_CSCW2018_Stress.pdf"><em>Modeling Stress with Social Media Around Incidents of Gun Violence on College Campuses</em></a>. It was presented at the 21<sup>st</sup> ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW).</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1551301602</created>  <gmt_created>2019-02-27 21:06:42</gmt_created>  <changed>1551301602</changed>  <gmt_changed>2019-02-27 21:06:42</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Using Reddit posts following incidents of gun violence on 12 American campuses as a test bed for their algorithm, researchers were able to identify sharp upticks in stress levels in the weeks immediately following these events.]]></teaser>  <type>news</type>  <sentence><![CDATA[Using Reddit posts following incidents of gun violence on 12 American campuses as a test bed for their algorithm, researchers were able to identify sharp upticks in stress levels in the weeks immediately following these events.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-02-27T00:00:00-05:00</dateline>  <iso_dateline>2019-02-27T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-02-27 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>618566</item>      </media>  <hg_media>          <item>          
<nid>618566</nid>          <type>image</type>          <title><![CDATA[Students on Campus]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[students on campus.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/students%20on%20campus.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/students%20on%20campus.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/students%2520on%2520campus.jpg?itok=BRR3DJ4Y]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Students sit together in the grass on campus]]></image_alt>                    <created>1551301507</created>          <gmt_created>2019-02-27 21:05:07</gmt_created>          <changed>1551301507</changed>          <gmt_changed>2019-02-27 21:05:07</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://medium.com/acm-cscw/modeling-stress-with-social-media-around-incidents-of-gun-violence-on-college-campuses-291e62c79203]]></url>        <title><![CDATA[Blog: Modeling Stress with Social Media Around Incidents of Gun Violence on College Campuses]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="167543"><![CDATA[social media]]></keyword>          <keyword tid="180672"><![CDATA[gun violence]]></keyword>          <keyword tid="167229"><![CDATA[stress]]></keyword>          <keyword tid="180673"><![CDATA[modeling stress]]></keyword>          <keyword tid="180674"><![CDATA[social media and gun 
violence]]></keyword>          <keyword tid="109"><![CDATA[Georgia Tech]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="89321"><![CDATA[Munmun De Choudhury]]></keyword>          <keyword tid="180675"><![CDATA[koustuv saha]]></keyword>          <keyword tid="180676"><![CDATA[cscw]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="618556">  <title><![CDATA[Novel App Uses AI to Guide, Support Cancer Patients]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Artificial Intelligence is helping to guide and support some 50 breast cancer patients in rural Georgia through a novel mobile application that gives them personalized recommendations on everything from side effects to insurance.<br /><br />The app, called MyPath, adapts to each stage in a patient&rsquo;s cancer journey. So the information available on the app &ndash; which runs on a tablet computer &ndash; regularly changes based on each patient&rsquo;s progress. Are you scheduled for surgery? 
MyPath will tell you what you need to know the day before.<br /><br />&ldquo;Patients have told us, &lsquo;It just seemed to magically know what I needed,&rsquo;&rdquo; said&nbsp;<a href="https://www.cc.gatech.edu/people/elizabeth-mynatt">Elizabeth Mynatt</a>, principal investigator for the work and Distinguished Professor in the&nbsp;<a href="https://www.ic.gatech.edu/">School of Interactive Computing</a>&nbsp;at Georgia Tech.<br /><br />Mynatt, who is also Executive Director of the&nbsp;<a href="http://www.ipat.gatech.edu/">Institute for People and Technology</a>, believes that MyPath is the first healthcare app capable of personalization (through its application of AI) for holistic cancer care. In addition to incorporating a patient&rsquo;s medical data, the app also addresses a variety of other relevant issues such as social and emotional needs.<br /><br />She presented the work February 15 at the 2019 meeting of the American Association for the Advancement of Science. The research has been sponsored by the National Cancer Institute.<br /><br /><strong>National Recognition</strong><br /><br />In January MyPath was recognized by iSchools, a consortium of some 100 institutions worldwide (including Georgia Tech) dedicated to advancing the information field. Maia Jacobs, who recently received her Ph.D. from Georgia Tech for her work on MyPath, was named winner of the 2019 iSchools Doctoral Dissertation Award.<br /><br />According to iSchools, &ldquo;the Award Committee felt [that Jacobs&rsquo; work] was timely and important, and lauded its impact in how patients manage their health.&rdquo; Jacobs, now a postdoctoral fellow at Harvard, is currently exploring how to expand MyPath to other diseases.<br /><br />The work was also honored in 2016 when it was featured in a report to President Barack Obama by the President&rsquo;s Cancer Panel. 
The report, Improving Cancer-Related Outcomes with Connected Health, aimed to &ldquo;help patients manage their health information and participate in their own care,&rdquo; according to a Georgia Tech story at the time.<br /><br /><strong>The Beginning</strong><br /><br />Six years ago Mynatt&rsquo;s team began working with the Harbin Clinic in Rome, Georgia. &ldquo;They have a tremendous program in holistic cancer care where they recognize that their patients, who are from a large rural area, face a variety of challenges to be able to successfully navigate the cancer journey,&rdquo; Mynatt said.<br /><br />But the Harbin doctors and cancer navigators &ndash; people who help patients through the cancer journey &ndash; wanted a better way to stay connected to patients on a regular basis. The navigators, in particular, found that they tended to interact with patients a great deal at diagnosis, but less frequently over time. And that meant that although there are many recommendations for, say, lowering anxiety, they weren&rsquo;t necessarily being communicated.<br /><br />Said Mynatt, &ldquo;We wondered how technology could amplify what these great people are doing.&rdquo;<br /><br /><strong>How it Works</strong><br /><br />MyPath begins with a mobile library of resources compiled from the American Cancer Society and other reputable organizations. Then, it is personalized with each patient&rsquo;s diagnosis and treatment plan, including the dates for specific procedures. Patients also complete regular surveys that help inform the system &ndash; and caregivers &ndash; of their changing needs and symptoms.<br /><br />The result is a system that provides each patient with resources and suggestions specific to their personal situation. 
Because MyPath knows, for example, that you have stage 2 breast cancer and will be undergoing a lumpectomy on a specific date, when you click on the category &ldquo;Preparing for Surgery&rdquo; it will suggest relevant articles to prepare you for what&rsquo;s ahead. Have you reported nausea in the system&rsquo;s survey? MyPath will bring your attention to resources that can help combat the side effect. The system also provides quick access to contact information for specific caregivers.<br /><br />Other apps &ndash; and the Internet &ndash; aren&rsquo;t personalized. That means slogging through a great deal of often technical information that&rsquo;s not relevant to your situation. In contrast, &ldquo;Every day MyPath puts the right resources at your fingertips to help you through your cancer journey,&rdquo; Mynatt said.<br /><br /><strong>More than Medical</strong><br /><br />Some of MyPath&rsquo;s most popular features have nothing to do directly with cancer. Buttons for &ldquo;Emotional Support&rdquo; and &ldquo;Day to Day Matters&rdquo; are regularly consulted by patients. &ldquo;When we asked them about how they used the tablet for healthcare, many patients would talk to us about playing Angry Birds, which they would download to distract them during chemo sessions,&rdquo; Mynatt said.<br /><br />MyPath is the second generation of the app. Patient feedback from its predecessor, My Journey Compass, led to changes including the personalization. Development continues. For example, Mynatt&rsquo;s team is hoping to expand the app for use by cancer survivors, who often face additional challenges like hormone replacement therapy. The team is also working on a version that individual patients could download, which would make the app available to many more users.<br /><br />This work is sponsored by the National Cancer Institute, part of the National Institutes of Health, under award R01 CA195653. 
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1551292516</created>  <gmt_created>2019-02-27 18:35:16</gmt_created>  <changed>1551292516</changed>  <gmt_changed>2019-02-27 18:35:16</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Artificial Intelligence is helping to guide and support some 50 breast cancer patients in rural Georgia through a novel mobile application that gives them personalized recommendations on everything from side effects to insurance.]]></teaser>  <type>news</type>  <sentence><![CDATA[Artificial Intelligence is helping to guide and support some 50 breast cancer patients in rural Georgia through a novel mobile application that gives them personalized recommendations on everything from side effects to insurance.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-02-27T00:00:00-05:00</dateline>  <iso_dateline>2019-02-27T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-02-27 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Elizabeth Thomson</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>617953</item>      </media>  <hg_media>          <item>          <nid>617953</nid>          <type>image</type>          <title><![CDATA[Tablet computer running MyPath app]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[mypath_5799.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/mypath_5799.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/mypath_5799.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/mypath_5799.jpg?itok=tgnQKJBR]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[MyPath application on a tablet computer]]></image_alt>                    <created>1550355258</created>          <gmt_created>2019-02-16 22:14:18</gmt_created>          <changed>1550355258</changed>          <gmt_changed>2019-02-16 22:14:18</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="172776"><![CDATA[MyPath]]></keyword>          <keyword tid="10989"><![CDATA[Beth Mynatt]]></keyword>          <keyword tid="2493"><![CDATA[health care]]></keyword>          <keyword tid="2835"><![CDATA[ai]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="109"><![CDATA[Georgia Tech]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="12888"><![CDATA[IPaT]]></keyword>          <keyword tid="118671"><![CDATA[Maia Jacobs]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="617730">  <title><![CDATA[College to Host More Than 60 Undergraduate Women for Inaugural I.AM.GradComputing]]></title>  <uid>33939</uid>  <body><![CDATA[<p>To support the success of women in computing, Georgia Tech this week is hosting I.AM.GradComputing, a 
research-focused workshop for undergraduate women in computing.</p><p>The inaugural event begins Thursday and is organized by Georgia Tech&rsquo;s <a href="http://www.cc.gatech.edu">College of Computing</a>, <a href="http://www.ic.gatech.edu">School of Interactive Computing</a>, and the College&rsquo;s <a href="https://www.cc.gatech.edu/student-life/gt-computing-community/oec-office">Office of Outreach, Enrollment and Community</a>.</p><p>Between 60 and 70 undergraduate women from institutions in the Southeast are participating thanks to a gift from Google. Among the schools represented are Georgia Tech, Kennesaw State, Spelman College, and Agnes Scott College. Attendees were selected through an application process.</p><p>&ldquo;We are encouraged by the trajectory of women who are electing to pursue graduate degrees in computing, but there&rsquo;s so much more left to accomplish,&rdquo; said <strong>Ayanna Howard</strong>, chair of the School of Interactive Computing and one of the organizers of the event.</p><p>&ldquo;We want to engage with and provide guidance to women in all of these different critical areas like networking and branding and the benefits of a graduate degree. This event is going to be a wonderful opportunity to do that.&rdquo;</p><p>Following a welcome dinner on Thursday, the I.AM.GradComputing workshop will feature a series of sessions on Friday. 
The sessions will cover relevant topics including:</p><ul><li>Tools and tips on research opportunities</li><li>Networking and personal brand building</li><li>Career planning</li><li>Building self-confidence</li><li>Achieving a healthy work-life balance</li></ul><p>Attendees will have an opportunity to engage with experienced women in computing, such as Howard and IC faculty members <strong>Amy Bruckman</strong>, <strong>Beki Grinter</strong>, and <strong>Rosa Arriaga</strong>, among others.</p><p>The goal of the workshop, according to Howard, is to better prepare these women to succeed in computing-related careers and, ultimately, to increase the number of undergraduate women pursuing graduate degrees in computing-related fields.</p><p>I.AM.GradComputing wraps up Saturday with a hackathon centered on AI for social good. During the six-hour event, students will be encouraged to conceptualize or create an artificial intelligence application that addresses a social issue of their choice.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1550026848</created>  <gmt_created>2019-02-13 03:00:48</gmt_created>  <changed>1550026848</changed>  <gmt_changed>2019-02-13 03:00:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Between 60 and 70 undergraduate women from institutions in the Southeast are participating thanks to a gift from Google.]]></teaser>  <type>news</type>  <sentence><![CDATA[Between 60 and 70 undergraduate women from institutions in the Southeast are participating thanks to a gift from Google.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-02-12T00:00:00-05:00</dateline>  <iso_dateline>2019-02-12T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-02-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a 
href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>617729</item>      </media>  <hg_media>          <item>          <nid>617729</nid>          <type>image</type>          <title><![CDATA[Women of robotics]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[women_in_robotics.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/women_in_robotics.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/women_in_robotics.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/women_in_robotics.jpg?itok=RTARgIck]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Women of Georgia Tech Robotics]]></image_alt>                    <created>1550026547</created>          <gmt_created>2019-02-13 02:55:47</gmt_created>          <changed>1550026547</changed>          <gmt_changed>2019-02-13 02:55:47</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="8469"><![CDATA[women in computing]]></keyword>          <keyword tid="208"><![CDATA[computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          
<keyword tid="39401"><![CDATA[OEC]]></keyword>          <keyword tid="144291"><![CDATA[Office of Outreach Enrollment and Community]]></keyword>          <keyword tid="180502"><![CDATA[graduate computing]]></keyword>          <keyword tid="1051"><![CDATA[Computer Science]]></keyword>          <keyword tid="8471"><![CDATA[grace hopper]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="617728">  <title><![CDATA[Team of Researchers Headed to SXSW EDU to Discuss VR in Education]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Classrooms in Cobb County, Ga., are using virtual reality (VR) to venture inside plant cells. Students in Mumbai, India, are using VR to explore the Louvre Museum. But are the learning outcomes actually better for the kids?</p><p>Georgia Tech and Stanford University researchers will discuss this and other crucial questions about the benefits and challenges of using VR in the classroom during a panel next month at South by Southwest EDU (SXSW EDU) 2019.</p><p>The panel, <em>Virtually Real: Using Immersive Tech in Education</em>, is set for 5 p.m. March 4&nbsp;in Room 11AB of the Austin Convention Center. <a href="https://schedule.sxswedu.com/2019/events/PP87095">Further information about the panel can be found here.</a> It will feature Georgia Tech&rsquo;s <strong>Neha Kumar</strong> and <strong>Tamara Pearson</strong>, Stanford&rsquo;s <strong>Aditya Vishwanath</strong>, and Cobb County Schools&rsquo; <strong>Sally Creel</strong>.</p><p>Teachers, school administrators, and others attending the panel can expect a lively and insightful discussion. 
The panelists will use their research findings from the Cobb County and Mumbai projects as a springboard to discuss:</p><ul><li>Social implications of using VR in the classroom</li><li>Implications for resource-constrained populations</li><li>Physical challenges like dizziness or nausea that can affect users of VR or other immersive technologies</li><li>How to maintain engagement when VR is no longer a novel technology</li></ul><p>Along with sharing their research and lessons learned, the panelists hope to have an open conversation with attendees about their experiences, questions, or concerns about using VR in the classroom to improve learning outcomes.</p><p>The SXSW EDU Conference &amp; Festival is an annual event that &ldquo;cultivates and empowers a community of engaged stakeholders to advance teaching and learning.&rdquo; Along with panel sessions for leading educational experts, the four-day event offers attendees workshops, interactive learning experiences, film screenings, early-stage startups, and business and networking opportunities.</p><p><strong>Panelists</strong></p><ul><li>Neha Kumar is an assistant professor, jointly appointed in Georgia Tech&rsquo;s College of Computing and Sam Nunn School of International Affairs. Her research lies at the intersection of human-centered computing and global development.</li><li>Aditya Vishwanath is a Knight-Hennessy Scholar at Stanford University, pursuing his Ph.D. in learning sciences and technology design. He completed his undergraduate degree at Georgia Tech&rsquo;s College of Computing.</li><li>Tamara Pearson is the associate director of school and community engagement at the Center for Education Integrating Science, Mathematics and Computing (CEISMC) at Georgia Tech. 
Her current work focuses on partnering with schools and districts to help develop innovative curriculum and programs, as well as understanding how to best engage populations historically underrepresented in STEM fields.</li><li>Sally Creel is the STEM and Innovation supervisor at Cobb County Schools. She coordinated implementation of VR resources in the local schools for the team&rsquo;s study, including recruiting classrooms and teachers to participate.</li></ul><p><strong>Spread the word!</strong></p><p>Well-attended sessions at SXSW EDU tend to benefit from the strong support of the networks surrounding the speakers themselves, as well as attendees. Help the panelists by spreading the word about their talk on social media. <a href="https://www.sxswedu.com/social-media-marketing-toolkit/">You can access a social media toolkit here.</a></p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1550012665</created>  <gmt_created>2019-02-12 23:04:25</gmt_created>  <changed>1550012665</changed>  <gmt_changed>2019-02-12 23:04:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The panel, Virtually Real: Using Immersive Tech in Education, is set for 5 p.m. March 4 in Room 11AB of the Austin Convention Center.]]></teaser>  <type>news</type>  <sentence><![CDATA[The panel, Virtually Real: Using Immersive Tech in Education, is set for 5 p.m. 
March 4 in Room 11AB of the Austin Convention Center.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-02-12T00:00:00-05:00</dateline>  <iso_dateline>2019-02-12T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-02-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>617717</item>      </media>  <hg_media>          <item>          <nid>617717</nid>          <type>image</type>          <title><![CDATA[SXSW EDU Social]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2019-02-12 at 3.46.35 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202019-02-12%20at%203.46.35%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202019-02-12%20at%203.46.35%20PM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202019-02-12%2520at%25203.46.35%2520PM.png?itok=Y9ZFMt6x]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[See you at SXSW EDU March 4-7, 2019]]></image_alt>                    <created>1550004824</created>          <gmt_created>2019-02-12 20:53:44</gmt_created>          <changed>1550004824</changed>          <gmt_changed>2019-02-12 20:53:44</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://schedule.sxswedu.com/2019/events/PP87095]]></url>        <title><![CDATA[Virtually Real: Using Immersive Tech in Education]]></title>   
   </link>          <link>        <url><![CDATA[https://www.cc.gatech.edu/news/605000/vr-taking-students-where-once-only-ms-frizzle-and-magic-school-bus-could]]></url>        <title><![CDATA[VR Taking Students Where Once Only Ms. Frizzle and the Magic School Bus Could]]></title>      </link>          <link>        <url><![CDATA[https://www.cc.gatech.edu/content/researchers-work-kids-mumbai-examine-classroom-potential-virtual-reality]]></url>        <title><![CDATA[Researchers Work with Kids in Mumbai to Examine Classroom Potential of Virtual Reality]]></title>      </link>          <link>        <url><![CDATA[https://www.sxswedu.com/]]></url>        <title><![CDATA[SXSW EDU 2019]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="145251"><![CDATA[virtual reality]]></keyword>          <keyword tid="148381"><![CDATA[vr]]></keyword>          <keyword tid="138871"><![CDATA[Neha Kumar]]></keyword>          <keyword tid="177678"><![CDATA[aditya vishwanath]]></keyword>          <keyword tid="180500"><![CDATA[Sally Creel]]></keyword>          <keyword tid="172657"><![CDATA[Tamara Pearson]]></keyword>          <keyword tid="109"><![CDATA[Georgia Tech]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="167256"><![CDATA[Sam Nunn School of International Affairs]]></keyword>          <keyword tid="411"><![CDATA[CEISMC]]></keyword>          <keyword tid="174526"><![CDATA[Cobb County Schools]]></keyword>          <keyword tid="180501"><![CDATA[VR in 
education]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="617450">  <title><![CDATA[More Than 120 Students Participate in Interactivity@GT 2019]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech hosted its annual Interactivity event on Jan. 31 in the Historic Academy of Medicine.</p><p>More than 120 students and representatives from 61 companies participated in the annual showcase and job fair for graduate students enrolled in one of three master&rsquo;s programs at Georgia Tech &ndash; M.S. in Human-Computer Interaction, M.S. in Digital Media, and Master of Industrial Design.</p><p>New this year, Interactivity, which is presented by the GVU Center and sponsored by Mailchimp, included a traditional job fair. Eighteen companies participated in the fair, which was focused specifically on user experience-related jobs.</p><p>As in past years, Interactivity kicked off with a morning poster session for students to share research projects with visiting industry partners. After lunch, students took part in &ldquo;one-minute madness,&rdquo; an opportunity for each student to take the stage and give a one-minute elevator pitch about themselves, their interests, and their work.</p><p>&ldquo;Interactivity is unique because it provides a one-stop shop for companies looking for world-class HCI and UX talent,&rdquo; said <strong><a href="https://www.ic.gatech.edu/people/richard-henneman">Dick Henneman</a></strong>, a professor of the practice in the School of Interactive Computing and the Director of the MS-HCI program. &ldquo;We experimented this year by including a traditional career fair for our MS-HCI, MID, and MSDM students. 
Judging by the reaction from both students and company recruiters, it was a huge hit that will continue in the future.&rdquo;</p><p>Interactivity has proven successful over the years for students looking to enter industry as STEM professionals. From 2014 to 2018, in fact, more than 50 percent of graduates from the MS-HCI program took jobs at major companies in five of the top 10 metro areas for STEM professionals &ndash; including 28.7 percent to Atlanta, 15.8 percent to San Francisco, Calif., and 6.4 percent to Seattle, Wash.</p><p>Learn more in the graphic below, or&nbsp;<a href="https://public.tableau.com/views/HCIgrads-2014-2018/Dashboard1?:embed=y&amp;:display_count=yes&amp;publish=yes:showVizHome=no#2">click the link to interact with the graphic in a new window</a>.</p><p>For more information on Georgia Tech&rsquo;s affiliated master&rsquo;s programs and Interactivity in general, follow the links below:</p><ul><li><a href="http://mshci.gatech.edu/">Master of Science in Human-Computer Interaction</a></li><li><a href="https://dm.lmc.gatech.edu/">Master of Science in Digital Media</a></li><li><a href="https://id.gatech.edu/mid">Master of Industrial Design</a></li><li><a href="http://interactivity.cc.gatech.edu/">Interactivity@GT</a></li></ul>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1549485972</created>  <gmt_created>2019-02-06 20:46:12</gmt_created>  <changed>1549572141</changed>  <gmt_changed>2019-02-07 20:42:21</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Representatives from 61 companies also attended the annual event for master's students at Georgia Tech.]]></teaser>  <type>news</type>  <sentence><![CDATA[Representatives from 61 companies also attended the annual event for master's students at Georgia Tech.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-02-06T00:00:00-05:00</dateline>  <iso_dateline>2019-02-06T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-02-06 00:00:00</gmt_dateline>  
<subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>617442</item>          <item>617514</item>      </media>  <hg_media>          <item>          <nid>617442</nid>          <type>image</type>          <title><![CDATA[Interactivity 2019]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[IMG_2984.JPG]]></image_name>            <image_path><![CDATA[/sites/default/files/images/IMG_2984.JPG]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/IMG_2984.JPG]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/IMG_2984.JPG?itok=eZDVh22W]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Interactivity 2019]]></image_alt>                    <created>1549484478</created>          <gmt_created>2019-02-06 20:21:18</gmt_created>          <changed>1549484478</changed>          <gmt_changed>2019-02-06 20:21:18</gmt_changed>      </item>          <item>          <nid>617514</nid>          <type>image</type>          <title><![CDATA[MS-HCI graduate job placements]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[mshci grap placement graphic.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/mshci%20grap%20placement%20graphic.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/mshci%20grap%20placement%20graphic.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/mshci%2520grap%2520placement%2520graphic.png?itok=uaMD0teV]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1549571937</created>          <gmt_created>2019-02-07 20:38:57</gmt_created>          <changed>1549571937</changed>          <gmt_changed>2019-02-07 20:38:57</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://public.tableau.com/views/HCIgrads-2014-2018/Dashboard1?:embed=y&amp;:display_count=yes&amp;publish=yes:showVizHome=no#2]]></url>        <title><![CDATA[MS-HCI Graduate Job Placement Map]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="14646"><![CDATA[human-computer interaction]]></keyword>          <keyword tid="107441"><![CDATA[ms-hci]]></keyword>          <keyword tid="180428"><![CDATA[ms digital media]]></keyword>          <keyword tid="124"><![CDATA[Digital Media]]></keyword>          <keyword tid="3128"><![CDATA[Industrial Design]]></keyword>          <keyword tid="180429"><![CDATA[dick henneman]]></keyword>          <keyword tid="2483"><![CDATA[interactive computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="177254"><![CDATA[GTComputing]]></keyword>          <keyword tid="109"><![CDATA[Georgia Tech]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>    
  </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="617424">  <title><![CDATA[Two Computing Professors Among Finalists in Dean Search]]></title>  <uid>34541</uid>  <body><![CDATA[<p>Georgia Tech announced today that four finalists have been chosen in the College of Computing dean search.</p><p>The search began last summer following the announcement that Zvi Galil, John P. Imlay Jr. Dean of Computing, would <a href="https://www.cc.gatech.edu/news/606918/dean-zvi-galil-step-down-after-next-academic-year">step down as dean at the end of the 2018/19 academic year</a>.</p><p>Among the finalists announced today are College of Computing Executive Associate Dean and Professor <a href="https://www.cc.gatech.edu/~isbell/"><strong>Charles Isbell</strong></a> and <strong>Ellen</strong> <strong>Zegura</strong>, Fleming Chair and Professor in the School of Computer Science.</p><p>As part of the final selection process, each candidate will visit campus and present an open seminar addressing their broad vision for the College of Computing.</p><p>The hour-long seminars are open to all students, faculty, and staff. Interested individuals can attend in person, watch real-time via live stream, or watch a post-event video of each candidate presentation.</p><p>The finalists are included below in order of their campus seminar presentations:</p><ul><li><strong>Charles Isbell</strong>,&nbsp;professor and executive associate dean for the College of Computing at the Georgia Institute of Technology, will present an open seminar on&nbsp;<strong>Feb. 19, at 11 a.m. in Clough Undergraduate Learning Commons, Room 152.</strong><br />&nbsp;</li><li><strong>Kathleen Fisher</strong>, chair of the Computer Science Department at Tufts University, will present an open seminar on&nbsp;<strong>Feb. 21, at 11 a.m. 
in Clough Undergraduate Learning Commons, Room 152</strong><br />&nbsp;</li><li><strong>Radha Poovendran</strong>, professor and chair of the Electrical and Computer Engineering Department at the University of Washington, will present an open seminar on&nbsp;<strong>Feb. 26, at 11 a.m. in Clough Undergraduate Learning Commons, Room 152.</strong><br />&nbsp;</li><li><strong>Ellen Zegura</strong>, Fleming Professor in the School of Computer Science and executive faculty co-director of the Center for Serve-Learn-Sustain at the Georgia Institute of Technology, will present an open seminar on&nbsp;<strong>Feb. 28, at 11 a.m.</strong>&nbsp;<strong>in Clough Undergraduate Learning Commons, Room 152.</strong></li></ul><p>Additional details can be found on the College of Computing&nbsp;<a href="http://www.provost.gatech.edu/dean-computing">dean search site</a>, including each respective candidate&rsquo;s bio and curriculum vitae, as well as links to the seminars and surveys. Note that Georgia Tech login credentials are required to access the live stream and post-event videos. 
Surveys for the College of Computing dean search will be available through midnight on March 3.</p>]]></body>  <author>Tess Malone</author>  <status>1</status>  <created>1549477369</created>  <gmt_created>2019-02-06 18:22:49</gmt_created>  <changed>1549571638</changed>  <gmt_changed>2019-02-07 20:33:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Four finalists have been chosen for the College of Computing dean search.]]></teaser>  <type>news</type>  <sentence><![CDATA[Four finalists have been chosen for the College of Computing dean search.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-02-06T00:00:00-05:00</dateline>  <iso_dateline>2019-02-06T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-02-06 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[albert.snedeker@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Albert Snedeker, News &amp; Media Relations Manager</p><p><a href="mailto:albert.snedeker@cc.gatech.edu">albert.snedeker@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>617425</item>      </media>  <hg_media>          <item>          <nid>617425</nid>          <type>image</type>          <title><![CDATA[Dean Search]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[deansearch.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/deansearch.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/deansearch.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/deansearch.jpeg?itok=BUtOjaQw]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Dean Search]]></image_alt>                    
<created>1549477400</created>          <gmt_created>2019-02-06 18:23:20</gmt_created>          <changed>1549477400</changed>          <gmt_changed>2019-02-06 18:23:20</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="606703"><![CDATA[Constellations Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="617001">  <title><![CDATA[Fairness in Machine Learning Conference Comes to Atlanta]]></title>  <uid>34541</uid>  <body><![CDATA[<p>Fairness in&nbsp;machine learning (ML) is becoming one of the most pressing issues in society. This week, more than 500 people are in Atlanta for the&nbsp;Fairness, Accountability, and Transparency (FAT) conference, Jan. 29 through 31,&nbsp;to discuss improving ethics in ML.</p><p>As more and more products and services come to rely on artificial intelligence and ML, ethical issues continue to arise. 
According to School of Computer Science Assistant Professor&nbsp;<a href="http://jamiemorgenstern.com/"><strong>Jamie Morgenstern</strong></a>, who is one of the conference&#39;s program chairs, this is because much of the data used to train these systems is historical and often reflects societal biases of the time.</p><p><strong><a href="https://www.cc.gatech.edu/news/610888/jamie-morgenstern-wants-bring-fairness-machine-learning" target="_blank">[RELATED:&nbsp;Jamie Morgenstern Wants to Bring Fairness to Machine Learning]</a></strong></p><p>The FAT conference was established to mitigate these issues by developing awareness of this inherent bias. Morgenstern defines each term as follows:</p><ul><li><strong>Fairness:</strong> This can also be called predictive equity. Systems should do a similarly good job improving services for all groups.</li><li><strong>Accountability: </strong>Researchers should be able to explain why computational systems behave the way they do.</li><li><strong>Transparency:</strong> A system should be understandable to the population it will serve.</li></ul><p>Because these issues affect more than just computer science, and ML now touches everything from policy to business, conference attendees include lawyers, policymakers, and a variety of industry representatives.</p><p>&ldquo;If we&rsquo;re just having this conversation ourselves as computer scientists, we will invariably get it wrong,&rdquo; Morgenstern said. &ldquo;We want to promote a broad, diverse population to come together, network, and be externally visible in this field.&rdquo;</p><p><strong><a href="https://www.cc.gatech.edu/news/616279/human-rights-may-help-shape-artificial-intelligence-2019" target="_blank">[RELATED:&nbsp;&#39;Human Rights&#39; May Help Shape Artificial Intelligence in 2019]</a></strong></p><p>Now in its second year, the conference is newly affiliated with ACM. 
The program chairs are Morgenstern and Data &amp; Society founder and Microsoft Research Principal Researcher <strong>danah boyd</strong>. Local Chairs Professor <strong>Deven Desai</strong> of Georgia Tech&#39;s Scheller College of Business and <strong>Brandeis Marshall</strong> of Spelman College have also been critical to the conference&rsquo;s mission.</p><p>Georgia Tech also has a paper at the conference: <a href="https://dl.acm.org/authorize?N675456"><strong><em>A Taxonomy of Ethical Tensions in Inferring Mental Health States from Social Media</em></strong></a> by School of Interactive Computing (IC) Ph.D. student <a href="http://steviechancellor.com/"><strong>Stevie Chancellor</strong></a>, Dr.<strong> Michael Birnbaum</strong>, University of Rochester Professor<strong> Eric Caine</strong> and Associate Professor<strong> Vincent Silenzio, </strong>and IC Assistant Professor <a href="http://www.munmund.net/"><strong>Munmun De Choudhury</strong></a><strong>.</strong></p>]]></body>  <author>Tess Malone</author>  <status>1</status>  <created>1548710770</created>  <gmt_created>2019-01-28 21:26:10</gmt_created>  <changed>1549075003</changed>  <gmt_changed>2019-02-02 02:36:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[SCS Assistant Professor Jamie Morgenstern acts as program chair for important machine learning conference.]]></teaser>  <type>news</type>  <sentence><![CDATA[SCS Assistant Professor Jamie Morgenstern acts as program chair for important machine learning conference.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-01-28T00:00:00-05:00</dateline>  <iso_dateline>2019-01-28T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-01-28 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[tess.malone@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Tess Malone, Communications Officer</p><p><a 
href="mailto:tess.malone@cc.gatech.edu">tess.malone@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>617002</item>      </media>  <hg_media>          <item>          <nid>617002</nid>          <type>image</type>          <title><![CDATA[Scales ]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[2000px-Unbalanced_scales2.svg_.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/2000px-Unbalanced_scales2.svg_.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/2000px-Unbalanced_scales2.svg_.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/2000px-Unbalanced_scales2.svg_.png?itok=AaIjQnFO]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Scales]]></image_alt>                    <created>1548711313</created>          <gmt_created>2019-01-28 21:35:13</gmt_created>          <changed>1548711313</changed>          <gmt_changed>2019-01-28 21:35:13</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>          <category tid="129"><![CDATA[Institute and Campus]]></category>      </categories>  <news_terms>          <term tid="129"><![CDATA[Institute and Campus]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  
<userdata><![CDATA[]]></userdata></node><node id="617061">  <title><![CDATA[See and Say: Abhishek Das Working to Provide Crucial Communication Tools for Intelligent Agents]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Ph.D. student <a href="https://abhishekdas.com/"><strong>Abhishek Das</strong></a> remembers the moment his interests in computer vision and language began to come into focus. It was early in his time as a Ph.D. student when he came across an algorithm that could generate a one-line natural language description of an image with incredible accuracy. When he saw the results, it seemed almost magical, he said.</p><p>&ldquo;I was blown away because you could give it any image, and it would generate a fairly plausible sentence,&rdquo; he said. &ldquo;I had never seen that before.&rdquo;</p><p>Six months later, there were papers being published on question answering, where the algorithm could not only generate a sentence but could even answer questions about the image. He was similarly floored by the impressive results.</p><p>He was advised by <strong><a href="https://www.cc.gatech.edu/~dbatra/">Dhruv Batra</a></strong> and also worked closely with <strong><a href="https://www.cc.gatech.edu/~parikh/">Devi Parikh</a></strong>, both assistant professors at Virginia Tech at the time. When they joined Georgia Tech, Das brought his thirst for research in that space to Atlanta, as well. 
Now, nearly two years later, he has published a number of research papers in projects ranging from <a href="https://visualdialog.org/">visual dialogue</a> to a task called &ldquo;<a href="https://embodiedqa.org/">embodied question answering</a>.&rdquo; He is working toward additional research involving multiple agents, and sees a world not far off that takes advantage of all of this simulated research to develop hardware for assistive tech like in-home robots.</p><h3>&#39;It feels within reach...&#39;</h3><p>It&rsquo;s a future that has been featured in popular culture for years &ndash; think about <a href="http://thejetsons.wikia.com/wiki/Rosey">Rosie, the robot maid who first appeared on <em>The Jetsons </em>in 1962</a> &ndash; but is one that Das is beginning to see on the horizon.</p><p>&ldquo;It feels within reach, the vision that we see in science fiction,&rdquo; he said. &ldquo;Movies of robots that you can talk to or give instructions to.&rdquo;</p><p>While people outside of the research sphere may see only the cold steel exterior of these imagined robots, building a viable foundation for them requires many different research elements. This includes work in computer vision, which involves analysis of visual information by a machine, and language, which involves written or verbal communication and instruction. Das works at the intersection of both domains.</p><p>Broadly, his research has been in developing algorithms and intelligent agents that can see, talk, and ultimately act on that understanding in physical environments, taking actions like navigation or executing instructions.</p><p><a href="https://embodiedqa.org/paper.pdf">Findings from a recent research project</a> were published and <a href="https://www.youtube.com/watch?v=gz2VoDrvX-A&amp;feature=youtu.be&amp;t=1h29m14s">presented</a> at the <a href="http://cvpr2018.thecvf.com/">2018 Computer Vision and Pattern Recognition conference</a> in Salt Lake City, Utah. 
They explored an idea called embodied question answering. In this project, there is an agent that is asked a question and must ascertain an answer by moving through and inquiring about other aspects of its environment.</p><p>&ldquo;It combines these three modalities: computer vision, language understanding, and reinforcement learning to take actions in this environment,&rdquo; Das said.</p><p>The application here could be an assistive robot that could take a question or a command &ndash; &ldquo;Where are my keys?&rdquo; for example &ndash; and provide an answer or perform a task based on its understanding of the environment. He&rsquo;s also conducting similar work with multiple agents, which could coordinate with one another to perform certain tasks.</p><p>&ldquo;I&rsquo;m not currently working with the hardware side of things,&rdquo; he said. &ldquo;All of this is simulation, but these are the end goals. The vision is that these will make it to robots with these sorts of capabilities. And, more importantly, the algorithms that I&rsquo;m building will hopefully generalize and be useful for a wide variety of tasks.&rdquo;</p><h3>A culture of collaboration</h3><p>Das&rsquo; work has received extensive media attention, and he has had the opportunity to work under some prestigious grants and fellowships. Currently, he is supported by fellowships from Facebook, Adobe, and Snap. He was recently awarded fellowships from Facebook, Microsoft Research, and NVIDIA. 
He declined the latter two and accepted the Facebook fellowship.</p><p>One of the great benefits, he said, of working at Georgia Tech in this space has been the opportunity to collaborate with individuals who are conducting research in complementary domains.</p><p>&ldquo;On my floor in the College of Computing, there are people who are experts in computer vision, natural language processing, reinforcement learning, in robotics, or other areas, and it&rsquo;s always awesome to bounce ideas off of them,&rdquo; he said.</p><p>&ldquo;Just this semester, I was taking (Associate Professor) <a href="https://www.cc.gatech.edu/~chernova/"><strong>Sonia Chernova</strong></a>&rsquo;s course in human-robot interaction, and we prototyped a version of a tabletop embodied robot that could actually implement a very primitive version of the embodied question answering algorithm. That was a very interesting experience.&rdquo;</p><p>Das is gaining valuable new experience this semester, as well. Having interned three times at Facebook AI Research, he is spending this semester in London interning with DeepMind, where he will work in areas related to this general space of agents that can see, talk, and act.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1548872368</created>  <gmt_created>2019-01-30 18:19:28</gmt_created>  <changed>1548872368</changed>  <gmt_changed>2019-01-30 18:19:28</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[School of Interactive Computing student Abhishek Das has published a number of research papers in projects ranging from visual dialogue to a task called “embodied question answering.”]]></teaser>  <type>news</type>  <sentence><![CDATA[School of Interactive Computing student Abhishek Das has published a number of research papers in projects ranging from visual dialogue to a task called “embodied question answering.”]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-01-30T00:00:00-05:00</dateline>  
<iso_dateline>2019-01-30T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-01-30 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>617059</item>      </media>  <hg_media>          <item>          <nid>617059</nid>          <type>image</type>          <title><![CDATA[Abhishek Das]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Abhishek Das.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Abhishek%20Das.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Abhishek%20Das.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Abhishek%2520Das.jpeg?itok=k-_VZJ-u]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Abhishek Das]]></image_alt>                    <created>1548872305</created>          <gmt_created>2019-01-30 18:18:25</gmt_created>          <changed>1548872305</changed>          <gmt_changed>2019-01-30 18:18:25</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="176750"><![CDATA[Abhishek Das]]></keyword>          
<keyword tid="11506"><![CDATA[computer vision]]></keyword>          <keyword tid="180344"><![CDATA[nlp]]></keyword>          <keyword tid="23981"><![CDATA[natural language processing]]></keyword>          <keyword tid="173615"><![CDATA[dhruv batra]]></keyword>          <keyword tid="173616"><![CDATA[devi parikh]]></keyword>          <keyword tid="180345"><![CDATA[embodied question answering]]></keyword>          <keyword tid="176752"><![CDATA[visual dialogue]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="1051"><![CDATA[Computer Science]]></keyword>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="2352"><![CDATA[robots]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="616821">  <title><![CDATA[Seeing is Believing: Atlanta Ranks #7 for STEM Professionals]]></title>  <uid>32045</uid>  <body><![CDATA[<p>Choosing a job based on its location is never easy. 
This is particularly true for science, technology, engineering, and math (STEM) professionals who often have multiple job offers in different cities.</p><p>But thanks to a <a href="https://public.tableau.com/views/Atlanta7STEM-friendlycity2019/Dashboard1?:embed=y&amp;:display_count=yes&amp;publish=yes&amp;:showVizHome=no" target="_blank">new data visualization created by Georgia Tech</a>, comparing the 100 largest metro areas in the United States just got a whole lot easier.</p><p>The interactive tool visualizes data compiled and published by personal finance site WalletHub, which was featured in a recent <a href="https://www.ajc.com/news/world/atlanta-named-one-the-best-metro-areas-for-stem-professionals/5dCFqvf8XOmQ5ZARSV85dL/">Atlanta Journal-Constitution story</a>. It allows users to easily navigate and understand the rankings for each city in three categories: professional opportunities, STEM-friendliness,&nbsp;and quality of life.</p><p>According to the data, <a href="https://www.cc.gatech.edu/about/atlanta" target="_blank">Atlanta</a> ranks as the #7 city in the U.S. for STEM professionals. The city ranks #1 for job openings for STEM graduates per capita and #2 for the quality of engineering opportunities.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1548346553</created>  <gmt_created>2019-01-24 16:15:53</gmt_created>  <changed>1548871425</changed>  <gmt_changed>2019-01-30 18:03:45</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new GT Computing data visualization features rankings for the top 100 U.S. cities. ]]></teaser>  <type>news</type>  <sentence><![CDATA[A new GT Computing data visualization features rankings for the top 100 U.S. cities. 
]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-01-24T00:00:00-05:00</dateline>  <iso_dateline>2019-01-24T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-01-24 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[albert.snedeker@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Albert Snedeker, Communications Manager</p><p><a href="mailto:albert.snedeker@cc.gatech.edu?subject=ATL%20STEM%20Pros">albert.snedeker@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>584932</item>      </media>  <hg_media>          <item>          <nid>584932</nid>          <type>image</type>          <title><![CDATA[Coda - Renderings ]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Coda2.Updated.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Coda2.Updated.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Coda2.Updated.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Coda2.Updated.jpg?itok=3a0jKtPF]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1481562001</created>          <gmt_created>2016-12-12 17:00:01</gmt_created>          <changed>1481562001</changed>          <gmt_changed>2016-12-12 17:00:01</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive 
Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="489"><![CDATA[atlanta]]></keyword>          <keyword tid="2301"><![CDATA[entrepreneur]]></keyword>          <keyword tid="167258"><![CDATA[STEM]]></keyword>          <keyword tid="180290"><![CDATA[STEM professionals]]></keyword>          <keyword tid="46361"><![CDATA[GT computing]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="9167"><![CDATA[machine learning]]></keyword>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="145071"><![CDATA[fintech]]></keyword>          <keyword tid="292"><![CDATA[Biotech]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="616001">  <title><![CDATA[AAAI 2019: Charles Isbell Named a 2019 Fellow and Ashok Goel to Give Invited Talk at AI Conference]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Professors and students from <a href="http://ml.gatech.edu/">The Machine Learning Center at Georgia Tech (ML@GT)</a> are kicking off the New Year presenting some of their latest research at the 33<sup>rd</sup> AAAI Conference on Artificial Intelligence (AAAI-19) in Honolulu, Hawaii, Jan. 27 through Feb. 1.</p><p>Focused solely on artificial intelligence, the conference brings together more than 2,000 artificial intelligence (AI) researchers from academia and industry.</p><p>One of the highlights of the conference for Georgia Tech will be the recognition of<strong> Charles Isbell</strong> as a <a href="https://twitter.com/RealAAAI/status/1072132592017625088">2019 AAAI Fellow</a>. 
Isbell, Associate Executive Dean for the <a href="https://www.cc.gatech.edu/">College of Computing</a>, is being recognized for his more than two decades of significant and sustained technical contributions to the field of AI.</p><p>Also well known for <a href="https://www.popsci.com/heres-how-an-ai-tricked-students-into-thinking-it-was-their-ta">his contributions to AI</a>, ML@GT&rsquo;s <strong>Ashok Goel</strong> is one of the conference&rsquo;s invited speakers and will be discussing <em>Experiments in Teaching AI</em>. In his talk, Goel will present several experiments on teaching cognitive systems in online and blended learning settings. Goel &ndash; a professor in the <a href="https://www.ic.gatech.edu/">School of Interactive Computing at Georgia Tech</a> &ndash; will also share results and draw out some general principles for teaching AI, as well as using AI to teach AI.</p><p>AAAI-19 will also feature five Georgia Tech-led research papers. According to the conference website, accepted papers touch on a variety of topics within the field including natural language processing, robotics, deep learning, and knowledge representation, and can be applied to transportation, commerce, sustainability, healthcare, and other important industries.</p><p>Georgia Tech&rsquo;s five papers are:</p><ul><li><em><a href="https://www.cc.gatech.edu/~isbell/papers/aaai2019composable.pdf">Composable Modular Reinforcement Learning</a></em></li><li><em><a href="https://arxiv.org/abs/1804.04164">Understanding Story Characters, Movie Actors and Their Versatility with Gaussian Representations</a></em></li><li><em><a href="https://arxiv.org/pdf/1811.05831.pdf">Revisiting Projection-Free Optimization For Strongly Convex Constraint Sets</a></em></li><li><em><a href="https://arxiv.org/pdf/1711.06232.pdf">A Novel Framework for Robustness Analysis of Visual QA Models</a></em></li><li><em><a href="https://arxiv.org/pdf/1809.01852.pdf">GAMENet: Graph Augmented MEmory Networks for 
Recommending Medication Combination</a></em></li></ul><p>The <a href="https://sites.google.com/view/kegworkshop/">Knowledge Extraction from Games</a> workshop taking place on Jan. 27 was organized by Georgia Tech Computer Science Ph.D. candidate <strong>Matthew Guzdial </strong>and his peers from Pomona College and Drexel University. The workshop will explore approaches to, and questions about, the automated extraction of design elements, music, character graphics, and other &ldquo;knowledge&rdquo; from games.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1546886235</created>  <gmt_created>2019-01-07 18:37:15</gmt_created>  <changed>1548866363</changed>  <gmt_changed>2019-01-30 16:39:23</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech will present five papers and bring home several big honors at AAAI 2019.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech will present five papers and bring home several big honors at AAAI 2019.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-01-17T00:00:00-05:00</dateline>  <iso_dateline>2019-01-17T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-01-17 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>616622</item>      </media>  <hg_media>          <item>          <nid>616622</nid>          <type>image</type>          <title><![CDATA[Ashok Goel will present a keynote speech and Charles Isbell will be honored as a 2019 Fellow at AAAI.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[aaai.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/aaai.jpg]]></image_path>        
    <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/aaai.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/aaai.jpg?itok=CCMaqlXb]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1547822281</created>          <gmt_created>2019-01-18 14:38:01</gmt_created>          <changed>1547822281</changed>          <gmt_changed>2019-01-18 14:38:01</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="606703"><![CDATA[Constellations Center]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="616279">  <title><![CDATA['Human Rights' May Help Shape Artificial Intelligence in 2019]]></title>  <uid>32045</uid>  <body><![CDATA[<p>Ethics and accountability will be among the most significant challenges for artificial intelligence (AI) in 2019, according to a survey of researchers at Georgia Tech&rsquo;s College of Computing.</p><p>In response to an email query about AI developments that can be expected in 2019, most of the researchers 
&ndash; whether talking about <a href="http://ml.gatech.edu/">machine learning</a> (ML), <a href="http://www.robotics.gatech.edu/">robotics</a>, <a href="http://vis.gatech.edu/">data visualizations</a>, <a href="https://gtnlp.wordpress.com/">natural language processing</a>, or other facets of AI &ndash; touched on the growing importance of recognizing the needs of people in AI systems.</p><p>&ldquo;In 2019, I hope we will see AI researchers and practitioners start to frame the debate about proper and improper uses of artificial intelligence and machine learning in terms of human rights,&rdquo; said Associate Professor <a href="http://eilab.gatech.edu/mark-riedl"><strong>Mark Riedl</strong></a>.</p><p><strong><a href="https://youtu.be/o-YLQJ-oRqE" target="_blank">[RELATED: Is AI Coming For My Job?]</a></strong></p><p>&ldquo;More and more, interpretability and fairness are being recognized as critical issues to address to ensure AI appropriately interacts with society,&rdquo; said Ph.D. student&nbsp;<strong><a href="https://fredhohman.com/">Fred Hohman</a></strong>.</p><h4><strong>Taking on algorithmic bias</strong></h4><p>Questions about the rights of end users of AI-enabled services and products are becoming a priority, but Riedl said more is needed.</p><p>&ldquo;Companies are making progress in recognizing that AI systems may be biased in prejudicial ways. [However,] we need to start talking about the next step: remedy. 
How do people seek remedy if they believe an AI system made a wrong decision?&rdquo; said Riedl.</p><p>Assistant Professor <a href="http://jamiemorgenstern.com/"><strong>Jamie Morgenstern</strong></a> sees algorithmic bias as an ongoing concern in 2019 and gave banking as an example of an industry that may be in the news for its algorithmic decision-making.</p><p>&ldquo;I project that we&rsquo;ll have more high-profile examples of financial systems that use machine learning having worse rates of lending to women, people of color, and other communities historically underrepresented in the &lsquo;standard&rsquo; American economic system,&rdquo; Morgenstern said.</p><p><strong><a href="https://www.cc.gatech.edu/news/615576/georgia-tech-researchers-working-improve-fairness-ml-pipeline" target="_blank">[RELATED:&nbsp;Researchers Working To Improve Fairness in the ML Pipeline]</a></strong></p><p>In recent years corporate responses to cases of bias have been hit or miss, but Assistant Professor <a href="http://www.munmund.net/"><strong>Munmun De Choudhury</strong></a> said 2019 may see a shift in how tech companies balance their shareholders&rsquo; interests with the interests of their customers and society.</p><p>&ldquo;[Companies] will be increasingly subject to governmental regulation and will be forced to come up with safeguards to address misuse and abuse of their technologies, and will even consider broader partnerships with their market competitors to achieve this. For some corporations, business interests may take a backseat to ethics until they regain customer trust,&rdquo; said De Choudhury.</p><h4><strong>Working toward more transparency</strong></h4><p>One way companies can regain that trust is through sharing their algorithms with the public, our experts said.</p><p>&ldquo;Developers tend to walk around feeling objective because &lsquo;it&rsquo;s the algorithm that is determining the answer&rsquo;. 
Moving forward, I believe that the algorithms will have to be increasingly &lsquo;inspectable&rsquo; and developers will have to explain their answers,&rdquo; said Executive Associate Dean and Professor <a href="https://www.cc.gatech.edu/fac/Charles.Isbell/"><strong>Charles Isbell</strong></a>.</p><p>Ph.D. student&nbsp;<a href="https://www.cc.gatech.edu/~ypinter3/"><strong>Yuval Pinter</strong></a> agreed. In the coming year, &ldquo;[I] think we will see that researchers are trying to [develop] techniques and tests that can help us to better understand what&rsquo;s going on in the actual wiring of our very fancy machine learning models.</p><p>&ldquo;This is not only for curiosity but also because legal applications or regulation in various countries are starting to require that algorithmic decision-making programs be able to explain why they are doing what they are doing,&rdquo; said Pinter.</p><p>Regents&rsquo; Professor <a href="https://www.cc.gatech.edu/aimosaic/faculty/arkin/"><strong>Ron Arkin</strong></a> believes that these concerns are becoming more central precisely because artificial intelligence will continue to grow in importance in our everyday lives.</p><p><strong><a href="https://www.ic.gatech.edu/podcasts/ep-1-pt-1-whos-behind-wheel" target="_blank">[RELATED: Who&#39;s Behind the Wheel?]</a></strong></p><p>&ldquo;Despite continued hype and omnipresent doomsayers, panic and fear over the growth of AI and robotics should begin to subside in 2019 as the benefits to people&rsquo;s lives are becoming more apparent to the world.</p><p>&ldquo;However, I expect to see lawyers jumping into the fray so we may also see lawsuits determining policy for self-driving cars [and other applications] more so than government regulation or the legal system,&rdquo; said Arkin.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1547238989</created>  <gmt_created>2019-01-11 20:36:29</gmt_created>  <changed>1548430063</changed>  
<gmt_changed>2019-01-25 15:27:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers say ethics and transparency are likely top 2019 trends in the burgeoning field of AI.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers say ethics and transparency are likely top 2019 trends in the burgeoning field of AI.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-01-15T00:00:00-05:00</dateline>  <iso_dateline>2019-01-15T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-01-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[Georgia Tech experts highlight need to address bias and transparency in ongoing debate about role of AI]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[albert.snedeker@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Albert Snedeker, Communications Manager</p><p><a href="mailto:albert.snedeker@cc.gatech.edu?subject=2019%20AI%20Predictions">albert.snedeker@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>616435</item>      </media>  <hg_media>          <item>          <nid>616435</nid>          <type>image</type>          <title><![CDATA[GT Computing 2019 AI Predictions]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Predictions rotator_final main.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Predictions%20rotator_final%20main.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Predictions%20rotator_final%20main.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Predictions%2520rotator_final%2520main.png?itok=qleCxe30]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[GT Computing 
2019 AI Predictions]]></image_alt>                    <created>1547573803</created>          <gmt_created>2019-01-15 17:36:43</gmt_created>          <changed>1547573803</changed>          <gmt_changed>2019-01-15 17:36:43</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="545781"><![CDATA[Institute for Data Engineering and Science]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="9167"><![CDATA[machine learning]]></keyword>          <keyword tid="180204"><![CDATA[algorithmic bias]]></keyword>          <keyword tid="2947"><![CDATA[transparency]]></keyword>          <keyword tid="180205"><![CDATA[riedl]]></keyword>          <keyword tid="180206"><![CDATA[hohman]]></keyword>          <keyword tid="175631"><![CDATA[isbell]]></keyword>          <keyword tid="180207"><![CDATA[de choudhury]]></keyword>          <keyword tid="180208"><![CDATA[morgenstern]]></keyword>          <keyword tid="180209"><![CDATA[arkin]]></keyword>          <keyword tid="180210"><![CDATA[2019 trends]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>          <term tid="39541"><![CDATA[Systems]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  
<userdata><![CDATA[]]></userdata></node><node id="615833">  <title><![CDATA[Seth Hutchinson Named New Executive Director of IRIM]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The Georgia Institute of Technology has selected <strong>Seth Hutchinson</strong> as the new executive director of the&nbsp;<a href="http://www.robotics.gatech.edu/">Institute for Robotics and Intelligent Machines</a>&nbsp;(IRIM).&nbsp;<a href="https://www.cc.gatech.edu/~seth/">Hutchinson</a>&nbsp;is a professor and KUKA Chair for Robotics in Georgia Tech&rsquo;s College of Computing and has served as associate director of IRIM.</p><p>Before joining Georgia Tech in January 2018, he was a professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign. Hutchinson holds a bachelor of science, master of science and Ph.D. in electrical engineering from Purdue University.</p><p>&ldquo;Seth is internationally known for his work in robotics as evidenced by his more than 200 publications, his editor-in-chief role of the&nbsp;<em>IEEE Transactions on Robotics</em>&nbsp;and his recent selection as president-elect of the IEEE Robotics and Automation Society,&rdquo; said Chaouki Abdallah, Georgia Tech&rsquo;s executive vice president for research. &ldquo;I am pleased that he will be the new executive director of Georgia Tech&rsquo;s Institute for Robotics and Intelligent Machines, and I look forward to working with him toward the goal of making Georgia Tech the leader in robotics, autonomy and manufacturing.&rdquo;&nbsp;</p><p>Hutchinson&rsquo;s research interests lie in vision-based control, motion planning, planning under uncertainty, pursuit-evasion, localization and mapping, locomotion and bio-inspired robotics. 
He is the coauthor of two books, &ldquo;<em>Principles of Robot Motion: Theory, Algorithms, and Implementations</em>&rdquo; and &ldquo;<em>Robot Modeling and Control</em>.&rdquo;</p><p>&ldquo;The robotics research happening here at Georgia Tech is among the best in the world, from actuators to high-level reasoning,&rdquo; he said. &ldquo;I honestly cannot think of a place I&rsquo;d rather be right now than here, working with this group of people.&rdquo;</p><p>At Georgia Tech, IRIM serves as an umbrella under which robotics researchers, educators and students from across campus can come together to advance the many high-powered and diverse robotics activities.&nbsp;</p><p>IRIM&rsquo;s mission is to create new and exciting opportunities for faculty collaboration; educate the next generation of robotics experts, entrepreneurs, and academic leaders; and partner with industry and government to pursue truly transformative robotics research. IRIM serves more than 90 faculty members, 180 graduate students and 40 robotics labs. 
The robotics program at Georgia Tech attracts more than $60 million in research annually.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1546538215</created>  <gmt_created>2019-01-03 17:56:55</gmt_created>  <changed>1546538215</changed>  <gmt_changed>2019-01-03 17:56:55</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Hutchinson is a professor and KUKA Chair for Robotics in Georgia Tech’s College of Computing and has served as associate director of IRIM.]]></teaser>  <type>news</type>  <sentence><![CDATA[Hutchinson is a professor and KUKA Chair for Robotics in Georgia Tech’s College of Computing and has served as associate director of IRIM.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2019-01-03T00:00:00-05:00</dateline>  <iso_dateline>2019-01-03T00:00:00-05:00</iso_dateline>  <gmt_dateline>2019-01-03 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>John Toon</p><p>Research News</p><p><a href="mailto:jtoon@gatech.edu">jtoon@gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>615814</item>      </media>  <hg_media>          <item>          <nid>615814</nid>          <type>image</type>          <title><![CDATA[Seth Hutchinson, executive director of IRIM]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[seth-hutchinson-9688.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/seth-hutchinson-9688.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/seth-hutchinson-9688.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/seth-hutchinson-9688.jpg?itok=S4b0XovF]]></image_740>            
<image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Seth Hutchinson with robotics lab]]></image_alt>                    <created>1546526715</created>          <gmt_created>2019-01-03 14:45:15</gmt_created>          <changed>1546526715</changed>          <gmt_changed>2019-01-03 14:45:15</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="129"><![CDATA[Institute and Campus]]></category>          <category tid="132"><![CDATA[Institute Leadership]]></category>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="129"><![CDATA[Institute and Campus]]></term>          <term tid="132"><![CDATA[Institute Leadership]]></term>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="169760"><![CDATA[Seth Hutchinson]]></keyword>          <keyword tid="78271"><![CDATA[IRIM]]></keyword>          <keyword tid="78811"><![CDATA[Institute for Robotics and Intelligent Machines]]></keyword>          <keyword tid="180037"><![CDATA[IRIM director]]></keyword>          <keyword tid="667"><![CDATA[robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      
</news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="615418">  <title><![CDATA[Assistant Professor Dhruv Batra Earns Prestigious ECASE-Army Award]]></title>  <uid>33939</uid>  <body><![CDATA[<p><a href="http://www.ic.gatech.edu/">School of Interactive Computing</a> Assistant Professor <strong>Dhruv Batra</strong> was recently selected as a recipient of the prestigious Early Career Award for Scientists and Engineers (ECASE-Army) by the Army Research Office, providing five years&rsquo; worth of research funding to make artificial intelligence (AI) systems more transparent, explainable, and trustworthy.</p><p>The award, which provides a total of $1 million over the course of the grant, comes as a result of Batra&rsquo;s selection for a similar early-career award by the Army Research Office Young Investigator Program in 2014.</p><p>The research Batra&rsquo;s lab will pursue with the funding addresses a fundamental challenge in the development of AI systems &ndash; their &ldquo;black-box&rdquo; nature, the consequent difficulty humans face in identifying why or how AI systems fail, and how to improve upon those technologies. When a self-driving car from a major tech company, for example, suffered its first fatality in 2015, legal and regulatory agencies understandably questioned what went wrong. The challenge at the time was providing a sufficient answer to that question.</p><p>&ldquo;Your response can&rsquo;t just be, &lsquo;Well, there was this machine learning box in there, and it just didn&rsquo;t detect the car. We don&rsquo;t know why,&rsquo;&rdquo; said Batra, who is also a member of the <a href="http://ml.gatech.edu">Machine Learning</a> and <a href="http://gvu.gatech.edu">GVU</a> Centers.</p><p>Batra&rsquo;s research aims to create AI systems that can more readily explain what they do and why. 
This could come in the form of natural language or visual explanations, both of which &ndash; computer vision and natural language processing &ndash; are central areas of focus in Batra&rsquo;s lab. The machine could, for example, identify regions in an image that provide support for its predictions, potentially assisting a user&rsquo;s understanding of what the machine can or cannot do.</p><p>It&rsquo;s an important area of study for a few reasons, Batra said. He classifies AI technology into three levels of maturity:</p><ul><li>Level 1 is technology that is in its infancy. It is not near deployment to everyday users, and the consumers of the technology are researchers. The goal for transparency and explanation is to help researchers and developers to understand the failure modes and current limitations, and deduce how to improve the technology &ndash; &ldquo;actionable insight,&rdquo; as Batra called it.<br />&nbsp;</li><li>Level 2 is when things are working to a degree, enough so that the technology can and has been deployed.<br /><br />&ldquo;The technology may be mature in a narrow range, and you can ship the product,&rdquo; Batra said. &ldquo;Like face detection or fingerprint technology. It&rsquo;s built into products and being used at agencies, airports, or other places.&rdquo;<br /><br />In such cases, you want explanations and interpretability that help build appropriate trust with users. Users can understand when the system reliably works and when it might not work &ndash; face detection in bad lighting, for example &ndash; and make efforts to use it in a more appropriate setting.<br />&nbsp;</li><li>Level 3 is typically a fairly narrow category where the AI is better &ndash; sometimes significantly so &ndash; than the human. Batra used chess-playing and Go-playing bots as an example. 
The best chess-playing bots convincingly outperform the best humans and reliably hand a resounding defeat to the average human player.<br /><br />&ldquo;We already know bots play much better than humans,&rdquo; he said. &ldquo;In such cases, you don&rsquo;t need to improve the machine and you already trust its skill level. You want the machine to give you explanations not so that you can improve the AI, but so that you can improve yourself.&rdquo;</li></ul><p>Batra envisions scenarios where the techniques his lab develops could assist at all three levels, but the experiments will take place between Levels 1 and 2. They will work on Visual Question Answering &ndash; agents that answer natural language questions about visual content &ndash; and other technologies whose maturity may reach the product level in five or more years.</p><p>The funding will begin in January. Batra has served as an assistant professor at Georgia Tech since Fall 2016. <a href="https://www.cc.gatech.edu/~dbatra/">Visit his website for more information about his research.</a></p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1544812745</created>  <gmt_created>2018-12-14 18:39:05</gmt_created>  <changed>1544812745</changed>  <gmt_changed>2018-12-14 18:39:05</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The award will provide $1 million worth of funding over the course of the next five years.]]></teaser>  <type>news</type>  <sentence><![CDATA[The award will provide $1 million worth of funding over the course of the next five years.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-12-14T00:00:00-05:00</dateline>  <iso_dateline>2018-12-14T00:00:00-05:00</iso_dateline>  <gmt_dateline>2018-12-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a 
href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>586461</item>      </media>  <hg_media>          <item>          <nid>586461</nid>          <type>image</type>          <title><![CDATA[Dhruv Batra]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[DhruvBatra.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/DhruvBatra.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/DhruvBatra.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/DhruvBatra.jpg?itok=ImTmEvl-]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1485377710</created>          <gmt_created>2017-01-25 20:55:10</gmt_created>          <changed>1485377710</changed>          <gmt_changed>2017-01-25 20:55:10</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="173615"><![CDATA[dhruv batra]]></keyword>          <keyword tid="179995"><![CDATA[ecase-army]]></keyword>          <keyword tid="1633"><![CDATA[PECASE]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>  
        <keyword tid="173614"><![CDATA[visual question answering]]></keyword>          <keyword tid="179996"><![CDATA[VQA]]></keyword>          <keyword tid="2835"><![CDATA[ai]]></keyword>          <keyword tid="179997"><![CDATA[explainable AI]]></keyword>          <keyword tid="8494"><![CDATA[HCI]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="615032">  <title><![CDATA[Computing Professors Recognized With Prestigious ACM Fellowships]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Two Georgia Tech College of Computing faculty members have been named as Fellows of the Association for Computing Machinery (ACM).</p><p>In an announcement made today, Executive Associate Dean <strong>Charles Isbell</strong> and School of Interactive Computing Professor <strong>Amy Bruckman</strong> were named as two of 56 ACM Fellows selected for 2018.</p><p>According to the ACM news release, &ldquo;the accomplishments of the 2018 ACM Fellows underpin the technologies that define the digital age and greatly impact our professional and personal lives. ACM Fellows are composed of an elite group that represents less than 1 percent of the Association&rsquo;s global membership.&rdquo;</p><p>Isbell, a Georgia Tech alumnus, was named as an ACM Fellow &ldquo;for contributions to interactive machine learning; and for contributions to increasing access and diversity in computing.&rdquo;</p><p>The organization selected Bruckman for her &ldquo;contributions to collaborative computing and foundational work in Internet research ethics.&rdquo;</p><p>&ldquo;In society, when we identify our tech leaders, we often think of men and women in industry who have made technologies pervasive while building major corporations,&rdquo; said ACM President Cherri M. Pancake. 
&ldquo;At the same time, the dedication, collaborative spirit and creativity of the computing professionals who initially conceived and developed these technologies goes unsung. The ACM Fellows program publicly recognizes the people who made key contributions to the technologies we enjoy. Even when their work did not directly result in a specific technology, they have made major theoretical contributions that have advanced the science of computing. We are honored to add a new class of Fellows to ACM&rsquo;s ranks and we look forward to the guidance and counsel they will provide to our organization.&rdquo;</p><p>Underscoring ACM&rsquo;s global reach, the 2018 Fellows hail from universities, companies and research centers in Finland, Greece, Israel, Sweden, Switzerland, and the US.</p><p>The 2018 Fellows have been cited for numerous contributions in areas including accessibility, augmented reality, algorithmic game theory, data mining, storage, software and the World Wide Web.</p><p>ACM will formally recognize its 2018 Fellows at the annual Awards Banquet, to be held in San Francisco on June 15, 2019. 
Additional information about the 2018 ACM Fellows, as well as previous ACM Fellows, is available through the ACM Fellows site.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1544044771</created>  <gmt_created>2018-12-05 21:19:31</gmt_created>  <changed>1544044771</changed>  <gmt_changed>2018-12-05 21:19:31</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[In an announcement made today, Executive Associate Dean Charles Isbell and School of Interactive Computing Professor Amy Bruckman were named as two of 56 ACM Fellows selected for 2018.]]></teaser>  <type>news</type>  <sentence><![CDATA[In an announcement made today, Executive Associate Dean Charles Isbell and School of Interactive Computing Professor Amy Bruckman were named as two of 56 ACM Fellows selected for 2018.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-12-05T00:00:00-05:00</dateline>  <iso_dateline>2018-12-05T00:00:00-05:00</iso_dateline>  <gmt_dateline>2018-12-05 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Ben Snedeker</p><p>Communications Manager</p><p><a href="mailto:albert.snedeker@cc.gatech.edu">albert.snedeker@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>615031</item>      </media>  <hg_media>          <item>          <nid>615031</nid>          <type>image</type>          <title><![CDATA[ACM Fellows]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ACM Fellows.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ACM%20Fellows.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ACM%20Fellows.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ACM%2520Fellows.jpg?itok=bM9_LcqC]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Amy Bruckman and Charles Isbell]]></image_alt>                    <created>1544044546</created>          <gmt_created>2018-12-05 21:15:46</gmt_created>          <changed>1544044546</changed>          <gmt_changed>2018-12-05 21:15:46</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="8472"><![CDATA[amy bruckman]]></keyword>          <keyword tid="10664"><![CDATA[charles isbell]]></keyword>          <keyword tid="3047"><![CDATA[ACM]]></keyword>          <keyword tid="113911"><![CDATA[ACM Fellows]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="208"><![CDATA[computing]]></keyword>          <keyword tid="172908"><![CDATA[Association for Computing Machinery]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="613667">  <title><![CDATA[Georgia Tech Ph.D. 
Student Wins Best Paper Honorable Mention at VISxAI 2018]]></title>  <uid>34773</uid>  <body><![CDATA[<p><a href="https://www.cse.gatech.edu/">Georgia Tech Computational Science and Engineering (CSE)</a> Ph.D. student <strong>Fred Hohman</strong> was recently recognized with an honorable mention for best paper at this year&rsquo;s&nbsp;VISxAI workshop. The workshop is a part of the&nbsp;<a href="http://ieeevis.org/year/2018/welcome">IEEE VIS 2018</a> conference.</p><p>Hohman&rsquo;s &ldquo;explorable&rdquo; article <a href="https://idyll.pub/post/dimensionality-reduction-293e465c2a3443e8941b016d/">The Beginner&rsquo;s Guide to Dimensionality Reduction</a> was created in collaboration with <strong>Matt Conlen</strong> of the University of Washington. Using a dataset of artworks from the Metropolitan Museum of Art in New York City, Hohman and Conlen explore the methods that data scientists use to visualize high-dimensional data.</p><p>Visualizing the myriad connections between all of the different features of each artwork in a high-dimensional graph could provide new insights. However, as Hohman says in the article, humans can&rsquo;t see so many dimensions all at once.</p><p>Dimensionality reduction algorithms reduce the number of random variables by collecting a set of principal variables that retain the variation present in the data. This allows the data to be presented in fewer dimensions, which can be more easily processed by human viewers. This kind of projection is called an&nbsp;<em>embedding</em>.</p><p>The guide teaches users about embeddings and compares some of the most popular dimensionality reduction algorithms used today to create them. The article also contains a list of pros and cons for each of the algorithms to help readers use this technique for their own data. 
All of the algorithms mentioned have open-source Python implementations.</p><p>&ldquo;Explorable and interactive articles are a great medium for teaching concepts that haven&rsquo;t seen much usage and attention in academia yet,&rdquo; said Hohman. &ldquo;It&rsquo;s really great to see recognition for our article, which helps people learn and engage with complicated concepts through interactive visualizations that are easily accessible on the web.&rdquo;</p><p>IEEE VIS is the flagship conference on visualization and visual analytics. Hohman was also a panelist at this year&rsquo;s event, and his advisor, CSE Associate Professor <strong>Polo Chau</strong>, served as a co-organizer of VISxAI. IEEE VIS was held Oct. 21-26 in Berlin, Germany.</p><p>For more information on Georgia Tech&rsquo;s presence at IEEE VIS, explore highlights with the <a href="https://gvu.gatech.edu/vis-2018">GVU Center&rsquo;s interactive overview.</a></p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1541101463</created>  <gmt_created>2018-11-01 19:44:23</gmt_created>  <changed>1543857599</changed>  <gmt_changed>2018-12-03 17:19:59</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Hohman and Conlen demonstrate how artwork from the Metropolitan Museum of Art can be categorized using machine learning techniques.]]></teaser>  <type>news</type>  <sentence><![CDATA[Hohman and Conlen demonstrate how artwork from the Metropolitan Museum of Art can be categorized using machine learning techniques.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-11-01T00:00:00-04:00</dateline>  <iso_dateline>2018-11-01T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-11-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  
<boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>613693</item>      </media>  <hg_media>          <item>          <nid>613693</nid>          <type>image</type>          <title><![CDATA[ML@GT Ph.D. student Fred Hohman collaborated with Matt Conlen of the University of Washington to create an explorable paper about high-dimensional data visualization.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[me6-1 copy.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/me6-1%20copy.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/me6-1%20copy.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/me6-1%2520copy.jpg?itok=Kjf2Rr9R]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1541105775</created>          <gmt_created>2018-11-01 20:56:15</gmt_created>          <changed>1541605649</changed>          <gmt_changed>2018-11-07 15:47:29</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="614861">  <title><![CDATA[New IC Assistant Professor Matthew 
Gombolay Takes Flight at Georgia Tech]]></title>  <uid>33939</uid>  <body><![CDATA[<p><a href="http://www.ic.gatech.edu">School of Interactive Computing</a> Assistant Professor <strong><a href="https://www.cc.gatech.edu/people/matthew-gombolay">Matthew Gombolay</a></strong> was always interested in space and aviation. He had taken some flying lessons as a teen and after college decided that he wanted to finish his pilot certification. He had received some prodding from his then-girlfriend &ndash; now wife &ndash; in the form of a flight lesson in the Washington D.C. area.</p><p>Although he had taken lessons when he was around 15 or 16 years old, he treated it like a brand-new experience.</p><p>&ldquo;It kind of was,&rdquo; said Gombolay, who had backed out of his lessons when he was younger after being told he was ready to fly solo about eight hours in. &ldquo;I got a little shy and embarrassed and quiet, but I always wanted to do it.&rdquo;</p><p>And he did, receiving his certification after finishing his undergraduate degree. It was everything he had hoped. It was a different experience from his studies and his research, which is mostly an exercise of his mental capabilities. Flying required mental effort, but also physical &ndash; his hands, his feet, coordination of his body. 
It was something that he appreciated.</p><p>But it was also something that helped guide him on the path he wanted to follow for his research.</p><h3><strong>Earning his wings</strong></h3><p>His studies lie in a number of areas, namely robotics, artificial intelligence (AI), machine learning (ML), human factors engineering, human-robot interaction, planning and scheduling, queuing theory, real-time systems, and operations research.</p><p>A lot of that was borne out of a specific experience he had during a flight lesson.</p><p>&ldquo;I was flying the aircraft, and my instructor told me to plot a diversion to another airport because we were going to pretend that the airport I was headed to had some weather that would prevent me from landing,&rdquo; he explained. &ldquo;That&rsquo;s a lot of work. You have to fly the plane, you have to get out a map and do all the segments you&rsquo;ll take, measure the angles, measure distances, calculate fuel burn and figure out how you&rsquo;ll change your flight plan.&rdquo;</p><p>So, when given the directive to use autopilot while doing the calculations, Gombolay input his altitude and heading and stuck his head into his calculations.</p><p>It was a mistake.</p><p>&ldquo;She told me I broke the first rule,&rdquo; he said. &ldquo;You have to aviate, navigate, then communicate. I was so desperate to handle the workload that I turned over the first duty to an autopilot and didn&rsquo;t really know how it worked.</p><p>&ldquo;If there was Mt. Everest in front of us, it wasn&rsquo;t going to steer away. If there was another plane, it wasn&rsquo;t going to steer away. If it was low on fuel, it wasn&rsquo;t going to tell me to turn back.&rdquo;</p><p>It was a realization of how quickly and easily humans are willing to trust automated systems that may not be entirely prepared to handle that workload. 
Your willingness to be vulnerable is a huge choice, he said.</p><p>&ldquo;I can trust something, but that doesn&rsquo;t make it trustworthy,&rdquo; he said.</p><p>This realization helped guide him to human factors engineering during his graduate studies at the Massachusetts Institute of Technology, where he earned his Ph.D. in Autonomous Systems in 2017.</p><h3><strong>Making robots personal</strong></h3><p>Since joining Georgia Tech at the beginning of the fall semester, Gombolay has been growing his lab and beginning a handful of new projects that build on some of his past research.</p><p>One recently funded project done in collaboration with MIT focuses on how humans make decisions as part of a team &ndash; the strategies, the styles, etc. Using a video game as an example, he explained that some individuals may prefer an aggressive approach versus a defensive one.</p><p>&ldquo;These different stylistic things emerge naturally in how humans solve problems,&rdquo; Gombolay said. &ldquo;But that diversity isn&rsquo;t very pleasant for machine learning algorithms because the average of two different people is not a third good person. It&rsquo;s just an ugly mess.&rdquo;</p><p>His lab is looking at ways to synthesize policies that can leverage all of the data about styles and strategies and tailor it to individual differences. Health care is an example, Gombolay said. Consider a physical therapist who wants to teach a robot how to take care of a patient at home.
Each therapist has his or her own unique style of stretching, massaging, and strength-training their patients, and each patient has a unique malady, response profile, or anatomy.</p><p>&ldquo;Most algorithms today that would be put on a robot to help it learn how to care for a patient would either apply a one-size-fits-all model, which can result in a blend that helps nobody, or train from scratch for each new patient-therapist combination, which would take way too long to be a practical solution.</p><p>&ldquo;We want to leverage every robot&rsquo;s collective experience while still being able to tailor the behavior to each individual.&rdquo;</p><p>Other areas of focus for his lab include manufacturing, health care, and new areas in reinforcement learning. He is currently funding three students, and his lab includes one research scientist.</p><h3><strong>A few hobbies</strong></h3><p>When he&rsquo;s not doing research &ndash; or maybe flying a plane &ndash; Gombolay is usually taking part in one of his other hobbies, like tennis or building models of Star Wars or Star Trek ships with his LEGOs and MegaBloks.</p><p>He&rsquo;s also a musician, who started on the violin and piano before adding an alto saxophone to the mix and later a guitar. The guitar is his instrument of choice nowadays, and he&rsquo;s spent a lot of time using it in bands in college &ndash; church, a cover band, talent shows and the like.</p><p>He&rsquo;s found a couple of people on campus, like fellow IC Professor <strong><a href="https://www.cc.gatech.edu/people/seth-hutchinson">Seth Hutchinson</a></strong>, who don&rsquo;t mind getting together for a jam session now and again.</p><p>As the Star Wars and Star Trek models might suggest, Gombolay has always been fascinated by space and space travel. It&rsquo;s influenced his path in research and, who knows, in another life he might have been an astronaut.</p><p>&ldquo;Maybe,&rdquo; he said when asked whether that was ever an ambition. 
&ldquo;Who knows? Maybe one day I&rsquo;ll be on a rocket to Mars. I&rsquo;ll take my wife and the kid with me. She&rsquo;s a physician, so she&rsquo;ll take care of us.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1543614191</created>  <gmt_created>2018-11-30 21:43:11</gmt_created>  <changed>1543614191</changed>  <gmt_changed>2018-11-30 21:43:11</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Inspired by an experience in a flying lesson, Georgia Tech's Matthew Gombolay is researching how to make robotics more personal and trustworthy.]]></teaser>  <type>news</type>  <sentence><![CDATA[Inspired by an experience in a flying lesson, Georgia Tech's Matthew Gombolay is researching how to make robotics more personal and trustworthy.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-11-30T00:00:00-05:00</dateline>  <iso_dateline>2018-11-30T00:00:00-05:00</iso_dateline>  <gmt_dateline>2018-11-30 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>614858</item>      </media>  <hg_media>          <item>          <nid>614858</nid>          <type>image</type>          <title><![CDATA[Matthew Gombolay main]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Main image.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Main%20image.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Main%20image.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Main%2520image.jpeg?itok=EkVec6Od]]></image_740>            <image_mime>image/jpeg</image_mime>            
<image_alt><![CDATA[Matthew Gombolay]]></image_alt>                    <created>1543613417</created>          <gmt_created>2018-11-30 21:30:17</gmt_created>          <changed>1543613417</changed>          <gmt_changed>2018-11-30 21:30:17</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/content/artificial-intelligence-machine-learning]]></url>        <title><![CDATA[Artificial Intelligence and Machine Learning]]></title>      </link>          <link>        <url><![CDATA[https://www.ic.gatech.edu/content/robotics-computational-perception]]></url>        <title><![CDATA[Robotics and Computational Perception]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="175375"><![CDATA[matthew gombolay]]></keyword>          <keyword tid="2483"><![CDATA[interactive computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="8494"><![CDATA[HCI]]></keyword>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="109"><![CDATA[Georgia Tech]]></keyword>          <keyword tid="78271"><![CDATA[IRIM]]></keyword>          <keyword tid="78811"><![CDATA[Institute for Robotics and Intelligent Machines]]></keyword>          <keyword tid="4137"><![CDATA[aeronautics]]></keyword>          <keyword tid="78851"><![CDATA[HRI]]></keyword>          <keyword tid="78841"><![CDATA[human-robot interaction]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>          <term 
tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="614427">  <title><![CDATA[Georgia Tech Will Show Off Latest Research at AI’s ‘Hottest’ Conference]]></title>  <uid>34773</uid>  <body><![CDATA[<p>It is uncommon to hear about a machine learning and artificial intelligence (AI) conference selling out like a Taylor Swift concert, but the <a href="https://nips.cc/Conferences/2018">Neural Information Processing Systems (NeurIPS)</a> conference did just that.</p><p>The conference sold out in <a href="https://medium.com/syncedreview/nips-tickets-sell-out-in-less-than-12-minutes-e3aab37ab36a">less than 12 minutes</a> for its Dec. 2 - 8 gathering in Montreal, Quebec. At one of the biggest AI conferences in the world, tech companies like Google, Microsoft, and Facebook come to find new talent, while renowned researchers present their latest work.</p><p>A large number of Georgia Tech faculty and students will be among the throngs of attendees. With 26 papers by more than 23 Georgia Tech authors and several workshops to participate in, the Yellow Jackets are one of the leading contributors to the conference program.</p><p><strong>Byron Boots</strong> and <strong>Dhruv Batra</strong>, assistant professors in the Machine Learning Center at Georgia Tech (ML@GT) and the <a href="https://www.ic.gatech.edu/">School of Interactive Computing,</a> are serving as area chairs.</p><p>&ldquo;We are thrilled to be a top-performing university at a conference of NeurIPS&rsquo; caliber.
Our faculty and students continue to push boundaries and revolutionize our field, and it shows at events like this,&rdquo; said <strong>Irfan Essa,</strong> <a href="http://ml.gatech.edu/">ML@GT</a> director.</p><p>As NeurIPS has increased in popularity since its first meeting in 1987, the conference now receives thousands of submissions each year, reaching a record high of 3,240 submissions in 2017. Over the years, the content has shifted from examining biological and artificial neural networks to focusing more on AI, statistics, and machine learning.</p><p>Below is a list of Georgia Tech&rsquo;s spotlight presentations, posters, and workshops being featured at NeurIPS next month.</p><p>&nbsp;</p><p><strong>Spotlights</strong></p><ul><li><a href="https://arxiv.org/pdf/1801.03423.pdf">A Smoothed Analysis of the Greedy Algorithm for the Linear Contextual Bandit Problem</a></li></ul><p>Sampath Kannan, Jamie Morgenstern, Aaron Roth, Bo Waggoner, and Steven Wu</p><ul><li><a href="https://arxiv.org/pdf/1811.05016.pdf">Learning Temporal Point Processes via Reinforcement Learning</a></li></ul><p>Shuang Li, Shuai Xiao, Shixiang Zhu, Nan Du, Yao Xie, Le Song</p><ul><li><a href="https://arxiv.org/pdf/1805.10611.pdf">Robust Hypothesis Testing Using Wasserstein Uncertainty Sets</a></li></ul><p>Rui Gao, Liyan Xie, Yao Xie, Huan Xu</p><ul><li><a href="https://arxiv.org/pdf/1807.07531.pdf">Limited Memory Kelley&rsquo;s Method Converges for Composite Convex and Submodular Objectives</a></li></ul><p>Song Zhou, Swati Gupta, and Madeleine Udell</p><ul><li><a href="https://www.seas.upenn.edu/~xsi/data/nips18.pdf">Learning Loop Invariants for Program Verification</a></li></ul><p>Xujie Si, Hanjun Dai, Mukund Raghothaman, Mayur Naik, and Le Song</p><ul><li><a href="https://arxiv.org/pdf/1807.10455.pdf">Acceleration through Optimistic No-Regret Dynamics</a></li></ul><p>Jun-Kun Wang and Jacob Abernethy</p><p>&nbsp;</p><p><strong>Posters</strong></p><ul><li><a 
href="https://arxiv.org/pdf/1805.10755.pdf">Dual Policy Iteration</a></li></ul><p>Wen Sun, Geoff Gordon, Byron Boots, and Drew Bagnell</p><ul><li><a href="https://arxiv.org/abs/1810.13400">Differentiable MPC for End-to-End Planning and Control</a></li></ul><p>Brandon Amos, Jake Sacks, Ivan Dario Jimenez, Byron Boots, and Zico Kolter</p><ul><li><a href="https://arxiv.org/pdf/1809.08820.pdf">Orthogonally Decoupled Variational Gaussian Processes</a></li></ul><p>Hugh Salimbeni, Ching-An Cheng, Byron Boots, and Marc Deisenroth</p><ul><li><a href="https://arxiv.org/pdf/1810.12369.pdf">Learning and Inference in Hilbert Space with Quantum Graphical Models</a></li></ul><p>Sid Srinivasan, Carlton Downey, and Byron Boots</p><ul><li><a href="https://arxiv.org/abs/1811.00103">The Price of Fair PCA: One Extra Dimension</a></li></ul><p>Samira Samadi, Uthaipon Tantipongpipat, Mohit Singh, Jamie Morgenstern, and Santosh Vempala</p><ul><li><a href="https://arxiv.org/pdf/1801.03423.pdf">A Smoothed Analysis of the Greedy Algorithm for the Linear Contextual Bandit Problem</a></li></ul><p>Sampath Kannan, Jamie Morgenstern, Aaron Roth, Bo Waggoner, and Steven Wu</p><ul><li><a href="https://arxiv.org/pdf/1811.05016.pdf">Learning Temporal Point Processes via Reinforcement Learning</a></li></ul><p>Shuang Li, Shuai Xiao, Shixiang Zhu, Nan Du, Yao Xie, Le Song</p><ul><li><a href="https://arxiv.org/pdf/1805.10611.pdf">Robust Hypothesis Testing Using Wasserstein Uncertainty Sets</a></li></ul><p>Rui Gao, Liyan Xie, Yao Xie, Huan Xu</p><ul><li><a href="https://arxiv.org/pdf/1807.07531.pdf">Limited Memory Kelley&rsquo;s Method Converges for Composite Convex and Submodular Objectives</a></li></ul><p>Song Zhou, Swati Gupta, and Madeleine Udell</p><ul><li><a href="https://arxiv.org/pdf/1810.11896.pdf">Smoothed Analysis of Discrete Tensor Decomposition and Assemblies of Neurons</a></li></ul><p>Nima Anari, Amin Saberi, Wolfgang Maass, Robert Legenstein, Christos Papadimitriou, and Santosh 
Vempala</p><ul><li><a href="https://arxiv.org/abs/1803.06416">Differential Privacy for Growing Databases</a></li></ul><p>Rachel Cummings, Sara Krehbiel, Kevin Lai, and Uthaipon (Tao) Tantipongpipat.</p><ul><li><a href="https://arxiv.org/pdf/1808.10056.pdf">Differentially Private Change-Point Detection</a></li></ul><p>Rachel Cummings, Sara Krehbiel, Yajun Mei, Rui Tuo, and Wanrong Zhang</p><ul><li><a href="https://www.cs.rice.edu/~as143/Papers/topkapi.pdf">Topkapi: Parallel and Fast Sketches for Finding Top-K Frequent Elements</a></li></ul><p>Ankush Mandal, He Jiang, Anshumali Shrivastava, and Vivek Sarkar</p><ul><li><a href="https://arxiv.org/pdf/1810.03649.pdf">Overcoming Language Priors in Visual Question Answering with Adversarial Regularization</a></li></ul><p>Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee</p><ul><li><a href="https://arxiv.org/abs/1806.06004">Partially Supervised Image Captioning</a></li></ul><p>Peter Anderson, Stephen Gould, and Mark Johnson</p><ul><li><a href="https://arxiv.org/pdf/1805.09298.pdf">Learning towards Minimum Hyperspherical Energy</a></li></ul><p>Weiyang Liu, Rongmei Lin, Zhen Liu, Lixin Liu, Zhiding Yu, Bo Dai, and Le Song</p><ul><li><a href="https://nips.cc/Conferences/2018/Schedule?showEvent=11921">Coupled Variational Bayes via Optimization Embedding</a></li></ul><p>Bo Dai, Hanjun Dai, Niao He, Weiyang Liu, Zhen Liu, Jianshu Chen, Lin Xiao, and Le Song</p><ul><li><a href="https://www.seas.upenn.edu/~xsi/data/nips18.pdf">Learning Loop Invariants for Program Verification</a></li></ul><p>Xujie Si, Hanjun Dai, Mukund Raghothaman, Mayur Naik, and Le Song</p><ul><li><a href="https://papers.nips.cc/paper/7667-cooperative-neural-networks-conn-exploiting-prior-independence-structure-for-improved-classification.pdf">Cooperative Neural Networks (CoNN): Exploiting Prior Independence Structure for Improved Classification</a></li></ul><p>Harsh Shrivastava, Eugene Bart, Bob Price, Hanjun Dai, Bo Dai, Srinivas Aluru</p><ul><li><a 
href="https://arxiv.org/pdf/1803.02312.pdf">Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization</a></li></ul><p>Minshuo Chen, Lin Yang, Mengdi Wang, and Tuo Zhao</p><ul><li><a href="https://arxiv.org/pdf/1806.01660.pdf">Towards Understanding Acceleration Tradeoff between Momentum and Asynchrony in Distributed Nonconvex Stochastic Optimization</a></li></ul><p>Tianyi Liu, Shiyang Li, Jianping Shi, Enlu Zhou, and Tuo Zhao</p><ul><li><a href="https://arxiv.org/pdf/1612.02803.pdf">The Physical Systems behind Optimization Algorithms</a></li></ul><p>Lin Yang, Raman Arora, Vladimir Braverman, and Tuo Zhao</p><ul><li><a href="https://arxiv.org/pdf/1810.11098.pdf">Provable Gaussian Embedding with One Observation</a></li></ul><p>Ming Yu, Zhuoran Yang, Tuo Zhao, Mladen Kolar, and Zhaoran Wang</p><ul><li><a href="https://arxiv.org/pdf/1805.09298.pdf">Learning Towards Minimum Hyperspherical Energy</a></li></ul><p>Weiyang Liu, Rongmei Lin, Zhen Liu, Lixin Liu, Zhiding Yu, Bo Dai, and Le Song</p><ul><li><a href="https://arxiv.org/pdf/1807.10455.pdf">Acceleration through Optimistic No-Regret Dynamics</a></li></ul><p>Jun-Kun Wang and Jacob Abernethy</p><ul><li><a href="https://arxiv.org/abs/1810.09593">MiME: Multilevel Medical Embedding of Electronic Health Records for Predictive Healthcare</a></li></ul><p>Edward Choi, Cao Xiao, Walter F. Stewart, and Jimeng Sun</p><p>&nbsp;</p><p><strong>Workshops</strong></p><ul><li>Workshop on AI in Finance</li></ul><p>Tucker Balch, School of Interactive Computing Professor and Associate Chair, is an invited speaker.</p><ul><li><a href="https://nips2018vigil.github.io/">Visually-Grounded Interaction and Language (ViGIL)</a></li></ul><p>Georgia Tech organizers include Erik Wijmans, Samyak Datta, Stefan Lee, Peter Anderson, Dhruv Batra, and Devi Parikh.</p><ul><li><a href="https://sites.google.com/view/nips18-ilr">Imitation Learning and its Challenges in Robotics</a></li></ul><p>Interactive Computing Ph.D. 
student Mustafa Mukadam is organizing the workshop.&nbsp;</p><ul><li><a href="https://blackinai.github.io/">2nd Black in AI Workshop</a></li></ul><p>Application of the Hilbert-Schmidt Independence Criterion to Lexical Geographic Variation in Lyon, France&nbsp;by Taha Merghani&nbsp;</p><ul><li><a href="https://www.wordplay2018.com/">Wordplay: Reinforcement and Language Learning in Text-based Games</a></li></ul><p>Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning&nbsp;<br />Prithviraj Ammanabrolu and Mark O. Riedl&nbsp;</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1542661844</created>  <gmt_created>2018-11-19 21:10:44</gmt_created>  <changed>1543611722</changed>  <gmt_changed>2018-11-30 21:02:02</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech will present 26 papers at NeurIPS, a premier AI conference happening December 2-8 in Montreal, Quebec.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech will present 26 papers at NeurIPS, a premier AI conference happening December 2-8 in Montreal, Quebec.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-11-19T00:00:00-05:00</dateline>  <iso_dateline>2018-11-19T00:00:00-05:00</iso_dateline>  <gmt_dateline>2018-11-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>614435</item>      </media>  <hg_media>          <item>          <nid>614435</nid>          <type>image</type>          <title><![CDATA[NeurIPS 2018 will be held in Montreal, Quebec and is one of the premier AI conferences around the world. 
Photo Credit: Tourism Quebec]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[tourism-montreal-greater-montreal-convention-and-tourism-bureau-gmctb-photo.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/tourism-montreal-greater-montreal-convention-and-tourism-bureau-gmctb-photo.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/tourism-montreal-greater-montreal-convention-and-tourism-bureau-gmctb-photo.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/tourism-montreal-greater-montreal-convention-and-tourism-bureau-gmctb-photo.jpg?itok=03FpceKB]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1542663792</created>          <gmt_created>2018-11-19 21:43:12</gmt_created>          <changed>1542810166</changed>          <gmt_changed>2018-11-21 14:22:46</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="614766">  <title><![CDATA[Georgia 
Tech Researchers Helping Develop Game to Improve STEM Learning in Chronically-Ill Children]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Tech researchers are partnering with a Georgia-based game developer on a $1.5 million <a href="https://www.nih.gov/">National Institutes of Health</a> (NIH) Small Business Innovation Research grant to help chronically-ill children maintain their educational development.</p><p>With an emphasis on science, technology, engineering, and math (STEM) subjects, researchers from the Schools of <a href="http://www.ic.gatech.edu">Interactive Computing</a> and <a href="https://bme.gatech.edu/">Biomedical Engineering</a> are teaming with <a href="https://www.th.ru.st/">Thrust Interactive, Inc.</a>, to create digital games that can help these kids who tend to miss a lot of school due to their illnesses.</p><p>Associate Professor <a href="https://www.cc.gatech.edu/people/betsy-disalvo"><strong>Betsy DiSalvo</strong></a> (IC) and Associate Professor <strong><a href="http://www.ien.gatech.edu/people/faculty/wilbur-lam">Wilbur Lam</a></strong> (BME) are leading the project, which will span two years under the current terms of the grant. Their goal is to take advantage of the time chronically-ill children spend in waiting rooms, having transfusions, or otherwise outside of the classroom.</p><p>The digital games are based on physical tabletop games created by members of Lam&rsquo;s lab. Led by Dr. <strong>Elaissa Hardy</strong>&nbsp;(Emory), a team of BME undergraduate students originally created the tabletop games to help kids in the hospital with sickle cell disease engage with STEM subjects.</p><p>Lam&rsquo;s lab has worked with DiSalvo and Thrust for the past two years to pilot test digital versions of these games. 
The new NIH grant will be used to build on findings from the pilot testing so the research team can better understand how to create a scalable model that can be used in hospitals across the country.</p><p>Another challenge the team wants to address is the difficulty children face in discussing their diseases with others. Common illnesses such as diabetes and asthma, as well as those less common like sickle cell and cystic fibrosis, can be challenging topics for children, particularly in their early teen years.</p><p>&ldquo;The middle schoolers we interviewed told us it was awkward to talk about their disease,&rdquo; DiSalvo said. &ldquo;Sometimes, they got bullied or had issues finding ways to discuss it with their peers. Previous research has shown that if you can have kids play a game around their disease, they&rsquo;ll engage about it more in conversation with peers and families.</p><p>&ldquo;It can diminish the stigma, and it also positions them as experts. When children feel like they have expertise, they are usually willing to dive deeper and learn more to maintain their expert position.&rdquo;</p><p>A better understanding of their disease at this age is critical for young people beginning to take charge of managing their own care, according to the researchers.</p><p>&ldquo;These adolescents are beginning to transition into adulthood, so managing their illness is beginning to become their responsibility,&rdquo; DiSalvo said. &ldquo;Those transitions are difficult because, in doctor visits, parents tend to dominate the conversation while kids sit in the background, not really asking questions or engaging. It&rsquo;s important to change that dynamic at this age.&rdquo;</p><p>The researchers are investigating three different approaches to the digital games to determine the best learning experience outcomes. 
They will test content using:</p><ul><li>Pictures and words</li><li>Pictures and audio</li><li>Pictures, words, and audio</li></ul><p>Follow-up comprehension tests will help determine which approach leads to the best results. Those tests will take up the first year of the project, with the second year focused on testing the application in live hospital settings.</p><p>&ldquo;We want it to be so fun and engaging that they don&rsquo;t think of it as an educational game,&rdquo; said Sarah Boyd, a Thrust Interactive team member who will work on design.</p><p>&ldquo;It&rsquo;s fun, and they&rsquo;re learning. There are existing approaches relating to education of disease, but they aren&rsquo;t as engaging. We want a fun and engaging game first, but then they&rsquo;re going to be learning about their health as they engage.&rdquo;</p><p>Thrust Interactive has enlisted the help of <strong>Paul Jenkins</strong>, a comic book writer and video game creator who has been involved with <em>Teenage Mutant Ninja Turtles</em>, a number of Marvel Comics titles, and video games like <em>God of War</em> and <em>The Darkness</em>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1543510014</created>  <gmt_created>2018-11-29 16:46:54</gmt_created>  <changed>1543510014</changed>  <gmt_changed>2018-11-29 16:46:54</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[With an emphasis on STEM subjects, researchers from the Schools of Interactive Computing and Biomedical Engineering are teaming with Thrust Interactive, Inc., to create digital games that can help these kids learn.]]></teaser>  <type>news</type>  <sentence><![CDATA[With an emphasis on STEM subjects, researchers from the Schools of Interactive Computing and Biomedical Engineering are teaming with Thrust Interactive, Inc., to create digital games that can help these kids learn.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-11-29T00:00:00-05:00</dateline>  
<iso_dateline>2018-11-29T00:00:00-05:00</iso_dateline>  <gmt_dateline>2018-11-29 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>614765</item>      </media>  <hg_media>          <item>          <nid>614765</nid>          <type>image</type>          <title><![CDATA[Video game on tablet STOCK]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[pexels-photo-1310121.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/pexels-photo-1310121.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/pexels-photo-1310121.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/pexels-photo-1310121.jpeg?itok=AmsAWlFl]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Mom and daughter look at a tablet together sitting on the couch.]]></image_alt>                    <created>1543509687</created>          <gmt_created>2018-11-29 16:41:27</gmt_created>          <changed>1543509687</changed>          <gmt_changed>2018-11-29 16:41:27</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://spark.adobe.com/page/a7Lw2tHg90iZz/]]></url>        <title><![CDATA[Computer Science Education Week at Georgia Tech]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group 
id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="176756"><![CDATA[School of Biomedical Engineering]]></keyword>          <keyword tid="11961"><![CDATA[betsy disalvo]]></keyword>          <keyword tid="14681"><![CDATA[Wilbur Lam]]></keyword>          <keyword tid="179817"><![CDATA[STEM learning]]></keyword>          <keyword tid="177206"><![CDATA[CSEd]]></keyword>          <keyword tid="1051"><![CDATA[Computer Science]]></keyword>          <keyword tid="11355"><![CDATA[computer science education]]></keyword>          <keyword tid="179818"><![CDATA[CSed week]]></keyword>          <keyword tid="2449"><![CDATA[video games]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="614367">  <title><![CDATA[Designing a Better Future: IC Ph.D. Student Ari Schlesinger Keeps Tech Focus on Equity, Inclusion]]></title>  <uid>33939</uid>  <body><![CDATA[<p><strong>Ari Schlesinger</strong> was spending time at <a href="https://www.microsoft.com/en-us/research/">Microsoft Research</a> (MSR) in Cambridge, United Kingdom, shortly after a Microsoft AI chatbot made headlines for devolving into a racist, sexist mess within 24 hours of launch in 2016.</p><p>After the incident, an influx of think pieces about the chatbot, named &ldquo;Tay,&rdquo; attempted to explain that racism was a design issue. 
If designed better, they contended, chatbots wouldn&rsquo;t encounter these problems.</p><p>Schlesinger and her MSR collaborators <a href="https://static1.squarespace.com/static/5a8b405a18b27d5478196dca/t/5a8b690d24a694d7072d25a1/1519085853799/chi18-schlesinger-LetsTalkAboutRace.pdf">wrote a piece in response</a>, contending that it&rsquo;s not just a design flaw, but a problem with how tech firms and, more broadly, designers think about these issues in general.</p><p>&ldquo;We wanted to point out research opportunities to figure out ways that we can do better at considering issues like race and identity when designing systems to avoid creating something like a chatbot that reproduces the types of problems that Tay produced,&rdquo; she said. &ldquo;It&rsquo;s important that we really identify the central issue that causes these problems. It&rsquo;s hard to address a problem if you can&rsquo;t name it.&rdquo;</p><p>These questions are central to the research she is conducting at Georgia Tech, where she is now a Ph.D. student advised by Professors <strong><a href="https://www.ic.gatech.edu/people/beki-grinter">Beki Grinter</a></strong> and <strong><a href="https://www.ic.gatech.edu/people/keith-edwards">Keith Edwards</a></strong> in the School of Interactive Computing. Recently, she was a finalist for the <a href="https://gvu.gatech.edu/gvu-graduate-student-awards-program-2018">Foley Scholarship</a>, for which she was recognized for her research into ways enterprises can operationalize strategies to support software development with fairness in mind.</p><h4><strong>Understand the social impact</strong></h4><p>It wasn&rsquo;t a straightforward path to studying equity, inclusion, and fairness in computer science (CS), however.</p><p>In 2012, during Schlesinger&rsquo;s second year pursuing a CS major through Harvey Mudd at Pitzer College, she had a realization. CS degrees, she noticed, were not focusing on the vast social impacts that they were producing. 
She began to worry that an awareness of this social change might be missing in many CS educational environments.</p><p>&ldquo;About a year and a half into my degree, I was just like &ndash; these machines, these programs, they&rsquo;re ubiquitous,&rdquo; she said. &ldquo;They&rsquo;re in everything. They&rsquo;re changing the world, and we&rsquo;re not talking about that.&rdquo;</p><p>It was this realization that led her to course correct during her undergraduate degree. She redefined her major at Pitzer College, adjusted the trajectory of her career to pursue research full-time, and honed in on an area she says is vital to introducing mechanisms in enterprise, education, and beyond that protect against bias and exclusion.</p><p>One of the benefits of being at Pitzer College for her undergraduate degree was that Schlesinger was given the opportunity to define her own major. Her interests at the intersection of computer science, humanities, and social sciences led to a degree she designed, called &ldquo;technology and social change.&rdquo;</p><p>The social impacts at the heart of technology and CS are central to her interests, and she found that CS education was one of the few spaces she had experienced in computing that was really thinking about social impact. Upon graduation, she took a position at Harvey Mudd College running a <a href="https://www.nsf.gov/">National Science Foundation</a> grant in CS education called &ldquo;<a href="http://csteachingtips.org/">CS Teaching Tips</a>.&rdquo;</p><p>While there was a semblance of an ethics requirement in most CS degrees, whether or not it was a priority was unclear, and that was what concerned Schlesinger.</p><p>&ldquo;Who is teaching it? How is it defined? What&rsquo;s being covered? Often what you learn about ethics and these social concerns in CS depends on who you know and what you&rsquo;re exposed to,&rdquo; she explained. 
&ldquo;Sometimes in academia, I think we have these siloing problems, where one discipline does this and another does that and it&rsquo;s very hard to move between the two.&rdquo;</p><p>It&rsquo;s important, she said, that CS departments have someone within them who brings all of these disparate fields together, introducing people to literature and ideas they may not otherwise see in their respective disciplines.</p><p>It was this focus that drew her to Georgia Tech, specifically as it pertained to advisors Grinter and Edwards. She was looking for graduate advisors who could get excited about this idea of investigating and implementing equity and inclusion within things like programming languages or artificial intelligence. Thinking more broadly, she knew that this wasn&rsquo;t just a problem within CS education, but within technology as a whole.</p><p>&ldquo;The pot is full,&rdquo; she said. &ldquo;Those questions of who gets an advantage or not when we are designing software or when we build computing systems. Technical systems have the opportunity to minimize expansion of harm, but they also have the opportunity to further discriminate. What can we do to stop hurting each other?&rdquo;</p><h4><strong>&lsquo;The next step depends on you&rsquo;</strong></h4><p>Her future work at Georgia Tech will follow a similar path, examining some of these issues of equity and bias in online communities.</p><p>There are rampant issues of harassment and discrimination in more traditional online communities. More specifically, there are issues of diversity and inclusion within open-source communities, where programmers interact and work on a tech product that might be widely adopted and will ultimately reflect some piece of those interactions.</p><p>&ldquo;Online communities seem to be places where many people of color, women, people with various marginalized identities are harassed,&rdquo; Schlesinger said.
&ldquo;That happens in the tech workplace, and it happens in these open-source spaces. Part of our work will look at this distilled problem space and ask questions about what is the connection between inclusion, discrimination and online communities. Are there ways these spaces are designed that inhibit good behaviors or promote bad?&rdquo;</p><p>Of course, her next step in this research is only one approach, and she said that it&rsquo;s important to note there are many paths to pursue. Asked what the next step in this space should be, Schlesinger turned it around.</p><p>&ldquo;The answer is that there is a clear step for everybody and the world would be a better place if we took that next step, but the next step depends on you,&rdquo; she said. &ldquo;Who you are, what you&rsquo;re doing, where you work, what you think about. There is something to do, but what that is depends on you.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1542409236</created>  <gmt_created>2018-11-16 23:00:36</gmt_created>  <changed>1542409236</changed>  <gmt_changed>2018-11-16 23:00:36</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Questions of who is advantaged when designing software are central to tech development. Ari Schlesinger is shining a spotlight on those issues.]]></teaser>  <type>news</type>  <sentence><![CDATA[Questions of who is advantaged when designing software are central to tech development.
Ari Schlesinger is shining a spotlight on those issues.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-11-16T00:00:00-05:00</dateline>  <iso_dateline>2018-11-16T00:00:00-05:00</iso_dateline>  <gmt_dateline>2018-11-16 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>614366</item>      </media>  <hg_media>          <item>          <nid>614366</nid>          <type>image</type>          <title><![CDATA[Ari Schlesinger]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Ari Schlesinger.JPG]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Ari%20Schlesinger.JPG]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Ari%20Schlesinger.JPG]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Ari%2520Schlesinger.JPG?itok=Sf6pKTXb]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Ari Schlesinger]]></image_alt>                    <created>1542408354</created>          <gmt_created>2018-11-16 22:45:54</gmt_created>          <changed>1542408354</changed>          <gmt_changed>2018-11-16 22:45:54</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  
<news_terms>      </news_terms>  <keywords>          <keyword tid="170073"><![CDATA[Ari Schlesinger]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="179742"><![CDATA[technology and social change]]></keyword>          <keyword tid="179743"><![CDATA[equity and computing]]></keyword>          <keyword tid="306"><![CDATA[equity]]></keyword>          <keyword tid="208"><![CDATA[computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="613426">  <title><![CDATA[IC Professors John Stasko and Gregory Abowd Earn Test of Time Awards]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A pair of professors in the School of Interactive Computing were recognized for test of time awards at two conferences this month, demonstrating the lasting impact of their research in respective fields.</p><p>Professor <strong>Gregory Abowd</strong>&rsquo;s work with former Ph.D. student <strong>Shwetak Patel</strong> and postdoctoral student and research scientist <strong>Matt Reynolds</strong> was recognized at UbiComp 2018 in Singapore earlier this month. The paper was presented at the Pervasive 2008 Conference and was titled <em><a href="https://pdfs.semanticscholar.org/9132/a2f02b3a0a4e928285975ec9789b1210c63c.pdf">Detecting Human Movement by Differential Air Pressure Sensing in HVAC System Ductwork: An Exploration in Infrastructure Mediated Sensing</a></em>.</p><p>The paper presented an approach to detect movement and room transition throughout an entire house through sensing at only one point in the home. 
At the time, it was a new class of human activity monitoring they called &ldquo;infrastructure mediated sensing,&rdquo; and it detected things like disruptions in airflow caused by human movement.</p><p>This approach presents a cost-effective alternative to installing motion sensors throughout an entire home.</p><p>This is the second straight such 10-year recognition Abowd, Patel and Reynolds have received at UbiComp. Their paper <em><a href="https://homes.cs.washington.edu/~shwetak/papers/ubicomp2007_flick.pdf">At the Flick of a Switch: Detecting and Classifying Unique Electrical Events on the Residential Power Line</a></em>, written with <strong>Julie Kientz</strong> and <strong>Tom Robertson</strong>, <a href="https://www.cc.gatech.edu/news/596144/ic-faculty-alumni-awarded-10-year-impact-award-ubicomp-2017">was recognized at UbiComp 2017</a>.</p><p>Professor <strong>John Stasko</strong> received a similar designation this year at IEEE VIS 2018 for research he performed while on sabbatical at Microsoft Research in Fall 2007. Stasko, along with other members on his team, proposed two alternative trend visualizations that use static depictions of trends: one which shows traces of all trends overlaid simultaneously in one display and a second that uses a small multiples display to show the trend traces side-by-side.</p><p>The paper, titled <em><a href="https://www.cc.gatech.edu/~john.stasko/papers/infovis08-anim.pdf">Effectiveness of Animation in Trend Visualization</a></em>, evaluates the visualizations and indicates that trend animation is challenging to use and, despite being engaging for participants, leads to errors and is least effective for analysis.</p><p>The paper was presented at InfoVis in 2008. Like Abowd, Stasko is receiving a 10-year legacy award at IEEE VIS for the second straight year.
His work with co-authors <strong>Carsten G&ouml;rg</strong>,&nbsp;<strong>Zhicheng Liu</strong>, and&nbsp;<strong>Kanupriya Singhal</strong>, titled <em><a href="https://www.cc.gatech.edu/~stasko/papers/vast07-jigsaw.pdf">Jigsaw: Supporting Investigative Analysis through Interactive Visualization</a></em>, was <a href="https://www.cc.gatech.edu/news/596952/ic-researchers-earn-test-time-award-vast-2007-paper">recognized last year</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1540833195</created>  <gmt_created>2018-10-29 17:13:15</gmt_created>  <changed>1540833195</changed>  <gmt_changed>2018-10-29 17:13:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Stasko received a test of time designation for a paper at InfoVis 2008, and Abowd one for a paper at UbiComp 2008.]]></teaser>  <type>news</type>  <sentence><![CDATA[Stasko received a test of time designation for a paper at InfoVis 2008, and Abowd one for a paper at UbiComp 2008.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-10-29T00:00:00-04:00</dateline>  <iso_dateline>2018-10-29T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-10-29 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>613425</item>      </media>  <hg_media>          <item>          <nid>613425</nid>          <type>image</type>          <title><![CDATA[Test of Time]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[time-371226_960_720.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/time-371226_960_720.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/time-371226_960_720.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/time-371226_960_720.jpg?itok=Xg95KE0O]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Clock]]></image_alt>                    <created>1540833148</created>          <gmt_created>2018-10-29 17:12:28</gmt_created>          <changed>1540833148</changed>          <gmt_changed>2018-10-29 17:12:28</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="11632"><![CDATA[john stasko]]></keyword>          <keyword tid="11002"><![CDATA[Gregory Abowd]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="7730"><![CDATA[infovis]]></keyword>          <keyword tid="4923"><![CDATA[Ubicomp]]></keyword>          <keyword tid="170453"><![CDATA[Test of Time Award]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="613299">  <title><![CDATA[Antón Named as Technologist Advisor to U.S. National Security Court]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Annie I. 
Ant&oacute;n, a professor in Georgia Tech&rsquo;s School of Interactive Computing, has been named a technologist advisor to the U.S. Foreign Intelligence Surveillance Court (FISC).</p><p>Starting this month, Ant&oacute;n will assist the court in a part-time role. She is the only academic among the three technologists.&nbsp;</p><p>The FISC may receive assistance from an &ldquo;amicus curiae&rdquo; (friend of the court), who has expertise in privacy and civil liberties, intelligence collection, communications technology or other relevant areas.&nbsp;</p><p>&ldquo;I am honored to be asked to assist with foreign intelligence cases that involve national security, cybersecurity and privacy,&rdquo; Ant&oacute;n said. &ldquo;Technologists play a vital role in helping the courts understand how complex systems operate in practice, in order to assure that systems comply with law.&rdquo;</p><p>Ant&oacute;n, a Georgia Tech graduate, returned to serve as chair of the School of Interactive Computing from 2012 to 2017.</p><p>In 2016, she was one of 12 members of the President&rsquo;s Commission on Enhancing National Cybersecurity.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1540487868</created>  <gmt_created>2018-10-25 17:17:48</gmt_created>  <changed>1540487868</changed>  <gmt_changed>2018-10-25 17:17:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[ Starting this month, Annie Antón will assist the U.S. Foreign Intelligence Surveillance Court in a part-time role. She is the only academic among the three technologists. ]]></teaser>  <type>news</type>  <sentence><![CDATA[ Starting this month, Annie Antón will assist the U.S. Foreign Intelligence Surveillance Court in a part-time role. She is the only academic among the three technologists. 
]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-10-25T00:00:00-04:00</dateline>  <iso_dateline>2018-10-25T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-10-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Laura Diamond</p><p><a href="mailto:laura.diamond@gatech.edu">laura.diamond@gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>522611</item>      </media>  <hg_media>          <item>          <nid>522611</nid>          <type>image</type>          <title><![CDATA[Annie Antón photo]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[annie-anton1.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/annie-anton1_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/annie-anton1_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/annie-anton1_0.jpg?itok=zps0lETT]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1460134800</created>          <gmt_created>2016-04-08 17:00:00</gmt_created>          <changed>1480708522</changed>          <gmt_changed>2016-12-02 19:55:22</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="27641"><![CDATA[annie anton]]></keyword>          <keyword 
tid="109"><![CDATA[Georgia Tech]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="179497"><![CDATA[u.s. foreign intelligence surveillance court]]></keyword>          <keyword tid="10231"><![CDATA[Washington D.C.]]></keyword>      </keywords>  <core_research_areas>          <term tid="145171"><![CDATA[Cybersecurity]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="613031">  <title><![CDATA[Georgia Tech research shapes data literacy and usability at IEEE VIS 2018]]></title>  <uid>27592</uid>  <body><![CDATA[<p><strong>Digital data is a growing&nbsp;type of currency, offering&nbsp;insights for transforming businesses and organizations, allowing&nbsp;better decision making, and answering&nbsp;questions people didn&rsquo;t even know they had. Data transactions are as common as convenience store purchases, yet the costs of those transactions are very different.&nbsp;</strong></p><p>Information visualization researchers at Georgia Tech are developing ways people can better understand the world&rsquo;s data and how to interpret its meaning through techniques that can surface key insights and make the data meaningful to users.&nbsp;</p><p>Georgia Tech faculty and graduate students will present their latest research in information visualization and visual analytics, including 14 papers, at the annual IEEE Visualization (<a href="http://ieeevis.org/year/2018/welcome">IEEE VIS</a>) Conference in Berlin, Germany, Oct. 
21-26.</p><p>&nbsp;</p><h4><a href="https://gvu.gatech.edu/vis-2018"><strong>Explore Research Highlights and Data Graphics</strong></a></h4><p>&nbsp;</p><p>Of the 15 researchers, 11 are from the School of Interactive Computing and four represent the School of Computational Science and Engineering in the College of Computing. The faculty authors &ndash;&nbsp;<strong>Rahul&nbsp;Basole</strong>,&nbsp;<strong>Polo Chau</strong>,&nbsp;<strong>Alex Endert</strong>, and&nbsp;<strong>John Stasko</strong>&nbsp;&ndash;&nbsp;are members of the VIS Lab and GVU Center.</p><p>School of Interactive Computing Professor John Stasko, along with collaborators from Microsoft Research, will receive a Test of Time award for their 2008 paper&nbsp;<a href="https://ieeexplore.ieee.org/document/4658146"><em>Effectiveness of Animation in Trend Visualization</em></a>. It is Stasko&rsquo;s second straight year receiving such a designation at IEEE VIS. Read about&nbsp;<a href="https://www.ic.gatech.edu/news/596952/ic-researchers-earn-test-time-award-vast-2007-paper">last year&#39;s award</a>.</p><p>IEEE VIS is the largest conference on scientific visualization, information visualization, and visual analytics.&nbsp;</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1539976717</created>  <gmt_created>2018-10-19 19:18:37</gmt_created>  <changed>1539977596</changed>  <gmt_changed>2018-10-19 19:33:16</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech faculty and graduate students will present their latest research in information visualization and visual analytics, including 14 papers, at the annual IEEE Visualization (IEEE VIS) Conference in Berlin, Germany, Oct.
21-26.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech faculty and graduate students will present their latest research in information visualization and visual analytics, including 14 papers, at the annual IEEE Visualization (IEEE VIS) Conference in Berlin, Germany, Oct. 21-26.]]></sentence>  <summary><![CDATA[<p>Georgia Tech faculty and graduate students will present their latest research in information visualization and visual analytics, including 14 papers, at the annual IEEE Visualization (<a href="http://ieeevis.org/year/2018/welcome">IEEE VIS</a>) Conference in Berlin, Germany, Oct. 21-26.</p>]]></summary>  <dateline>2018-10-19T00:00:00-04:00</dateline>  <iso_dateline>2018-10-19T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-10-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Communications Manager, GVU Center</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>613034</item>          <item>613033</item>      </media>  <hg_media>          <item>          <nid>613034</nid>          <type>image</type>          <title><![CDATA[Georgia Tech Visualization Lab]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[vis_2018_header_image.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/vis_2018_header_image.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/vis_2018_header_image.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/vis_2018_header_image.jpg?itok=A_quZGNs]]></image_740>            <image_mime>image/jpeg</image_mime>            
<image_alt><![CDATA[]]></image_alt>                    <created>1539976795</created>          <gmt_created>2018-10-19 19:19:55</gmt_created>          <changed>1539977395</changed>          <gmt_changed>2018-10-19 19:29:55</gmt_changed>      </item>          <item>          <nid>613033</nid>          <type>image</type>          <title><![CDATA[Georgia Tech faculty at VIS 2018]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[vis 2018 social promo.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/vis%202018%20social%20promo.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/vis%202018%20social%20promo.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/vis%25202018%2520social%2520promo.png?itok=Z1lTcXvP]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1539976748</created>          <gmt_created>2018-10-19 19:19:08</gmt_created>          <changed>1539977421</changed>          <gmt_changed>2018-10-19 19:30:21</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://public.tableau.com/views/vis2018_faculty/Dashboard2?:embed=y&amp;:display_count=yes&amp;:showVizHome=no#2]]></url>        <title><![CDATA[Faculty at VIS 2018]]></title>      </link>          <link>        <url><![CDATA[https://public.tableau.com/views/vis2018_papers/Dashboard1?:embed=y&amp;:embed_code_version=3&amp;:loadOrderID=1&amp;:display_count=yes&amp;publish=yes&amp;:showVizHome=no]]></url>        <title><![CDATA[Read Technical Papers]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group 
id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="612675">  <title><![CDATA[Constellations Center Commits to Teaching 200 Students Computer Science in 2018-19 Academic Year]]></title>  <uid>34773</uid>  <body><![CDATA[<p><strong>The Constellations Center for Equity in Computing</strong> at Georgia Tech was one of 294 organizations to <a href="http://summit.csforall.org/searchCommitments">make a commitment</a> to improving the computer science education landscape at the 2018 <a href="http://summit.csforall.org/home">CSforALL Summit</a>, Oct. 8-11. The center committed to &ldquo;instilling a sequence of high-level CS courses in secondary schools with low-income communities.&rdquo;</p><p>Constellations fellows will teach the AP CS principles course in seven public high schools in Atlanta Public Schools (APS) during the 2018-19 academic year, educating nearly 200 high school students. The end result of the engagement will be a hybrid model of instructional and online learning providing a pathway to post-secondary computer science and STEM studies. These courses include AP CS Principles, Georgia Tech&rsquo;s Introduction to Computing Using Python, and AP CS A.</p><p>&ldquo;Our fellows began teaching AP CS in August and we have already seen so much growth from our students. 
Many of them have never been trusted with a computer, much less exposed to computer science, and it has been amazing to see them start to realize their potential in this subject and how far they can go in life,&rdquo; said <strong>Lien Diaz</strong>, Constellations Director of Educational Innovation and Leadership.</p><p>Diaz <a href="https://twitter.com/GT_CCEC/status/1049753216391495680">moderated a panel</a> about engaging underrepresented youth in computer science at the summit.</p><p>Commitment goals at the event ranged from increasing rigor and equity in computing to creating opportunities for youth, supporting local change, and growing the computer science education movement.</p><p>The event included talks from guests such as &ldquo;NCIS: New Orleans&rdquo; actor <strong>Daryl &ldquo;Chill&rdquo; Mitchell</strong>, who <a href="https://twitter.com/GT_CCEC/status/1049655202754781184">reminded the attendees</a> that &ldquo;there is nothing a kid can&rsquo;t do if given the opportunity.&rdquo;</p><p>ReCAPTCHA creator and <strong>Duolingo</strong> founder <strong>Luis von Ahn</strong> <a href="https://twitter.com/GT_CCEC/status/1049716915751530501">inspired the crowd</a> with lessons learned from his time creating technology solutions, and <strong>Deon Gordon</strong> of <strong>Tech Birmingham</strong> encouraged the audience to care not only about teaching children technical skills, but also to <a href="https://twitter.com/GT_CCEC/status/1049686852188409856">care for them as people</a>. The <strong>Detroit Arts and Sciences Academy</strong> chorus wowed attendees with their <a href="https://twitter.com/GT_CCEC/status/1049646504745598976">computer science remix</a> from &ldquo;Frozen.&rdquo;</p><p>The summit took place in Detroit, Mich., at Wayne State University.
For an interactive visualization of the 227 commitments made, please visit <a href="http://summit.csforall.org/visualizationView">here.</a></p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1539351506</created>  <gmt_created>2018-10-12 13:38:26</gmt_created>  <changed>1539361257</changed>  <gmt_changed>2018-10-12 16:20:57</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Constellations Center attends CSforALL Summit and pledges to teach 200 students computer science in Atlanta Public Schools during the 2018-19 academic year. ]]></teaser>  <type>news</type>  <sentence><![CDATA[Constellations Center attends CSforALL Summit and pledges to teach 200 students computer science in Atlanta Public Schools during the 2018-19 academic year. ]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-10-12T00:00:00-04:00</dateline>  <iso_dateline>2018-10-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-10-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>612682</item>      </media>  <hg_media>          <item>          <nid>612682</nid>          <type>image</type>          <title><![CDATA[Lien Diaz moderates a panel at the CSforALL Summit held in Detroit Oct. 8 - 11. 
]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[10791647824_IMG_0796 copy.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/10791647824_IMG_0796%20copy.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/10791647824_IMG_0796%20copy.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/10791647824_IMG_0796%2520copy.jpg?itok=he0pt_z-]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1539355904</created>          <gmt_created>2018-10-12 14:51:44</gmt_created>          <changed>1539355904</changed>          <gmt_changed>2018-10-12 14:51:44</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="606703"><![CDATA[Constellations Center]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="151"><![CDATA[Policy, Social Sciences, and Liberal Arts]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="151"><![CDATA[Policy, Social Sciences, and Liberal Arts]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="612183">  <title><![CDATA[Georgia Tech Researchers Develop AI That Can Create Entirely New 
Games]]></title>  <uid>33939</uid>  <body><![CDATA[<h3><em>Using a method dubbed &#39;conceptual expansion,&#39; the system studies old games to create unique mechanics and designs</em></h3><p>The first machine learning-based automated game design tool from a team of researchers at Georgia Tech could empower anyone to make their own games. Utilizing what the researchers dub &ldquo;conceptual expansion,&rdquo; the method recombines machine-learned models of games into new, playable games.</p><p>Prior automated game design approaches have used hand-authored or crowd-sourced knowledge, which requires a human author to write instructions for the system to produce games. This approach, however, limits the scope and applications of such systems, according to the Georgia Tech team.</p><p>Conceptual expansion, on the other hand, takes in an arbitrary number of games &ndash; Super Mario Bros., Kirby&rsquo;s Adventure, and Mega Man, for example &ndash; and then outputs original games with unique mechanics and level designs.</p><p>&ldquo;By having this done using machine learning, it doesn&rsquo;t require me or anyone else to author additional content,&rdquo; said <strong>Matthew Guzdial</strong>, a Ph.D. student in Georgia Tech&rsquo;s School of Interactive Computing (IC) and the lead on the project. &ldquo;I can just give it a new game to learn on, and it will immediately change its output based on what it sees. If I want a game like Pong and a game like Tetris, I show two videos to the system and, bam, here are 30 games or however many it comes up with.&rdquo;</p><p>This research builds on <a href="https://gvu.gatech.edu/ai-uses-less-two-minutes-videogame-footage-recreate-game-engine">previous work</a> by the team, which includes Guzdial&rsquo;s adviser, IC Associate Professor <strong>Mark Riedl</strong>. That work attempted to empower artificial intelligence (AI) with creativity through the use of video games.
Past iterations have looked at learning level design for a game &ndash; Super Mario Bros., for example &ndash; but this led to output levels very similar to those already in the game.</p><p>In an alternative method, the researchers drew on an approach called conceptual blending that allowed them to create entirely new types of levels, like underwater castles, which aren&rsquo;t present in the original franchise.</p><p>The latest work has used that foundation to produce entirely new content, though challenges still exist, Guzdial said.</p><p>&ldquo;What we have to do now is define some way for the AI to know what is good and what is bad,&rdquo; he said. &ldquo;That&rsquo;s the trick. We haven&rsquo;t quite figured it out yet. We&rsquo;ve gotten some bad stuff out, too.&rdquo;</p><p>Some examples: An enemy was invented that couldn&rsquo;t move and couldn&rsquo;t die and a power-up that moved further away any time a player was within a certain distance.</p><p>&ldquo;So, those aren&rsquo;t great,&rdquo; Guzdial said. &ldquo;There&rsquo;s no intentionality there. It&rsquo;s frustrating, but it&rsquo;s interesting to see how it builds these new relationships. At the end of the day, right now, we are looking for something new and something playable. Have I ever seen this before, and can I actually play this game?&rdquo;</p><p>The goal, Guzdial said, is not to replace human game designers but to provide non-game designers with the ability to produce original content on their own. He&rsquo;s often heard others describe games as being a combination of multiple games they&rsquo;ve played before.</p><p>&ldquo;Why not empower them with the ability to create on their own?&rdquo; he said. &ldquo;I have no illusions about whether experts can make a better game than this engine, obviously. 
But can we let people who don&rsquo;t know how to make games become developers on their own?&rdquo;</p><p>This research is published in a paper titled <em><a href="https://arxiv.org/pdf/1809.02232.pdf">Automated Game Design via Conceptual Expansion</a></em>, which will be presented at the <a href="https://sites.google.com/ncsu.edu/aiide-2018/home?authuser=0">14<sup>th</sup> AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</a> on Nov. 13-17 in Edmonton, Alberta, Canada.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1538425549</created>  <gmt_created>2018-10-01 20:25:49</gmt_created>  <changed>1538655643</changed>  <gmt_changed>2018-10-04 12:20:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Conceptual expansion takes in an arbitrary number of games and then outputs original games with unique mechanics and level designs.]]></teaser>  <type>news</type>  <sentence><![CDATA[Conceptual expansion takes in an arbitrary number of games and then outputs original games with unique mechanics and level designs.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-10-01T00:00:00-04:00</dateline>  <iso_dateline>2018-10-01T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-10-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>595803</item>      </media>  <hg_media>          <item>          <nid>595803</nid>          <type>image</type>          <title><![CDATA[Super Mario Bros.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[mario2-cloned_engine.gif]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/mario2-cloned_engine.gif]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/mario2-cloned_engine.gif]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/mario2-cloned_engine.gif?itok=LCewMldU]]></image_740>            <image_mime>image/gif</image_mime>            <image_alt><![CDATA[Super Mario Brothers]]></image_alt>                    <created>1505147296</created>          <gmt_created>2017-09-11 16:28:16</gmt_created>          <changed>1505147296</changed>          <gmt_changed>2017-09-11 16:28:16</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://public.tableau.com/views/Firstmachine-learning-basedautogamedesigner/Dashboard1?:embed=y&amp;:display_count=yes&amp;:showVizHome=no]]></url>        <title><![CDATA[Guzdial's Gaming Research Evolution]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="2835"><![CDATA[ai]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="2449"><![CDATA[video games]]></keyword>          <keyword tid="146631"><![CDATA[Matthew Guzdial]]></keyword>          <keyword tid="66281"><![CDATA[Mark Riedl]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="179255"><![CDATA[automated game design]]></keyword>          <keyword 
tid="179256"><![CDATA[super mario brothers]]></keyword>          <keyword tid="179257"><![CDATA[kirby&#039;s adventure]]></keyword>          <keyword tid="179258"><![CDATA[mega man]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="611757">  <title><![CDATA[Good Vibrations: Passive Haptic Learning Could Be a Key to Rehabilitation]]></title>  <uid>33939</uid>  <body><![CDATA[<h3><em>It has shown positive results in spinal injury recovery. Can it do the same for stroke patients?</em></h3><p>It was around 2008, and <strong>Kevin Huang</strong> was a master&rsquo;s student at Georgia Tech under the advisement of School of Interactive Computing (IC) Professor <strong>Thad Starner</strong>. Huang, an intrepid student, had big ideas within the field of haptics and wanted to pursue the creation of a full-bodied exoskeleton.</p><p>&ldquo;Of course, that costs about a million dollars,&rdquo; Starner, an associate professor at the time, said with a laugh. &ldquo;So, I said, &lsquo;Here&rsquo;s an idea. Why don&rsquo;t you try making this glove where you put vibrating motors above each finger and see if you can teach people how to play piano passively?&rsquo;&rdquo;</p><p>Starner was suggesting a new approach that later became known as passive haptic learning. Users would wear the gloves while doing other activities, like reading email or driving, and have the device tap each finger in the appropriate sequence for a piano melody over and over again. The hope was that the repetition would give the wearer the motor memory to later take off the gloves and play the song on the piano, perhaps even a song they had never heard before.</p><p>Huang liked the idea and ran with it. 
Starner, of course, thought the idea had merit, or he never would have suggested it. But even he was surprised by Huang&rsquo;s results. The glove worked. Users, as was later demonstrated during a live segment on CNN, could learn a tune in a short span of time.</p><p>&ldquo;I don&rsquo;t believe this,&rdquo; said Starner, recalling his initial reaction. &ldquo;I had never heard of anything like this before in my life. We went and did a 16-person user study, and it came back with even better results. I figured we had something real there, and it was time to dig a little deeper.&rdquo;</p><p>What Starner didn&rsquo;t know was that the initial whim was laying the foundation for an entirely new field of study, a field that has demonstrated positive results in sensation-based learning for music, Morse code, and even Braille.</p><h3><a href="https://www.freethink.com/shows/superhuman/season-3/these-gloves-could-offer-rapid-recovery-from-brain-injuries"><strong>VIDEO: These Gloves Can Teach You How to Play Piano. And Maybe Heal Your Brain (Freethink&#39;s Superhuman)</strong></a></h3><p>Current IC Ph.D. student <strong>Caitlyn Seim</strong> and her team of researchers have taken this initial discovery to the next level. She defined this new field of study when she unlocked even more important secrets within the real-world impact of passive haptic learning. If it can improve dexterity in healthy hands, could it do the same for those with limitations? Could it, in fact, lead to rehabilitation for someone who has suffered a traumatic spinal injury? Could it even aid in recovery from a stroke?</p><h4><strong>The elephant in the room</strong></h4><p>Seim was an undergraduate in electrical engineering at Georgia Tech in 2013 around the time Starner and former Ph.D. student <strong>Tanya Estes</strong> were beginning to understand some of the ramifications of their method.
She had done some work with IC Professors <strong>Gregory Abowd</strong> and <strong>Jim Rehg</strong>, and Starner approached her about working with him in this field of passive haptic learning.</p><p>Estes and Starner had recently partnered with the Shepherd Center for spinal cord and brain injury rehabilitation to test whether the seemingly random stimulation of the fingers, as demonstrated in the piano project, could lead to increased sensation and dexterity in the hands for injury patients.</p><p>It was a logical next step in the research, Starner said, and the results were encouraging. In the study, participants who were injured more than a year prior wore the Mobile Music Touch glove that led to learned skills on a piano. They participated in simple piano lessons, and evaluations indicated statistically significant improvement among the experimental group.</p><p>They were beginning to scratch the surface of what passive haptic rehabilitation was able to achieve. But, Seim said, there has always been one elephant in the room for anyone in the rehab space.</p><p>&ldquo;Stroke,&rdquo; Seim said. &ldquo;It&rsquo;s the clear elephant in the room. It&rsquo;s the No. 1 cause of long-term disability in the United States and a leading cause globally.&rdquo;</p><p>Not only is it a huge financial burden, but patients also have precious few options for recovery. Exercise-based therapy such as constraint-induced movement is the state of the art. For immobile hands, Botox injections are also common.</p><p>&ldquo;But that&rsquo;s only temporary,&rdquo; Starner said. &ldquo;It&rsquo;s not retraining the body. It&rsquo;s for relief, not getting your hand back.&rdquo;</p><p>The exercise-based therapy can help, but it is painful, expensive, and only available to about 50 percent of patients who meet the baseline dexterity level to begin treatment.
The other half have been rendered too disabled to be eligible, so compensatory strategies like spousal assistance are encouraged.</p><h4><strong>A new option?</strong></h4><p>With the positive results in the partial spinal cord injury study, the thought was that perhaps this stimulation-based method could have a similarly positive impact for stroke patients, as well. As in the previous study, patients wear the gloves, this time for three hours each day for two months.</p><p>Seim, who Starner said recruited her own subjects via news groups and mailing lists, takes measurements weekly. Volunteer clinicians also take measurements at the beginning, middle, and end of the cycle.</p><p>While the team is currently preparing a publication on their work and is not yet ready to release results, Seim said the findings have been encouraging.</p><p>&ldquo;It&rsquo;s exciting,&rdquo; Seim said of the study. &ldquo;We aren&rsquo;t looking for complete recovery, but if we can actually get patients to regain control of their fingers or hands, they can do so much more for themselves.&rdquo;</p><p>This method has already shown plenty of promise. If it impacts stroke recovery, that&rsquo;s already a significant portion of the population. And perhaps passive haptic learning is a key that could unlock even more avenues of study.</p><p>&ldquo;I think there&rsquo;s just enormous potential in the area of haptics, wearables, and tapping into this area of passive stimulation,&rdquo; Seim said. &ldquo;As I finish my Ph.D. and see the research landscape, I see that we are uncovering a new paradigm beyond just the awesome applications like learning a melody on the piano or having a great system to teach Braille.
These are great applications in themselves, but this is a whole new cognitive approach, and it&rsquo;s very exciting.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1537471051</created>  <gmt_created>2018-09-20 19:17:31</gmt_created>  <changed>1537561287</changed>  <gmt_changed>2018-09-21 20:21:27</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Studies have shown that passive haptic learning can help patients suffering from spinal injury. Can it also be an option in stroke recovery?]]></teaser>  <type>news</type>  <sentence><![CDATA[Studies have shown that passive haptic learning can help patients suffering from spinal injury. Can it also be an option in stroke recovery?]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-09-20T00:00:00-04:00</dateline>  <iso_dateline>2018-09-20T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-09-20 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>611755</item>      </media>  <hg_media>          <item>          <nid>611755</nid>          <type>image</type>          <title><![CDATA[Caitlyn Seim - PHL]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Seim Banner.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Seim%20Banner.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Seim%20Banner.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Seim%2520Banner.jpg?itok=EAolAA9Q]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Caitlyn Seim showing haptic glove]]></image_alt>                    <created>1537470856</created>          <gmt_created>2018-09-20 19:14:16</gmt_created>          <changed>1537470856</changed>          <gmt_changed>2018-09-20 19:14:16</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="104221"><![CDATA[passive haptic learning]]></keyword>          <keyword tid="179164"><![CDATA[spinal injury recovery]]></keyword>          <keyword tid="179165"><![CDATA[stroke recovery]]></keyword>          <keyword tid="115211"><![CDATA[wearable tech]]></keyword>          <keyword tid="132141"><![CDATA[wearables]]></keyword>          <keyword tid="10353"><![CDATA[wearable computing]]></keyword>          <keyword tid="1944"><![CDATA[Thad Starner]]></keyword>          <keyword tid="170072"><![CDATA[Caitlyn Seim]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="611374">  <title><![CDATA[School of Interactive Computing Launching Podcast to Address the 'Big Issues' in Computing]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Interaction, as one might guess, is a key tenet of what the <a href="http://www.ic.gatech.edu">School of Interactive Computing</a> 
tries to achieve. That could be interaction between people and technology, interaction between researchers and their peer collaborators, or interaction between researchers and the public to achieve an open dialogue over the big issues facing computing today.</p><p>To achieve those goals, the School is embarking on an exciting new project, beginning Sept. 18, with the launch of <strong>The Interaction Hour</strong>, a podcast hosted by Chair <a href="https://www.cc.gatech.edu/people/ayanna-howard"><strong>Ayanna Howard</strong></a>, featuring guest experts expounding on a range of important topics, and crafted by you, the listener.</p><p>&ldquo;If you think about computing and where it&rsquo;s going, it&rsquo;s really about the intersection between the human experience and computing, which is really what the School of Interactive Computing is all about,&rdquo; Howard said. &ldquo;How do we ensure that our computing technology addresses the needs of real people in society, and not just the lab?&rdquo;</p><p>The podcast will focus on a range of topics that affect people in the real world.
Initial episodes focus on ethics in artificial intelligence &ndash; from self-driving cars to lethal autonomous weapons &ndash; with Professor <a href="https://www.cc.gatech.edu/people/ronald-arkin"><strong>Ron Arkin</strong></a>, a new approach to security and privacy called &ldquo;social cybersecurity&rdquo; with Assistant Professor <a href="https://www.cc.gatech.edu/people/sauvik-das"><strong>Sauvik Das</strong></a>, and an important look at how social media can be used as a tool to detect changes in mental health with Assistant Professor <a href="https://www.cc.gatech.edu/people/munmun-dechoudhury"><strong>Munmun De Choudhury</strong></a>.</p><p>The initial episodes are available online on <a href="https://itunes.apple.com/us/podcast/the-interaction-hour/id1435564422?mt=2">iTunes</a>, <a href="https://open.spotify.com/show/4UZ9Hlniz3FvG8uUGdJOm1?si=FeiB6Dq-QlSx8IzsebDDsQ">Spotify</a>, and <a href="https://www.spreaker.com/show/the-interaction-hour">Spreaker</a>, and will be shared on our social media channels <a href="http://www.twitter.com/ICatGT">Twitter</a> and <a href="http://www.facebook.com/ICatGT">Facebook</a> over the coming months. Beyond that, however, the podcast will look to the audience to help guide its discussions.</p><p>&ldquo;We really want to hear directly from the public in person, in our classrooms, on the street, or on social media,&rdquo; Howard said. &ldquo;We want you to tell us what you want to know about computing and society, and we will find an expert for you to address that.&rdquo;</p><p>The unique aspect of the School is the wide range of research areas of the IC faculty.
Topics ranging from virtual reality to health care, ethics to cybersecurity, and information visualization to wearable technology, along with education, robotics, artificial intelligence, and much more, are all within the realm of what IC researchers do.</p><p>Asked whether the School could find an expert for almost any computing topic, Howard didn&rsquo;t hesitate.</p><p>&ldquo;One hundred percent,&rdquo; she said.</p><p>You can subscribe to the podcast at any of the three locations below, with more options to come in the future:</p><ul><li><a href="https://itunes.apple.com/us/podcast/the-interaction-hour/id1435564422?mt=2">iTunes</a></li><li><a href="https://www.spreaker.com/show/the-interaction-hour">Spreaker</a></li><li><a href="https://open.spotify.com/show/4UZ9Hlniz3FvG8uUGdJOm1?si=FeiB6Dq-QlSx8IzsebDDsQ">Spotify</a></li></ul><p>A podcast page devoted to our production will be launched on the School website next Tuesday.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1536767875</created>  <gmt_created>2018-09-12 15:57:55</gmt_created>  <changed>1536767875</changed>  <gmt_changed>2018-09-12 15:57:55</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The podcast, called the Interaction Hour, is launching Tuesday, Sept. 18 and will be available on iTunes, Spotify, and Spreaker.]]></teaser>  <type>news</type>  <sentence><![CDATA[The podcast, called the Interaction Hour, is launching Tuesday, Sept.
18 and will be available on iTunes, Spotify, and Spreaker.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-09-12T00:00:00-04:00</dateline>  <iso_dateline>2018-09-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-09-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>611373</item>      </media>  <hg_media>          <item>          <nid>611373</nid>          <type>image</type>          <title><![CDATA[Interaction Hour]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Podcast Banner Image 2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Podcast%20Banner%20Image%202.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Podcast%20Banner%20Image%202.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Podcast%2520Banner%2520Image%25202.jpg?itok=Q2lLlsj_]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[The Interaction Hour]]></image_alt>                    <created>1536767585</created>          <gmt_created>2018-09-12 15:53:05</gmt_created>          <changed>1536767585</changed>          <gmt_changed>2018-09-12 15:53:05</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.twitter.com/icatgt]]></url>        <title><![CDATA[IC on Twitter]]></title>      </link>          <link>        <url><![CDATA[http://www.facebook.com/icatgt]]></url>        <title><![CDATA[IC on 
Facebook]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="179079"><![CDATA[The interaction hour]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="208"><![CDATA[computing]]></keyword>          <keyword tid="623"><![CDATA[Technology]]></keyword>          <keyword tid="179080"><![CDATA[ethics in ai]]></keyword>          <keyword tid="2835"><![CDATA[ai]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="167543"><![CDATA[social media]]></keyword>          <keyword tid="167731"><![CDATA[social computing]]></keyword>          <keyword tid="177228"><![CDATA[social cybersecurity]]></keyword>          <keyword tid="1404"><![CDATA[Cybersecurity]]></keyword>          <keyword tid="179081"><![CDATA[college fo computing]]></keyword>          <keyword tid="825"><![CDATA[Ayanna Howard]]></keyword>          <keyword tid="175376"><![CDATA[sauvik das]]></keyword>          <keyword tid="89321"><![CDATA[Munmun De Choudhury]]></keyword>          <keyword tid="14444"><![CDATA[ron arkin]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="610978">  <title><![CDATA[Georgia Tech to Present Nine Poster Papers at ECCV 2018]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Next week, a group of Georgia Tech students and faculty will travel to Munich, 
Germany, to attend the <a href="https://eccv2018.org/">European Conference on Computer Vision (ECCV) 2018</a>.</p><p>More than 700 organizations from industry, academia, and government are represented at the 2018 conference, which is held every two years. Georgia Tech will present nine papers during poster sessions at the premier event and is among the top 3 percent of participating institutions based on accepted research.</p><p>Along with presenting several papers, Georgia Tech faculty members have also participated in organizing ECCV 2018. <strong>Devi Parikh</strong>, <strong>Irfan Essa</strong>, <strong>Dhruv Batra</strong>, and <strong>Fuxin Li</strong> served as area chairs for the event.</p><p>&ldquo;ECCV is an exciting conference to participate in. There&rsquo;s a lot of good work that gets presented from top computer vision labs in the world, and it is great that Georgia Tech is one of them! It is a great venue to share our latest ideas and hear what others in the research community are thinking about these days,&rdquo; said <strong>Devi Parikh</strong>, assistant professor in Georgia Tech&rsquo;s <a href="https://www.ic.gatech.edu/">School of Interactive Computing.</a></p><p>Georgia Tech organized the first <a href="https://visualdialog.org/challenge/2018">Visual Dialog Challenge,</a> designed to find methods for artificial intelligence agents to hold a meaningful dialog with humans in natural, conversational language about visual content. Winners will be announced at the conference.</p><p>The conference takes place Sept.
8 through 14 in the heart of Munich at the Gasteig Cultural Center.</p><p>To see an interactive visualization of the entire ECCV 2018 program, please click <a href="https://public.tableau.com/views/ECCV2018-MainProgram/Dashboard1?:embed=y&amp;:display_count=yes&amp;:showVizHome=no">here.</a></p><p>For an interactive visualization of ECCV 2018 by institutions with accepted research, please click <a href="https://public.tableau.com/views/ECCV2018-Top3/Dashboard2?:embed=y&amp;:display_count=yes&amp;:showVizHome=no">here.</a></p><p>An interactive visualization of ECCV 2018 by people and institutions can be viewed <a href="https://public.tableau.com/views/ECCV2018-MainProgram/Dashboard1?:embed=y&amp;:display_count=yes&amp;:showVizHome=no">here.</a></p><p>Below are the titles of Georgia Tech&rsquo;s research being presented this week.</p><p><strong>Georgia Tech at ECCV 2018</strong></p><p><strong><a href="https://arxiv.org/pdf/1804.04259.pdf">Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation</a></strong></p><p>By Zhaoyang Lv*, GEORGIA TECH; Kihwan Kim, NVIDIA; Alejandro Troccoli, NVIDIA; Deqing Sun, NVIDIA; Kautz Jan, NVIDIA; James Rehg, Georgia Institute of Technology</p><p><strong>Read our blog post about this paper on the ML@GT blog <a href="https://mlatgt.blog/2018/09/06/learning-rigidity-and-scene-flow-estimation/">here.</a> </strong></p><p><strong><a href="https://web.engr.oregonstate.edu/~lif/1925.pdf">Multi-object Tracking with Neural Gating using bilinear LSTMs</a></strong></p><p>By Chanho Kim*, Georgia Tech; Fuxin Li, Oregon State University; James Rehg, Georgia Institute of Technology</p><p><strong><a href="http://openaccess.thecvf.com/content_ECCV_2018/papers/Yin_Li_In_the_Eye_ECCV_2018_paper.pdf">In the Eye of Beholder: Joint Learning of Gaze and Actions in First Person Vision</a></strong></p><p>Yin Li*, CMU; Miao Liu, Georgia Tech; James Rehg, Georgia Institute of Technology</p><p><strong><a 
href="https://arxiv.org/pdf/1808.02861.pdf">Choose Your Neuron: Incorporating Domain Knowledge through Neuron Importance</a></strong></p><p>By Ramprasaath Ramasamy Selvaraju*, Georgia Tech; Prithvijit Chattopadhyay, Georgia Institute of Technology; Mohamed Elhoseiny, Facebook; Tilak Sharma, Facebook; Dhruv Batra, Georgia Tech &amp; Facebook AI Research; Devi Parikh, Georgia Tech &amp; Facebook AI Research; Stefan Lee, Georgia Institute of Technology</p><p><strong>Read our blog post about this paper on the ML@GT blog <a href="https://mlatgt.blog/2018/09/05/choose-your-neuron-incorporating-domain-knowledge-through-neuron-importance/">here.</a></strong></p><p><strong><a href="http://users.ece.cmu.edu/~skottur/papers/corefnmn_eccv18.pdf">Visual Coreference Resolution in Visual Dialog using Neural Module Networks</a></strong></p><p>By Satwik Kottur*, Carnegie Mellon University; Jos&eacute; M. F. Moura, Carnegie Mellon University; Devi Parikh, Georgia Tech &amp; Facebook AI Research; Dhruv Batra, Georgia Tech &amp; Facebook AI Research; Marcus Rohrbach, Facebook AI Research</p><p><strong><a href="https://arxiv.org/pdf/1808.00191.pdf">Graph R-CNN for Scene Graph Generation</a></strong></p><p>By Jianwei Yang*, Georgia Institute of Technology; Jiasen Lu, Georgia Institute of Technology; Stefan Lee, Georgia Institute of Technology; Dhruv Batra, Georgia Tech &amp; Facebook AI Research; Devi Parikh, Georgia Tech &amp; Facebook AI Research</p><p><strong>Read our blog post about this paper on the ML@GT <a href="https://mlatgt.blog/2018/09/04/what-is-graph-r-cnn/">blog here.</a> </strong></p><p><strong><a href="http://wyliu.com/papers/LiuECCV18.pdf">SEAL: A Framework Towards Simultaneous Edge Alignment and Learning</a></strong></p><p>By Zhiding Yu*, NVIDIA; Weiyang Liu, Georgia Tech; Yang Zou, Carnegie Mellon University; Chen Feng, Mitsubishi Electric Research Laboratories (MERL); Srikumar Ramalingam, University of Utah; B. V. K. 
Vijaya Kumar, CMU, USA; Jan Kautz, NVIDIA</p><p><strong><a href="http://www.eye.gatech.edu/swapnet/paper.pdf">SwapNet: Image Based Garment Transfer</a></strong></p><p>By Amit Raj, Georgia Tech; Patsorn Sangkloy, Georgia Tech; Huiwen Chang, Princeton; James Hays, Georgia Tech; Duygu Ceylan, Adobe; and Jingwan Lu, Adobe</p><p><a href="http://openaccess.thecvf.com/content_ECCV_2018/papers/Eunji_Chong_Connecting_Gaze_Scene_ECCV_2018_paper.pdf"><strong>Connecting Gaze, Scene, and Attention: Generalized Attention Estimation via Joint Modeling of Gaze and Scene Saliency</strong></a></p><p>By Eunji Chong, Nataniel Ruiz, Yongxin Wang, Yun Zhang, Agata Rozga, James M. Rehg, Georgia Tech</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1536250593</created>  <gmt_created>2018-09-06 16:16:33</gmt_created>  <changed>1536342967</changed>  <gmt_changed>2018-09-07 17:56:07</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech faculty and students will travel to Munich, Germany, to present their research at the European Conference on Computer Vision (ECCV).]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech faculty and students will travel to Munich, Germany, to present their research at the European Conference on Computer Vision (ECCV).]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-09-06T00:00:00-04:00</dateline>  <iso_dateline>2018-09-06T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-09-06 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>610984</item>      </media>  <hg_media>          <item>          <nid>610984</nid>          <type>image</type>          <title><![CDATA[ECCV 2018 
will be held in Munich, Germany]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Munich_skyline_1-1 copy.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Munich_skyline_1-1%20copy.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Munich_skyline_1-1%20copy.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Munich_skyline_1-1%2520copy.jpg?itok=ZGJD8bq6]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1536253387</created>          <gmt_created>2018-09-06 17:03:07</gmt_created>          <changed>1536253387</changed>          <gmt_changed>2018-09-06 17:03:07</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="610957">  <title><![CDATA[What If Robots Could Learn Skills from Scratch?]]></title>  <uid>34773</uid>  <body><![CDATA[<p>Any machine can learn to move with enough engineering, according to 
<strong>Karen Liu, </strong>but imagine what could happen if machines were able to evolve and learn new motions over time with very little instruction, just like a human child does.</p><p>Liu, an associate professor in the <strong>School of Interactive Computing</strong> and member of the <strong>Machine Learning Center at Georgia Tech,</strong> conducts research on simulating and controlling human and animal movements in the digital world with virtual &ldquo;agents&rdquo; or using actual robots in the lab.&nbsp;</p><p>Researchers have been creating moving agents in digital landscapes for many years, but Liu and her team are teaching agents to move using artificial intelligence.</p><p>In previous iterations, robots and agents have been taught using reinforcement learning (RL), which requires extensive coding and algorithmic development for each movement, no matter how big or small.&nbsp;</p><p>In contrast to the common approach of mimicking motion trajectories, Liu&rsquo;s lab wanted to create a virtual agent that learns how to walk on its own.</p><p>Recent advances in deep RL, which combines RL with deep learning, have demonstrated that a &ldquo;minimalist&rdquo; approach can learn locomotion, but the resulting motion appears unnatural.</p><p>Liu&rsquo;s team proposed to train the agent using curriculum learning with adjustable physical aid to create more natural animal locomotion using the minimalist learning approach.&nbsp;</p><p>Curriculum learning is, as it sounds, very similar to how a person goes through their educational process. 
An agent is given a simpler task at the beginning of the learning process, and once it masters the skill, it is able to progress to the next lesson.&nbsp;</p><p>One of the challenges researchers face is making sure the agent&rsquo;s motion looks natural.</p><p>&ldquo;Without motion trajectory to mimic, most locomotion produced by deep RL methods are too energetic or asymmetrical,&rdquo; said Liu.&nbsp;</p><p>To help combat these issues, Liu and her team have introduced a virtual spring that provides physical aid to the agent during the training process.&nbsp;</p><p>For instance, if the agent needs to walk forward, the spring helps to propel it forward. If it is about to fall, the spring pushes it back up. Because the spring is a physical force, its stiffness can easily be adjusted, making the lesson more or less difficult. As the agent learns the skill, the spring is adjusted before eventually being taken out completely.&nbsp;</p><p>For Liu, creating generative models for natural animal motion has always been a fascinating research area. &ldquo;We have been trying to mimic the kinematics and the dynamic characteristics of real animal movements. 
Thanks to the recent development in deep reinforcement learning, for the first time, we are able to also mimic &lsquo;how&rsquo; the real animals acquire motion skills.&rdquo;</p><p>Karen Liu and co-authors Wenhao Yu and Greg Turk recently presented their paper, <a href="https://arxiv.org/abs/1801.08093">&ldquo;Learning Symmetric and Low Energy Locomotion,&rdquo;</a> at <a href="https://s2018.siggraph.org/attend/vancouver/">SIGGRAPH 2018</a> in Vancouver, BC, Canada.&nbsp;</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1536181446</created>  <gmt_created>2018-09-05 21:04:06</gmt_created>  <changed>1536334965</changed>  <gmt_changed>2018-09-07 15:42:45</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A Georgia Tech lab is working to teach robots new skills with minimal data.]]></teaser>  <type>news</type>  <sentence><![CDATA[A Georgia Tech lab is working to teach robots new skills with minimal data.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-09-05T00:00:00-04:00</dateline>  <iso_dateline>2018-09-05T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-09-05 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allie.mcfadden@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>610956</item>      </media>  <hg_media>          <item>          <nid>610956</nid>          <type>image</type>          <title><![CDATA[What If Robots Could Learn Skills from Scratch?]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2018-09-05 at 11.25.27 AM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202018-09-05%20at%2011.25.27%20AM.png]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202018-09-05%20at%2011.25.27%20AM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202018-09-05%2520at%252011.25.27%2520AM.png?itok=USUA-wsA]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1536181222</created>          <gmt_created>2018-09-05 21:00:22</gmt_created>          <changed>1536181222</changed>          <gmt_changed>2018-09-05 21:00:22</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="611005">  <title><![CDATA[IC Researchers Utilizing OMSCS as Test Bed for Wearable Tech in Online Learning]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A team of researchers in the School of Interactive Computing (IC) and the Online Master of Science in Computer Science (OMSCS) program will investigate the feasibility of using wearable technologies and other types of sensing data to provide context in online learning environments.</p><p>This is the first funded project that uses OMSCS as a test bed.</p><p>Under a $300,000 grant from the National Science Foundation, IC Assistant Professor <strong>Lauren Wilcox</strong>, 
along with co-principal investigators <strong>Betsy DiSalvo</strong> (IC), <strong>Thomas Ploetz</strong> (IC), and <strong>David Joyner</strong> (OMSCS), will set up an infrastructure for using wearable technology and interaction analytics to capture students&rsquo; experiences with online courses. They will also investigate which personal interactive computing technologies are effective in capturing and modeling context, and what correlations exist between wearable data, analytics from online behavior, self-reports of stress and anxiety, and learning outcomes. The initial aim is to determine if the use of wearable technologies could better inform online course delivery and improve retention and learning outcomes.</p><p>&ldquo;When we are instructing online courses, we lose an important view into the student experience,&rdquo; Wilcox said. &ldquo;Some students might be paying attention while others aren&rsquo;t. Some might be paying attention, but they still aren&rsquo;t learning. We want to better understand these scenarios and use our knowledge of them to inform better online learning experiences.&rdquo;</p><p>&ldquo;Analyzing data from wearables worn by students during online course instruction could help us understand and recognize these scenarios,&rdquo; Ploetz added.</p><p>Such a study may not be possible at the vast majority of research institutions, but the presence of the OMSCS program at Georgia Tech affords a unique opportunity. 
Because of the high volume of participation and enrollment, not to mention the number of quality, dedicated professors, the researchers expect to establish reliable conclusions and create a foundation for future research.</p><p>&ldquo;We think the OMSCS program is a strong test bed for this research because students are motivated to succeed and course completion rates are very high, and yet the course content and assessments remain extremely rigorous,&rdquo; said Joyner, the associate director of student experience in the College of Computing and a longtime lecturer for OMSCS.</p><p>&ldquo;These students experience the same stress, engagement, discouragement, and triumph as traditional students, but online instructors cannot see these states. Wearable technologies may help identify when these states occur and whether they correlate to desirable learning outcomes.&rdquo;</p><p>The project is a two-year study during which the researchers will establish ground truth on student success and satisfaction. Are they generally happy? Do they disengage or pay greater attention to a specific lecture? What events trigger indicators of stress or anxiety, and at what point is that detrimental to learning?</p><p>&ldquo;The idea is not to provide data on individuals to instructors,&rdquo; Wilcox said, addressing concerns of student privacy. &ldquo;First, we hope to see whether we can collect these data points, understand what they might mean for learning, and then provide anonymous aggregated feedback to instructors. 
It&rsquo;s also about how we can help adapt these learning experiences to the individual students.&rdquo;</p><p>Down the road, it could also be an important test bed for things like test anxiety or understanding what a flow state &ndash; or, colloquially, being &ldquo;in the zone&rdquo; &ndash; looks like and what features of a lecture lend themselves to it.</p><p>&ldquo;One of the most exciting aspects of deploying this type of research in OMSCS is the potential scale for future research,&rdquo; DiSalvo said. &ldquo;This grant is laying the groundwork for future research on designing learning, building upon theories of learning with design-based research both at a scale and with detail of individual behavior and feedback that we have not had access to in the past.&rdquo;</p><p>While the initial applications of this approach are in online courses, access to this type of data could be used to design many new learning environments.</p><p>&ldquo;With just-in-time feedback to the students, we could provide customized learning that really moves away from the traditional class structure,&rdquo; DiSalvo said.</p><p>Beyond learning, with more and more aspects of daily life going online, Wilcox said that she could also see implications of the findings from this study influencing the design of other online environments, such as job training.</p><p>&ldquo;The ultimate goal is to create online learning environments that promote positive human interactions and consider human health and wellness an integral part of the design,&rdquo; she said.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1536259967</created>  <gmt_created>2018-09-06 18:52:47</gmt_created>  <changed>1536259967</changed>  <gmt_changed>2018-09-06 18:52:47</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The research project is under a two-year, $300,000 grant from the National Science Foundation for faculty Lauren Wilcox, Betsy DiSalvo, Thomas Ploetz, and 
David Joyner.]]></teaser>  <type>news</type>  <sentence><![CDATA[The research project is under a two-year, $300,000 grant from the National Science Foundation for faculty Lauren Wilcox, Betsy DiSalvo, Thomas Ploetz, and David Joyner.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-09-06T00:00:00-04:00</dateline>  <iso_dateline>2018-09-06T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-09-06 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>611004</item>      </media>  <hg_media>          <item>          <nid>611004</nid>          <type>image</type>          <title><![CDATA[Online learning stock]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[online learning.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/online%20learning.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/online%20learning.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/online%2520learning.jpg?itok=iJjWc-lh]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Fingers typing on a laptop keyboard]]></image_alt>                    <created>1536259875</created>          <gmt_created>2018-09-06 18:51:15</gmt_created>          <changed>1536259875</changed>          <gmt_changed>2018-09-06 18:51:15</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.omscs.gatech.edu]]></url>        <title><![CDATA[Online 
Master of Science Computer Science]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="1051"><![CDATA[Computer Science]]></keyword>          <keyword tid="208"><![CDATA[computing]]></keyword>          <keyword tid="2483"><![CDATA[interactive computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="69631"><![CDATA[Online Master of Science in Computer Science]]></keyword>          <keyword tid="121521"><![CDATA[OMSCS]]></keyword>          <keyword tid="14511"><![CDATA[online learning]]></keyword>          <keyword tid="109121"><![CDATA[Lauren Wilcox]]></keyword>          <keyword tid="11961"><![CDATA[betsy disalvo]]></keyword>          <keyword tid="176045"><![CDATA[thomas ploetz]]></keyword>          <keyword tid="145291"><![CDATA[David Joyner]]></keyword>          <keyword tid="77691"><![CDATA[wearable technology]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="610021">  <title><![CDATA[Oh, the Places They'll Go: Professor Gregory Abowd Looks Back on 30 Ph.D. 
Graduates]]></title>  <uid>33939</uid>  <body><![CDATA[<p>If you go outside School of Interactive Computing Professor <strong>Gregory Abowd</strong>&rsquo;s office, you might find a rather odd collection of keepsakes.</p><p>A wooden toilet seat sits on a table in the front right corner of his lab space, a metal toilet paper holder not far away. A bobblehead sits behind that, unmistakably the bearded figure of the longtime College of Computing faculty member resting his left foot on the top of a miniature toilet. The smallest of these trinkets is simply a carving of a toilet, and behind it all, yes, the blue plastic door of a portable toilet you might see at a concert or other outdoor venue.</p><p>Each item was given to Abowd by his students, curious selections without the proper context. They might indicate underlying animosity if not for the bright smile they bring to Abowd&rsquo;s face as he describes the origin of each.</p><p>One man&rsquo;s chamber pot, he might say, is another man&rsquo;s reminder of where he&rsquo;s been and where his students have gone. Earlier this year, he graduated his 30<sup>th</sup> Ph.D. student and, among his impressive list of accomplishments, that&rsquo;s the one he cites as the most important.</p><h4><strong>A unique tradition</strong></h4><p>The story goes like this:</p><p>When Abowd was a graduate student at the University of Oxford, his supervisor described his thesis research as &ldquo;a beautiful porcelain pot&rdquo; that he had forgotten to &ndash; we&rsquo;ll just say, &ldquo;fill.&rdquo;</p><p>&ldquo;These rather disparaging words have since had a profoundly positive influence on me,&rdquo; stated a posted explanation of the bathroom fixtures around his lab space. 
&ldquo;Shortly after successfully defending my thesis, a good friend, who knew how deflating my supervisor&rsquo;s words were, gave me a little figurine of a toilet and asked me to keep it nearby in order to preserve my humility throughout my career.&rdquo;</p><p>Since then, it has become tradition to continue to keep Abowd humble throughout each accomplishment of his career &ndash; being awarded tenure, a promotion to full professor, distinguished professor, regents&rsquo; professor, and J.Z. Liang Chair.</p><p>He remembers each of his 30 Ph.D. grads like family. Comparing them to his own 11 brothers and sisters, whom he can understandably recite in order, he shows an uncanny ability to do the same with his students. Prompted with an easy one &ndash; his first Ph.D. graduate &ndash; Abowd didn&rsquo;t hesitate.</p><p>&ldquo;Oh, <strong>Kurt Stirewalt</strong>,&rdquo; Abowd immediately said, as if stunned to receive such a softball question. &ldquo;He went to Michigan State, but he&rsquo;s no longer there.&rdquo;</p><p>Indeed, Stirewalt graduated in 1997, earned an NSF CAREER Award in 2000, and reached the level of associate professor with tenure at Michigan State. Since then, he has gone on to become vice president of application architecture at LogicBox.</p><p>Ok, a tougher one then: Ph.D. graduate No. 17. A slight hesitation.</p><p>&ldquo;That would be <strong>Tracy Westeyn</strong>, right?&rdquo;</p><p>Right. He advised her with Professor <strong>Thad Starner</strong> through her graduation in 2010. She has since gone on to a career in Washington, D.C.</p><p>One more: No. 9. A long pause.</p><p>&ldquo;Would that be Lonnie?&rdquo; Abowd asked. He&rsquo;s one off. <strong>Lonnie Harvel</strong> was No. 8 and graduated in 2005. &ldquo;So, the one right after. That would be <strong>Giovanni</strong> (<strong>Iachello</strong>).&rdquo;</p><p>Right again. 
Iachello graduated in 2006 and is now the head of international and new markets at LinkedIn.</p><h4><strong>Where they&rsquo;ve gone</strong></h4><p>The rest of <strong><a href="https://public.tableau.com/views/Abowds30PhDs/Dashboard1?:embed=y&amp;:display_count=yes&amp;publish=yes&amp;:showVizHome=no">Abowd&#39;s academic family</a></strong>&nbsp;is an equally impressive reminder of the quality of the students who have come through his lab. One former student is at Google, another at Samsung Electronics. There&rsquo;s one at Amazon Lab126 and one at Intel Labs. One has an ever-expanding list of patents, and a host of others have gone on to become academic faculty around the world &ndash; Sweden, India, Korea, Texas, New York, Washington, and more.</p><p>Abowd hesitated when asked whether 30 Ph.D. graduates was a big number, then put it into context.</p><p>&ldquo;It&rsquo;s a big number when you think that most of them have gone on to academic positions,&rdquo; Abowd said. &ldquo;I wouldn&rsquo;t want to say that&rsquo;s the best option for everyone, but it is something I&rsquo;m proud of.&rdquo;</p><p>The impact is exponential. Many of his graduate students have gone on to have graduate students of their own. While his 30 students&rsquo; destinations are relatively easy to map, the task gets significantly more challenging with second and third generations that number in the hundreds.</p><p>They get together as often as possible, usually at conferences like the ACM CHI Conference on Human Factors in Computing Systems or UbiComp. At CHI 2018, Abowd estimated about 50 or 60 former students, including master&rsquo;s students and undergraduates, attended their get-together. Abowd has connected with these students in a profound way &ndash; from one-on-one mentorship to &ldquo;making music&rdquo; together in a band (okay, they were just pretending).</p><p>&ldquo;It&rsquo;s the proudest academic accomplishment for me,&rdquo; Abowd said of his students. 
&ldquo;The students and the quality of the students. We are teachers first, and you develop very close long-term relationships with them. Almost all of them, I am still in contact with, so it really is like your own children.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1534787813</created>  <gmt_created>2018-08-20 17:56:53</gmt_created>  <changed>1535144738</changed>  <gmt_changed>2018-08-24 21:05:38</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Gregory Abowd graduated his 30th Ph.D. student in the spring. They have each gone on to impressive careers in academia and industry.]]></teaser>  <type>news</type>  <sentence><![CDATA[Gregory Abowd graduated his 30th Ph.D. student in the spring. They have each gone on to impressive careers in academia and industry.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-08-20T00:00:00-04:00</dateline>  <iso_dateline>2018-08-20T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-08-20 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>610020</item>          <item>610350</item>      </media>  <hg_media>          <item>          <nid>610020</nid>          <type>image</type>          <title><![CDATA[Gregory Abowd 30 PhDs]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Abowd Main.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Abowd%20Main.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Abowd%20Main.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Abowd%2520Main.jpg?itok=atmOJAFv]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Gregory Abowd stands at his office door]]></image_alt>                    <created>1534787486</created>          <gmt_created>2018-08-20 17:51:26</gmt_created>          <changed>1534787486</changed>          <gmt_changed>2018-08-20 17:51:26</gmt_changed>      </item>          <item>          <nid>610350</nid>          <type>image</type>          <title><![CDATA[Where Are They Now? ]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Abowd&#039;s 30 PhDs_Mercury.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Abowd%27s%2030%20PhDs_Mercury.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Abowd%27s%2030%20PhDs_Mercury.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Abowd%2527s%252030%2520PhDs_Mercury.png?itok=_2cKybH4]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1535144557</created>          <gmt_created>2018-08-24 21:02:37</gmt_created>          <changed>1535144632</changed>          <gmt_changed>2018-08-24 21:03:52</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://public.tableau.com/views/Abowds30PhDs/Dashboard1?:embed=y&amp;:display_count=yes&amp;publish=yes&amp;:showVizHome=no]]></url>        <title><![CDATA[Data Interactive - Abowd's Graduates Across the World]]></title>      </link>          <link>        <url><![CDATA[http://ubicomp.cc.gatech.edu/index.html]]></url>        <title><![CDATA[The Georgia Tech Ubicomp Group]]></title>      </link>          <link>        
<url><![CDATA[https://www.ic.gatech.edu/academics/phd-programs]]></url>        <title><![CDATA[School of Interactive Computing Ph.D. Programs]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="11002"><![CDATA[Gregory Abowd]]></keyword>          <keyword tid="115361"><![CDATA[Ph.D]]></keyword>          <keyword tid="10353"><![CDATA[wearable computing]]></keyword>          <keyword tid="178784"><![CDATA[Kurt Stirewalt]]></keyword>          <keyword tid="178785"><![CDATA[Tracy Westeyn]]></keyword>          <keyword tid="1944"><![CDATA[Thad Starner]]></keyword>          <keyword tid="178786"><![CDATA[Lonnie Harvel]]></keyword>          <keyword tid="178787"><![CDATA[Giovanni Iachello]]></keyword>          <keyword tid="178788"><![CDATA[The School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="610180">  <title><![CDATA[IC’s John Stasko Recognized as Regents Professor By University System of Georgia]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing&rsquo;s (IC) <a href="https://www.ic.gatech.edu/people/john-stasko"><strong>John Stasko</strong></a> was appointed Regents Professor, the highest academic and research recognition bestowed by the Board of Regents of the University System of Georgia, earlier this month.</p><p>Stasko 
was one of four Georgia Tech faculty members to earn the title in the latest announcement and the only one from the College of Computing.</p><p>&ldquo;I feel very honored and grateful to receive this title,&rdquo; Stasko said. &ldquo;For a professor to have their research and teaching recognized in this way is the highest compliment. It would not have been possible without my students and faculty colleagues here over the past 30 years. I&rsquo;d like to thank them for their contributions and their friendship.&rdquo;</p><p>Stasko joined the Georgia Tech faculty in 1989. His primary research area has been in human-computer interaction, with a focus on information and visual analytics. He is the director of the <a href="https://www.cc.gatech.edu/gvu/ii/">Information Interfaces Research Group</a> at Georgia Tech, whose mission is to help people take advantage of information to enrich their lives. He was <a href="http://www.chi.gatech.edu/2016/chi-academy-2016/">named to the prestigious CHI Academy in 2016</a>.</p><p>Georgia Tech Provost <strong>Rafael L. Bras</strong> offered his congratulations to all of the new designees.</p><p>&ldquo;Congratulations to my esteemed colleagues on this new distinction,&rdquo; said Bras, who serves as executive vice president for Academic Affairs and the K. Harrison Brown Family Chair. &ldquo;The world&rsquo;s best and brightest scholars and researchers can be found at Georgia Tech, and this recognition is evidence of their relentless pursuit of excellence in teaching, research, and scholarship.&rdquo;</p><p>IC Chair <a href="https://www.ic.gatech.edu/people/ayanna-howard"><strong>Ayanna Howard</strong></a> also offered a note of support.</p><p>&ldquo;John has long been an admired leader within the School of Interactive Computing,&rdquo; she said. &ldquo;In a school full of the best and brightest minds in computing, he has distinguished himself through his devotion to research and education. 
We couldn&rsquo;t be prouder of him for this well-deserved distinction.&rdquo;</p><p>Each year, college deans may nominate two academic faculty members for the Regents Professor title and one research faculty member for the Regents Researcher title. The other three new Regents Professors are <strong>Ajay Kohli</strong> (Professor, Scheller College of Business), <strong>Timothy Lieuwen</strong> (Professor, School of Aerospace Engineering), and <strong>Catherine L. Ross</strong> (Professor, School of City and Regional Planning). The Regents Researcher is <strong>Michael O. Rodgers</strong> (Principal Research Scientist, School of Civil and Environmental Engineering).</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1534959436</created>  <gmt_created>2018-08-22 17:37:16</gmt_created>  <changed>1534959436</changed>  <gmt_changed>2018-08-22 17:37:16</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Stasko was one of four Georgia Tech faculty members to earn the title in the latest announcement and the only one from the College of Computing.]]></teaser>  <type>news</type>  <sentence><![CDATA[Stasko was one of four Georgia Tech faculty members to earn the title in the latest announcement and the only one from the College of Computing.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-08-22T00:00:00-04:00</dateline>  <iso_dateline>2018-08-22T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-08-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>394731</item>      </media>  <hg_media>          <item>          <nid>394731</nid>          
<type>image</type>          <title><![CDATA[John Stasko]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[stasko14.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/stasko14.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/stasko14.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/stasko14.jpg?itok=v8FF8dAB]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[John Stasko]]></image_alt>                    <created>1449246346</created>          <gmt_created>2015-12-04 16:25:46</gmt_created>          <changed>1475895089</changed>          <gmt_changed>2016-10-08 02:51:29</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="11632"><![CDATA[john stasko]]></keyword>          <keyword tid="19401"><![CDATA[Regents Professors]]></keyword>          <keyword tid="1966"><![CDATA[usg]]></keyword>          <keyword tid="171841"><![CDATA[University System of Georgia Board of Regents]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="825"><![CDATA[Ayanna Howard]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  
<userdata><![CDATA[]]></userdata></node><node id="609771">  <title><![CDATA[Irfan Essa Gives Invited Talk on Computational Video for Sports]]></title>  <uid>34773</uid>  <body><![CDATA[<p>From the introduction of the &ldquo;1<sup>st</sup> and Ten&rdquo; line in NFL broadcasts in 1998 to the use of the Hawk-Eye system for line calls in tennis and cricket, sports viewers now expect to see graphics on their screens to help explain the action.</p><p><strong>Irfan Essa</strong>, director of the <a href="http://ml.gatech.edu/">Machine Learning Center at Georgia Tech (ML@GT)</a>, recently gave an invited talk, <em>Computational Video for Sports: Challenges for Large-Scale Video Analysis</em>, detailing why technology areas such as computer vision and augmented reality are so prevalent in sports broadcasts. During his talk, which took place at the <a href="http://www.vap.aau.dk/cvsports/">4th International Workshop on Computer Vision in Sports (CVsports)</a>&nbsp;in June, he also discussed the challenges that computer vision scientists face when creating technology to improve the sports industry.</p><p>One such challenge involves using computer vision and machine learning to create models that can identify common sports scenes and help write news captions. But what humans easily interpret is often not so simple for machines.</p><p>When analyzing a photo of Georgia Tech&rsquo;s head football coach Paul Johnson getting a celebratory Gatorade bath on the field, a computer algorithm misidentified a&nbsp;camera in the scene as a hair dryer, and the resulting caption read &ldquo;Man taking shower while others watch.&rdquo;</p><p>Improving computer vision so that it can better account for context and draw on better training models is one of the next steps in the field.</p><p>Scientists are also working on how to make this type of technology available for use on lower-quality video. 
Current computer vision techniques work well for broadcast-quality video in part because of the detail available in high-definition, but researchers would like to make the techniques accurate and cost-effective for lower-quality video so that high school sports can also take advantage of their benefits.</p><p>By 2019, video content is expected to account for 80 percent of the world&rsquo;s total web content. Improving computer vision&rsquo;s ability to understand context, analyzing lower-quality video content, and establishing better metrics to analyze the data are all key next steps for this segment of computer science, according to Essa.</p><p>The workshop took place as part of the CVF/IEEE <strong><a href="http://cvpr2018.thecvf.com/">Computer Vision and Pattern Recognition (CVPR)</a></strong> conference in Salt Lake City, Utah. Georgia Tech researchers and alumni presented work in computer vision and were among 6,000 attendees at the conference. Throughout the week, <a href="http://ml.gatech.edu/hg/item/607130">attendees presented their latest research papers</a> through oral presentations, spotlights, poster sessions, and workshops.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1534342940</created>  <gmt_created>2018-08-15 14:22:20</gmt_created>  <changed>1534433882</changed>  <gmt_changed>2018-08-16 15:38:02</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Machine Learning Center Director Irfan Essa discussed how computer vision technology is being used in the sports industry at the 4th International Workshop on Computer Vision in Sports (CVsports).]]></teaser>  <type>news</type>  <sentence><![CDATA[Machine Learning Center Director Irfan Essa discussed how computer vision technology is being used in the sports industry at the 4th International Workshop on Computer Vision in Sports (CVsports).]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-08-15T00:00:00-04:00</dateline>  
<iso_dateline>2018-08-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-08-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Allie McFadden</p><p>Communications Officer</p><p>allison.blinder@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>609738</item>      </media>  <hg_media>          <item>          <nid>609738</nid>          <type>image</type>          <title><![CDATA[IN OR OUT: Hawk-Eye is a vision processing technology that tracks balls with millimeter accuracy to give viewers an up-close view of the action. Photo credit: Wikimedia Commons]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[The_decision_of_In_or_Out_with_the_help_of_Technology_at_Wimbledon.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/The_decision_of_In_or_Out_with_the_help_of_Technology_at_Wimbledon.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/The_decision_of_In_or_Out_with_the_help_of_Technology_at_Wimbledon.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/The_decision_of_In_or_Out_with_the_help_of_Technology_at_Wimbledon.jpg?itok=eqXkrotG]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1534268749</created>          <gmt_created>2018-08-14 17:45:49</gmt_created>          <changed>1534343189</changed>          <gmt_changed>2018-08-15 14:26:29</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>   
       <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="143"><![CDATA[Digital Media and Entertainment]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="143"><![CDATA[Digital Media and Entertainment]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="609490">  <title><![CDATA[School of IC Chair Ayanna Howard Named CMD-IT 2018 Richard A. Tapia Award Winner]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The <a href="http://www.cmd-it.org/">Center for Minorities and People with Disabilities in Information Technology</a> (CMD-IT) announced School of Interactive Computing Chair <a href="https://www.ic.gatech.edu/people/ayanna-howard"><strong>Ayanna Howard</strong></a> as the winner of the Richard A. Tapia Achievement Award for Scientific Scholarship, Civic Science and Diversifying Computing.</p><p>&ldquo;Ayanna Howard has been a leading innovator and researcher in the fields of robotics, computer vision, and artificial intelligence,&rdquo; said Valerie Taylor, CMD-IT CEO and President. &ldquo;Applications of her work have included the development of assistive robots in the home, therapy gaming apps, and remote exploration of extreme environments.&rdquo;</p><p>&ldquo;Throughout her career she has focused on bringing girls, underrepresented minorities, and people with disabilities into computing through programs related to robotics. 
Ayanna&rsquo;s focus on engaging people with disabilities resulted in the creation of Zyrobotics, LLC, which provides inclusive mobile technologies that make learning accessible.&rdquo;</p><p>The Richard A. Tapia Award is given annually to an individual who demonstrates significant research leadership and strong commitment and contributions to diversifying computing. It will be presented at the <a href="http://www.tapiaconference.org/">2018 ACM Richard Tapia Celebration of Diversity in Computing Conference</a>. Themed &ldquo;Diversity: Roots of Innovation,&rdquo; the Tapia Conference will be held Sept. 19-22 in Orlando, Florida. The conference brings together students, faculty, researchers, and professionals from all backgrounds and ethnicities in computing to promote and celebrate diversity in computing. The Tapia Conference is sponsored by the Association for Computing Machinery (ACM) and presented by CMD-IT.</p><p>The vision of CMD-IT is to contribute to the national need for an effective workforce in computing and IT through inclusive programs and initiatives focused on minorities and people with disabilities.</p><p>&ldquo;CMD-IT is focused on improving engagement among diverse communities in computing,&rdquo; Howard said. &ldquo;This is something I have long considered among my missions as a researcher and an educator. To be recognized by such a wonderful organization is truly an honor.&rdquo;</p><p>For more information and to register for the Tapia Conference, visit <a href="http://www.tapiaconference.org">www.tapiaconference.org</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1533762668</created>  <gmt_created>2018-08-08 21:11:08</gmt_created>  <changed>1533762668</changed>  <gmt_changed>2018-08-08 21:11:08</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The Richard A. 
Tapia Award is awarded annually to an individual who demonstrates significant research leadership and strong commitment and contributions to diversifying computing.]]></teaser>  <type>news</type>  <sentence><![CDATA[The Richard A. Tapia Award is awarded annually to an individual who demonstrates significant research leadership and strong commitment and contributions to diversifying computing.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-08-08T00:00:00-04:00</dateline>  <iso_dateline>2018-08-08T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-08-08 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>599486</item>      </media>  <hg_media>          <item>          <nid>599486</nid>          <type>image</type>          <title><![CDATA[Ayanna Howard headshot]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Howard 2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Howard%202.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Howard%202.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Howard%25202.jpg?itok=FKpjBeR-]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Ayanna Howard]]></image_alt>                    <created>1512405411</created>          <gmt_created>2017-12-04 16:36:51</gmt_created>          <changed>1512405411</changed>          <gmt_changed>2017-12-04 16:36:51</gmt_changed>      </item>      </hg_media>  
<related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="825"><![CDATA[Ayanna Howard]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="433"><![CDATA[IC]]></keyword>          <keyword tid="2435"><![CDATA[ECE]]></keyword>          <keyword tid="66891"><![CDATA[Georgia Tech School of Electrical and Computer Engineering]]></keyword>          <keyword tid="109"><![CDATA[Georgia Tech]]></keyword>          <keyword tid="505"><![CDATA[gatech]]></keyword>          <keyword tid="5022"><![CDATA[Richard Tapia]]></keyword>          <keyword tid="175368"><![CDATA[Tapia Celebration of Diversity in Computing]]></keyword>          <keyword tid="170724"><![CDATA[TAPIA]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="608094">  <title><![CDATA[Fifth Summer of Civic Data Science Program Presents Community-Focused Solutions]]></title>  <uid>34541</uid>  <body><![CDATA[<p>Students presented data-oriented solutions for civic problems, from public health to environmentalism, at the <a href="http://civicdatascience.gatech.edu/">Civic Data Science</a> (CDS) finale on July 19.</p><p>The 10-week summer program brings college students from across the country to Georgia Tech to use data science research and applications for direct civic and social impact. 
The National Science Foundation&ndash;funded program is now in its fifth year, previously under the name Data Science for Social Good.</p><p>This year&rsquo;s projects addressed gentrification, sustainable transportation, pest control, and environmental monitoring. Each project pairs with local organizations, such as the City of Atlanta and neighborhood planning units, to ensure the work can help the community.</p><p>&ldquo;Community organizations help make sure the kind of problems we&rsquo;re working on are grounded in reality,&rdquo; said School of Computer Science (SCS) Professor and program co-director <a href="https://www.scs.gatech.edu/people/11077/ellen-zeguras"><strong>Ellen Zegura</strong></a>. &ldquo;It&rsquo;s public problem solving.&rdquo;</p><p>It also lets students see how their data skills can be used outside the classroom.</p><p>&ldquo;The CDS program is unique in that it provides the perfect balance between research and application,&rdquo; said <a href="https://www.linkedin.com/in/michael-koohang-5611ba14a/"><strong>Michael Koohang</strong></a>, a rising fourth-year student at Middle Georgia State University. 
&ldquo;While we conducted formal research during the program, we were also applying our discoveries to tangible pieces of work that had almost immediate impact on local communities.&rdquo;</p><p>For many students, who often come from smaller liberal arts colleges, this was their first opportunity in an environment as large and well-resourced as Tech.</p><p>&ldquo;I knew I was interested in attending graduate school before I came to Tech, but working full-time in a lab, with a mentor and team, has given me invaluable insight into the day-to-day of research,&rdquo; said Wellesley College rising third-year student <a href="https://www.linkedin.com/in/annabel-rothschild-488327124/"><strong>Annabel Rothschild</strong></a>.</p><p>Tech&rsquo;s focus on interdisciplinary research also showed students the potential fields they could go into.</p><p>&ldquo;Prior to coming to CDS, I had a lot of difficulty trying to figure out how to combine both my majors, statistical and data sciences and government, into something that excited me,&rdquo; said <a href="https://www.linkedin.com/in/arielle-dror-8a488215b/"><strong>Arielle Dror</strong></a>, a rising third-year student at Smith College. &ldquo;Spending time in a bigger university than my own home institution showed me the exciting world of interdisciplinary research.&rdquo;</p><p>Zegura co-runs the program with <a href="https://www.lmc.gatech.edu/">Literature, Media, and Communication</a> (LMC) Associate Professor <a href="https://ledantec.net/"><strong>Christopher Le Dantec</strong></a>. 
They also have the support of faculty mentors: <a href="https://spp.gatech.edu/">School of Public Policy</a> Assistant Professor <a href="https://spp.gatech.edu/people/person/omar-isaac-asensio"><strong>Omar Asensio</strong></a>, LMC Associate Professor <a href="https://www.iac.gatech.edu/people/faculty/disalvo"><strong>Carl DiSalvo</strong></a>, LMC Assistant Professor <a href="https://www.iac.gatech.edu/people/faculty/loukissas"><strong>Yanni Loukissas</strong></a>, SCS research scientist <a href="https://www.cc.gatech.edu/people/amanda-meng"><strong>Amanda Meng</strong></a>, and <a href="https://www.ce.gatech.edu/">School of Civil and Environmental Engineering</a> Associate Professor <a href="https://ce.gatech.edu/people/faculty/5861/overview"><strong>Kari Watkins</strong></a>.</p><p>The four projects included:</p><h2><strong>Project:</strong> Rat Watch by <strong>Winne Luo</strong> and Michael Koohang</h2><p><strong>Mentors: </strong>Carl DiSalvo, Amanda Meng, Ellen Zegura</p><p><strong>Problem:</strong> Rats are everywhere, but no reliable public data is kept because rat control is outside city jurisdiction and only homeowners can report rats. Without oversight, rats can increase disease, asthma, and stress, and cause infrastructure damage.</p><p><strong>Solution: </strong>Luo and Koohang created an SMS chatbot through which residents could report rat sightings via text. They created an interactive map with this data that lets users toggle between layers of code violations to see where rats are. 
The map can help city officials direct mitigation efforts and provide citizens with a tool to spur the government into action.</p><h2><strong>Project:</strong> Atlanta Map Room by Annabel Rothschild and <a href="https://www.linkedin.com/in/muniba-khan-bb1883b4/"><strong>Muniba Khan</strong></a></h2><p><strong>Mentors:</strong> Yanni Loukissas</p><p><strong>Problem: </strong>Maps often depict an idealized environment created by the population in power and not everyone&rsquo;s reality. This team wanted to document and reflect upon the connections and disjunctions between civic data and lived experience in Atlanta.</p><p><strong>Solution</strong>: They created the Atlanta Map Room in the Technology Square Research Building, where everyone can collaborate on large-scale, interpretive maps. Using an app that allows users to select an area of the city to focus on and project it onto a piece of paper, users can write their experiences on the map and bring their narrative back to the data. The map allows users to critique data and recognize it may not always tell the full story.</p><h2><strong>Project: </strong>Popular Sentiment of U.S. Electric Vehicle Drivers by Arielle Dror, <a href="https://www.linkedin.com/in/emerson-wenzel/"><strong>Emerson Wenzel</strong></a>, and <strong>Kevin Alvarez</strong></h2><p><strong>Mentors: </strong>Omar Asensio</p><p><strong>Problem:</strong> Although electric vehicles make up just 2 percent of car sales today, they are projected to reach 55 percent by 2050. Despite this boom, charging station experiences are less than accessible.</p><p><strong>Solution</strong>: This group pulled data from the app Plugshare, where users rate electric vehicle charging stations, to determine how well the current electric vehicle infrastructure serves drivers. They used machine learning to automatically classify all the reviews as having a negative or positive sentiment. 
Overall, roughly 40 percent of drivers have a poor experience at charging stations, a problem that needs to be fixed as the market expands.</p><h2><strong>Project:</strong> Seeing Like A Bike by Nic Alton and <a href="https://www.linkedin.com/in/saumik-narayanan-533b1b132/"><strong>Saumik Narayanan</strong></a></h2><p><strong>Mentors: </strong>Christopher Le Dantec and Kari Watkins</p><p><strong>Problem: </strong>Traffic in Atlanta grows worse every year, but better bike infrastructure can alleviate congestion. Yet the most heavily trafficked routes often have higher pollution, which adversely affects cyclists&rsquo; health.</p><p><strong>Solution: </strong>Students attached low-cost air quality sensors to bikes and ran a series of calibration tests against high-precision sensing equipment. The data will enable a large-scale deployment of bikes to collect air quality data from around the city, determining which routes are too unhealthy for cyclists.</p>]]></body>  <author>Tess Malone</author>  <status>1</status>  <created>1532615303</created>  <gmt_created>2018-07-26 14:28:23</gmt_created>  <changed>1533047400</changed>  <gmt_changed>2018-07-31 14:30:00</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Students worked on environmental, public health, and civic-minded projects for the fifth annual Civic Data Science program.]]></teaser>  <type>news</type>  <sentence><![CDATA[Students worked on environmental, public health, and civic-minded projects for the fifth annual Civic Data Science program.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-07-26T00:00:00-04:00</dateline>  <iso_dateline>2018-07-26T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-07-26 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[tess.malone@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Tess Malone, Communications Officer</p><p><a 
href="mailto:tess.malone@cc.gatech.edu">tess.malone@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>608096</item>      </media>  <hg_media>          <item>          <nid>608096</nid>          <type>image</type>          <title><![CDATA[CDS group]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[29744050898_e54ef5b162_k.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/29744050898_e54ef5b162_k.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/29744050898_e54ef5b162_k.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/29744050898_e54ef5b162_k.jpg?itok=JZ-8_SOl]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[CDS group photo]]></image_alt>                    <created>1532615454</created>          <gmt_created>2018-07-26 14:30:54</gmt_created>          <changed>1532615454</changed>          <gmt_changed>2018-07-26 14:30:54</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="545781"><![CDATA[Institute for Data Engineering and Science]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39511"><![CDATA[Public Service, Leadership, and Policy]]></term>      </core_research_areas>  <news_room_topics>      
</news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="607617">  <title><![CDATA[Georgia Tech Solves 'Texture Fill' Problem with Machine Learning]]></title>  <uid>32045</uid>  <body><![CDATA[<p>A new machine learning technique developed at Georgia Tech may soon give budding fashionistas and other designers the freedom to create realistic, high-resolution visual content without relying on complicated 3-D rendering programs.</p><p><a href="http://texturegan.eye.gatech.edu/">TextureGAN</a> is the first deep image synthesis method that can realistically spread multiple textures across an object. With this new approach, users drag one or more texture patches onto a sketch &mdash; say of a handbag or a skirt &mdash;&nbsp;and the network texturizes the sketch to accurately account for 3-D surfaces and lighting.</p><h5><a href="https://youtu.be/bCBDPfWdpDc" target="_blank">[VIDEO: See&nbsp;TextureGAN&nbsp;in action]</a></h5><p>Prior to this work, producing realistic images of this kind could be tedious and time-consuming, particularly for those with limited experience. 
And, according to the researchers, existing machine learning-based methods are not particularly good at generating high-resolution texture details.</p><h4><strong>Using a neural network to improve results</strong></h4><p>&ldquo;The &lsquo;texture fill&rsquo; operation is difficult for a deep network to learn because it not only has to propagate the color, but also has to learn how to synthesize the structure of texture across 3-D shapes,&rdquo; said <strong>Wenqi Xian</strong>, a computer science (CS) major and co-lead developer.</p><h5><a href="https://youtu.be/XWr0Fg5XbPs?t=1h32m44s" target="_blank">[VIDEO:&nbsp;Wenqi&nbsp;Xian presents TextureGAN at CVPR&nbsp;2018]</a></h5><p>The researchers initially trained a type of neural network called a conditional generative adversarial network (GAN) on sketches and textures extracted from thousands of ground-truth photographs. In this approach,&nbsp;a generator neural network creates images that a discriminator neural network then evaluates for accuracy. The goal is for both to get increasingly better at their respective tasks, which leads to more realistic outputs.</p><p>To ensure that the results look as realistic as possible, researchers fine-tuned the new system to minimize pixel-to-pixel style differences between generated images and training data. But the results were not quite what the team had expected.</p><h4><strong>Producing more realistic images</strong></h4><p>&ldquo;We realized that we needed a stronger constraint to preserve high-level texture in our outputs,&rdquo; said Georgia Tech CS Ph.D. student <strong>Patsorn Sangkloy</strong>. &ldquo;That&rsquo;s when we developed an additional discriminator network that we trained on a separate texture dataset. Its only job is to be presented with two samples and ask &lsquo;are these the same or not?&rsquo;&rdquo;</p><p>With its sole focus on a single question, this type of discriminator is much harder to fool. 
This, in turn, leads the generator to produce images that are not only realistic, but also true to the texture patch the user placed onto the sketch.</p><p>The work was presented in June at the conference on&nbsp;<a href="http://cvpr2018.thecvf.com/" target="_blank">Computer Vision and Pattern Recognition (CVPR) 2018</a> held in Salt Lake City and is funded through National Science Foundation award 1561968. <a href="https://www.ic.gatech.edu/">School of Interactive Computing</a> Associate Professor <strong>James Hays</strong> advises Xian and Sangkloy. Georgia Tech is collaborating on this research with Adobe Research, University of California at Berkeley, and Argo AI.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1531245139</created>  <gmt_created>2018-07-10 17:52:19</gmt_created>  <changed>1531405779</changed>  <gmt_changed>2018-07-12 14:29:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new technique allows users to spread textures across sketches of objects to create high resolution images.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new technique allows users to spread textures across sketches of objects to create high resolution images.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-07-10T00:00:00-04:00</dateline>  <iso_dateline>2018-07-10T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-07-10 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[albert.snedeker@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Albert Snedeker, Communications Manager</p><p><a href="mailto:albert.snedeker@cc.gatech.edu?subject=Texture%20Patch">albert.snedeker@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>607631</item>      </media>  <hg_media>          <item>          <nid>607631</nid>          <type>image</type>          
<title><![CDATA[Georgia Tech Using Machine Learning to Solve Texture Fill Problem]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[zebra-texture-11297063007KgE.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/zebra-texture-11297063007KgE.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/zebra-texture-11297063007KgE.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/zebra-texture-11297063007KgE.jpg?itok=A6yfTjGT]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech Using Machine Learning to Solve Texture Fill Problem]]></image_alt>                    <created>1531254179</created>          <gmt_created>2018-07-10 20:22:59</gmt_created>          <changed>1531254179</changed>          <gmt_changed>2018-07-10 20:22:59</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="9167"><![CDATA[machine learning]]></keyword>          <keyword tid="178516"><![CDATA[texture GAN]]></keyword>          <keyword tid="178517"><![CDATA[neural network]]></keyword>          <keyword tid="109581"><![CDATA[deep learning]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  
<related></related>  <userdata><![CDATA[]]></userdata></node><node id="607634">  <title><![CDATA[App Developed by College of Computing Undergrads is a One-Stop Shop to Report Human Trafficking]]></title>  <uid>33939</uid>  <body><![CDATA[<p>An updated mobile application designed by undergraduates in Georgia Tech&#39;s College of Computing on behalf of&nbsp;<a href="https://airlineamb.org/">Airline Ambassadors International</a> could drastically reduce human trafficking through airlines by giving flight attendants the tools they need to pinpoint threats and tip off authorities.</p><p>First developed in 2015, the application, called TIP Line, received a needed update from students in Georgia Tech&rsquo;s CS Junior Design class. The update makes reporting to authorities faster and more reliable by bringing trained users directly into contact with local law enforcement at the destination airport rather than relying on largely unreliable national hotlines.</p><p>TIP Line leverages trained airline professionals who have graduated from the AAI C-TIP (Counter-Trafficking in Persons) training class and been given a registration key to use the app, ensuring that law enforcement will take any tip from the app seriously.</p><p>Instead of submitting tips to one of more than 190 national hotlines worldwide, many of which operate only during local business hours and suffer from a high rate of false reporting, TIP Line&rsquo;s trained reporters are automatically brought into contact with the correct authorities, many of whom have also taken the peer-to-peer training class with the airline personnel.</p><p>TIP Line challenges the state of the art in reporting human trafficking by air because of its peer-to-peer, time-sensitive nature, as well as its data-rich format, which allows video, photo, voice, and text to be anonymously transmitted to assigned law enforcement in real time.</p><p>According to the <a 
href="http://www.ilo.org/global/lang--en/index.htm">International Labor Organization</a>, forced labor and human trafficking form an industry estimated at more than $150 billion, with 40.3 million victims. In Atlanta alone, the sex trade is thought to generate $290 million annually. In Dallas, the total is over $350 million.</p><p>&ldquo;Human trafficking is one of the fastest-growing activities in transnational crime,&rdquo; said <strong>William Cheng</strong>, one of the Georgia Tech students who worked on the app. &ldquo;However, it has a weakness. When a human trafficker is transporting a victim in the air, the favored method of transport, they become vulnerable because they are in a public location surrounded by airport security and an unwilling victim. With this vulnerability in mind, our team aims to drastically reduce trafficking by giving flight attendants the proper tools to recognize and report.&rdquo;</p><p>The app allows a user to choose whom to contact with the information. A geo-location function can help decide which phone number is appropriate, or users can select a destination airport to find the best contact in the app&rsquo;s database. If a different authority is required, users can scroll through a list of available numbers also stored in the database.</p><p>&ldquo;There is a growing global trend in the airline industry for reporting to appropriate airport police and not national tip lines,&rdquo; said <strong>David Rivard</strong>, a member of the AAI board and the organization&rsquo;s liaison with the Georgia Tech team. 
&ldquo;Interpol, for example, has a new program called <em>AIRCop</em> that makes available to signatory airports its 24-7 crime database for identification of perpetrators for all things trafficking.&rdquo;</p><p>Those reporting are then given the option to provide a description, video, audio, or photo as evidence, giving the local authority additional information with which to assess the threat and locate the perpetrator and victim.</p><p>&ldquo;Perhaps most importantly, an app reporting solution creates a concerned citizens network, which is most important for combatting crime networks,&rdquo; Rivard said. &ldquo;TIP Line Version 2.0 (the current version) collects data and can distill it into actionable intelligence.&rdquo;</p><p>Currently, this version of the app is used by more than 7,000 trainees &ndash; airline flight crews, airport staff, and others &ndash; who can monitor more than 168 million passengers each year. It also serves as a model for other transportation services, such as Uber, that seek to add similar features to their applications.</p><p>The TIP team, as the Georgia Tech students are affectionately called, aims to present the app to Interpol in hopes of further integrating it with enforcement agencies and, eventually, taking it beyond human trafficking.</p><p>&ldquo;In terms of reporting, especially in the time-critical air transport environment, we can no longer afford to live in the telephone age,&rdquo; Rivard said.</p><p>The Georgia Tech students &ndash; Cheng, <strong>Kenta Kawaguchi</strong>, <strong>Kyle Al-Shafei</strong>, <strong>Micah Jo</strong>, and <strong>Heather Schirra</strong> &ndash; were connected with the program through School of Interactive Computing Professor Emeritus <strong>Jim Foley</strong>, who heard through a colleague that AAI was using a rudimentary first version of the app that needed improvements. 
Cheng and Schirra are currently continuing work on the app.</p><p>TIP Line is available to trained users on iPhone and Android.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1531319745</created>  <gmt_created>2018-07-11 14:35:45</gmt_created>  <changed>1531319745</changed>  <gmt_changed>2018-07-11 14:35:45</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The app was developed in tandem with Airline Ambassadors International and is available to airline professionals trained in identifying trafficking.]]></teaser>  <type>news</type>  <sentence><![CDATA[The app was developed in tandem with Airline Ambassadors International and is available to airline professionals trained in identifying trafficking.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-07-11T00:00:00-04:00</dateline>  <iso_dateline>2018-07-11T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-07-11 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>607633</item>      </media>  <hg_media>          <item>          <nid>607633</nid>          <type>image</type>          <title><![CDATA[AAI Tip Line]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2018-07-11 at 10.30.54 AM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202018-07-11%20at%2010.30.54%20AM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202018-07-11%20at%2010.30.54%20AM.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202018-07-11%2520at%252010.30.54%2520AM.png?itok=KAzvlBo5]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[AAI TIP Line App]]></image_alt>                    <created>1531319522</created>          <gmt_created>2018-07-11 14:32:02</gmt_created>          <changed>1531319522</changed>          <gmt_changed>2018-07-11 14:32:02</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="78531"><![CDATA[Jim Foley]]></keyword>          <keyword tid="178520"><![CDATA[Airline Ambassadors International]]></keyword>          <keyword tid="178521"><![CDATA[TIP Line]]></keyword>          <keyword tid="62081"><![CDATA[human trafficking]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>          <topic tid="71901"><![CDATA[Society and Culture]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node 
id="607374">  <title><![CDATA[Georgia Tech Ranks in the Top 20 for Most Accepted Papers at ICML 2018]]></title>  <uid>34773</uid>  <body><![CDATA[<p>With more than&nbsp;300 universities and companies represented, the Georgia Institute of Technology&nbsp;ranks #14 for the number of accepted papers at the&nbsp;<a href="https://icml.cc/">International Conference on Machine Learning (ICML)</a>&nbsp;in Stockholm, Sweden, July 10-15.</p><p>ICML is the leading international machine learning conference and is supported by the&nbsp;<a href="http://www.machinelearning.org/">International Machine Learning Society (IMLS)</a>. This year marks the conference&#39;s 35th year.</p><p>Throughout the five-day conference, more than seven Georgia Tech faculty members and eight students will present 11 papers through oral presentations, poster sessions, tutorials, invited talks, and workshops.</p><p>The group will have representatives from the School of Interactive Computing, the School of Computational Science and Engineering, the Machine Learning Center at Georgia Tech, and the GVU Center. <strong>Le Song</strong>, associate director of Georgia Tech&rsquo;s Machine Learning Center, leads the pack, contributing to six of the 11 papers.</p><p>&ldquo;It&#39;s great to see that Georgia Tech continues to make great scientific contributions to the international machine learning community, especially in the area of deep learning over graphs and reinforcement learning,&rdquo; said Song.</p><p>The conference takes place in conjunction with the&nbsp;<a href="https://www.ijcai-18.org/">International Joint Conference on Artificial Intelligence (IJCAI)</a>, the&nbsp;<a href="http://celweb.vuse.vanderbilt.edu/aamas18/">International Conference on Autonomous Agents and Multiagent Systems (AAMAS)</a>, and the&nbsp;<a href="http://iccbr18.com/">International Conference on Case-Based&nbsp;Reasoning&nbsp;(ICCBR)</a>.</p><p>Below are the titles and abstracts from each of Georgia Tech&rsquo;s papers. 
An&nbsp;<a href="https://public.tableau.com/views/GeorgiaTechICML2018/Dashboard1?:embed=y&amp;:display_count=yes&amp;:showVizHome=no">interactive data graphic</a> is available and allows users to&nbsp;explore the papers, authors, and presentation schedule. Coverage will&nbsp;be available during the conference on Twitter at @mlatgt and Instagram at @mlatgeorgiatech.</p><p><strong>Georgia Tech at ICML 2018</strong></p><p><strong><a href="https://arxiv.org/pdf/1708.04783.pdf">Non-convex Conditional Gradient Sliding</a></strong></p><p><em>chao qu (technion) &middot; Yan Li (Georgia Institute of Technology) &middot; Huan Xu (Georgia Tech)</em></p><p>Abstract: We investigate a projection free method, namely conditional gradient sliding on batched, stochastic and finite-sum non-convex problem. CGS is a smart combination of Nesterov&#39;s accelerated gradient method and Frank-Wolfe (FW) method, and outperforms FW in the convex setting by saving gradient computations. However, the study of CGS in the non-convex setting is limited. In this paper, we propose the non-convex conditional gradient sliding (NCGS) which surpasses the non-convex Frank-Wolfe method in batched, stochastic and finite-sum setting.</p><p><strong><a href="https://arxiv.org/pdf/1802.07814.pdf">Learning to Explain: An Information-Theoretic Perspective on Model Interpretation</a></strong></p><p><em>Jianbo Chen (University of California, Berkeley) &middot; Le Song (Georgia Institute of Technology) &middot; Martin Wainwright (University of California at Berkeley) &middot; Michael Jordan (UC Berkeley)&nbsp;</em></p><p>Abstract: We introduce instance wise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. 
This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.</p><p><strong><a href="https://arxiv.org/pdf/1710.01410.pdf">Learning Registered Point Processes from Idiosyncratic Observations</a></strong></p><p><em>Hongteng Xu (InfiniaML, Inc) &middot; Lawrence Carin (Duke) &middot; Hongyuan Zha (Georgia Institute of Technology)</em></p><p>Abstract: A parametric point process model is developed, with modeling based on the assumption that sequential observations often share latent phenomena, while also possessing idiosyncratic effects. An alternating optimization method is proposed to learn a &quot;registered&quot; point process that accounts for shared structure, as well as &quot;warping&quot; functions that characterize idiosyncratic aspects of each observed sequence. Under reasonable constraints, in each iteration we update the sample-specific warping functions by solving a set of constrained nonlinear programming problems in parallel, and update the model by maximum likelihood estimation. The justifiability, complexity and robustness of the proposed method are investigated in detail, and the influence of sequence stitching on the learning results is examined empirically. 
Experiments on both synthetic and real-world data demonstrate that the method yields explainable point process models, achieving encouraging results compared to state-of-the-art methods.</p><p><strong><a href="https://arxiv.org/pdf/1710.10568.pdf">Stochastic Training of Graph Convolutional Networks</a></strong></p><p><em>Jianfei Chen (Tsinghua University) &middot; Jun Zhu (Tsinghua University) &middot; Le Song (Georgia Institute of Technology)</em></p><p>Abstract: Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes the representation of a node recursively from its neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts on reducing the receptive field size by subsampling neighbors do not have a convergence guarantee, and their receptive field size per node is still in the order of hundreds. In this paper, we develop control variate-based algorithms, which allow sampling an arbitrarily small neighbor size. Furthermore, we prove new theoretical guarantee for our algorithms to converge to a local optimum of GCN. Empirical results show that our algorithms enjoy a similar convergence with the exact algorithm using only two neighbors per node. The runtime of our algorithms on a large Reddit dataset is only one seventh of previous neighbor sampling algorithms.</p><p><strong><a href="https://openreview.net/pdf?id=SyPMT6gAb">Variance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization</a></strong></p><p><em>Hang Wu (Georgia Institute of Technology) &middot; May Wang (Georgia Institute of Technology)</em></p><p>&nbsp;Abstract: Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts. 
One of the major challenges of off-policy learning is to derive counterfactual estimators that also have low variance and thus low generalization error.<br />In this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedbacks. Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need for iterating through all training samples to compute sample variance regularization in prior work. With neural network policies, our end-to-end training algorithms using variational divergence minimization showed significant improvement over conventional baseline algorithms and are also consistent with our theoretical results.</p><p><strong><a href="https://arxiv.org/pdf/1802.03493.pdf">More Robust Doubly Robust Off-policy Evaluation</a></strong></p><p><em>Mehrdad Farajtabar (Georgia Tech) &middot; Yinlam Chow (DeepMind) &middot; Mohammad Ghavamzadeh (Google DeepMind and INRIA)</em></p><p>Abstract: We study the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goal is to estimate the performance of a policy from the data generated by another policy(ies). In particular, we focus on the doubly robust (DR) estimators that consist of an importance sampling (IS) component and a performance model, and utilize the low (or zero) bias of IS and low variance of the model at the same time. Although the accuracy of the model has a huge impact on the overall performance of DR, most of the work on using the DR estimators in OPE has been focused on improving the IS part, and not much on how to learn the model. In this paper, we propose alternative DR estimators, called more robust doubly robust (MRDR), that learn the model parameter by minimizing the variance of the DR estimator. We first present a formulation for learning the DR model in RL. 
We then derive formulas for the variance of the DR estimator in both contextual bandits and RL, such that their gradients with respect to the model parameters can be estimated from the samples, and propose methods to efficiently minimize the variance. We prove that the MRDR estimators are strongly consistent and asymptotically optimal. Finally, we evaluate MRDR in bandit and RL benchmark problems, and compare its performance with the existing methods.</p><p><strong><a href="https://arxiv.org/pdf/1806.02371.pdf">Adversarial Attack on Graph Structured Data</a></strong></p><p><em>Hanjun Dai (Georgia Tech) &middot; Hui Li (Ant Financial Services Group) &middot; Tian Tian () &middot; Xin Huang (Ant Financial) &middot; Lin Wang () &middot; Jun Zhu (Tsinghua University) &middot; Le Song (Georgia Institute of Technology)</em></p><p>Abstract: Deep learning on graph structures has shown exciting results in various applications. However, little attention has been paid to the robustness of such models, in contrast to numerous research works for image or text adversarial attack and defense. In this paper, we focus on adversarial attacks that fool the model by modifying the combinatorial structure of data. We first propose a reinforcement learning based attack method that learns a generalizable attack policy, while only requiring prediction labels from the target classifier. Also, variants of genetic algorithms and gradient methods are presented in the scenario where prediction confidence or gradients are available. We use both synthetic and real-world data to show that a family of Graph Neural Network models is vulnerable to these attacks, in both graph-level and node-level classification tasks. 
We also show such attacks can be used to diagnose the learned classifiers.</p><p><strong>&nbsp;<a href="https://arxiv.org/pdf/1806.02934.pdf">Learn from Your Neighbor: Learning Multi-modal Mappings from Sparse Annotations</a></strong></p><p><em>Ashwin Kalyan (Georgia Tech) &middot; Stefan Lee (Georgia Institute of Technology) &middot; Anitha Kannan (Curai) &middot; Dhruv Batra (Georgia Institute of Technology / Facebook AI Research)</em></p><p>&nbsp;Abstract: Many structured prediction problems (particularly in vision and language domains) are ambiguous, with multiple outputs being &lsquo;correct&rsquo; for an input &ndash; e.g. there are many ways of describing an image, multiple ways of translating a sentence; however, exhaustively annotating the applicability of all possible outputs is intractable due to exponentially large output spaces (e.g. all English sentences). In practice, these problems are cast as multi-class prediction, with the likelihood of only a sparse set of annotations being maximized &ndash; unfortunately penalizing for placing beliefs on plausible but unannotated outputs. We make and test the following hypothesis &ndash; for a given input, the annotations of its neighbors may serve as an additional supervisory signal. Specifically, we propose an objective that transfers supervision from neighboring examples. We first study the properties of our developed method in a controlled toy setup before reporting results on multi-label classification and two image-grounded sequence modeling tasks &ndash; captioning and question generation. 
We evaluate using standard task-specific metrics and measures of output diversity, finding consistent improvements over standard maximum likelihood training and other baselines.</p><p><strong>&nbsp;<a href="https://arxiv.org/pdf/1710.07742.pdf">Towards Black-box Iterative Machine Teaching</a></strong></p><p><em>Weiyang Liu (Georgia Tech) &middot; Bo Dai (Georgia Institute of Technology) &middot; Xingguo Li (University of Minnesota) &middot; Zhen Liu (Georgia Tech) &middot; James Rehg (Georgia Tech) &middot; Le Song (Georgia Institute of Technology)</em></p><p>Abstract: In this paper, we make an important step towards black-box machine teaching by considering cross-space machine teaching, where the teacher and the learner use different feature representations and the teacher cannot fully observe the learner&#39;s model. In such a scenario, we study how the teacher is still able to teach the learner to achieve a faster convergence rate than traditional passive learning. We propose an active teacher model that can actively query the learner (i.e., make the learner take exams) to estimate the learner&#39;s status and provably guide the learner to achieve faster convergence. The sample complexities for both teaching and query are provided. In the experiments, we compare the proposed active teacher with the omniscient teacher and verify the effectiveness of the active teacher model.</p><p><strong><a href="https://arxiv.org/pdf/1704.01665.pdf">Learning Steady-States of Iterative Algorithms over Graphs</a></strong></p><p><em>Hanjun Dai (Georgia Tech) &middot; Zornitsa Kozareva () &middot; Bo Dai (Georgia Institute of Technology) &middot; Alex Smola (Amazon) &middot; Le Song (Georgia Institute of Technology)</em></p><p>Abstract: The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. 
Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph-embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.</p><p><strong><a href="https://arxiv.org/pdf/1712.10285.pdf">SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation</a></strong></p><p><em>Bo Dai (Georgia Institute of Technology) &middot; Albert Shaw (Georgia Tech) &middot; Lihong Li (Microsoft Research) &middot; Lin Xiao (Microsoft Research) &middot; Niao He (UIUC) &middot; Zhen Liu (Georgia Tech) &middot; Jianshu Chen (Microsoft Research) &middot; Le Song (Georgia Institute of Technology)</em>&nbsp;</p><p>Abstract: When function approximation is used, solving the Bellman optimality equation with stability guarantees has remained a major open problem in reinforcement learning for decades. The fundamental difficulty is that the Bellman operator may become an expansion in general, resulting in oscillating and even divergent behavior of popular algorithms like Q-learning. 
In this paper, we revisit the Bellman equation, and reformulate it into a novel primal-dual optimization problem using Nesterov&#39;s smoothing technique and the Legendre-Fenchel transformation. We then develop a new algorithm, called Smoothed Bellman Error Embedding, to solve this optimization problem where any differentiable function class may be used. We provide what we believe to be the first convergence guarantee for general nonlinear function approximation, and analyze the algorithm&#39;s sample complexity. Empirically, our algorithm compares favorably to state-of-the-art baselines in several benchmark control problems.</p>]]></body>  <author>ablinder6</author>  <status>1</status>  <created>1530202874</created>  <gmt_created>2018-06-28 16:21:14</gmt_created>  <changed>1530288390</changed>  <gmt_changed>2018-06-29 16:06:30</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech Presents 11 Papers at ICML 2018]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech Presents 11 Papers at ICML 2018]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-06-28T00:00:00-04:00</dateline>  <iso_dateline>2018-06-28T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-06-28 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[allison.blinder@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Allie Blinder</p><p>allison.blinder@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>607375</item>          <item>607413</item>      </media>  <hg_media>          <item>          <nid>607375</nid>          <type>image</type>          <title><![CDATA[ICML 2018]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[icml.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/icml.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/icml.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/icml.jpg?itok=NI95uydn]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1530202921</created>          <gmt_created>2018-06-28 16:22:01</gmt_created>          <changed>1530202921</changed>          <gmt_changed>2018-06-28 16:22:01</gmt_changed>      </item>          <item>          <nid>607413</nid>          <type>image</type>          <title><![CDATA[Explore GT@ICML 2018]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[GT_ICML2018_viz.gif]]></image_name>            <image_path><![CDATA[/sites/default/files/images/GT_ICML2018_viz.gif]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/GT_ICML2018_viz.gif]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/GT_ICML2018_viz.gif?itok=uuwKun_N]]></image_740>            <image_mime>image/gif</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1530286568</created>          <gmt_created>2018-06-29 15:36:08</gmt_created>          <changed>1530286568</changed>          <gmt_changed>2018-06-29 15:36:08</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="37041"><![CDATA[Computational Science and Engineering]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group 
id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="9167"><![CDATA[machine learning]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="6381"><![CDATA[Conferences]]></keyword>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="607130">  <title><![CDATA[Georgia Tech Presenting 13 Papers at Premier Computer Vision Conference CVPR]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A host of Georgia Tech students and faculty will travel to Salt Lake City, Utah, this week to attend the conference on <a href="http://cvpr2018.thecvf.com/">Computer Vision and Pattern Recognition</a> (CVPR) 2018.</p><p>CVPR is the premier annual computer vision event and comprises a main conference and several co-located workshops and short courses. As in years past, faculty and students in the <a href="http://ic.gatech.edu">School of Interactive Computing</a> (IC) and associated research units &ndash; the <a href="http://ml.gatech.edu">Center for Machine Learning</a>, the <a href="http://gvu.gatech.edu">GVU Center</a>, and the <a href="http://robotics.gatech.edu">Institute for Robotics and Intelligent Machines</a> &ndash; will participate at all levels of the conference.</p><p>&ldquo;CVPR is the top event in computer vision, and Georgia Tech has long had a substantial presence at the conference,&rdquo; said <strong>Irfan Essa</strong>, IC professor and director of the Center for Machine Learning. 
&ldquo;This year, we have a number of faculty and student researchers participating in the technical program and we&rsquo;re excited to share our research with the rest of the community.&rdquo;</p><p>More than 10 faculty members and many more student researchers will represent Georgia Tech at the five-day event, presenting 13 papers in oral, spotlight, poster, and demo sessions.</p><p>The conference will take place June 18-22, with the main technical program set to begin on June 19. Essa will give a workshop talk at the conference.</p><p>Below are titles and abstracts of Georgia Tech&rsquo;s research being presented this week. The visualization below shows all of Georgia Tech&rsquo;s research, as well as dates, times, and locations for the associated talks.</p><p><strong>Georgia Tech at CVPR 2018</strong></p><p><strong><a href="http://fredhohman.com/papers/18-interactive-cvpr.pdf">Interactive Classification for Deep Learning Interpretation</a> </strong>(Angel Cabrera, Fred Hohman, Jason Lin, Polo Chau)</p><p>ABSTRACT: We present an interactive system enabling users to manipulate images to explore the robustness and sensitivity of deep learning image classifiers. Using modern web technologies to run in-browser inference, users can remove image features using inpainting algorithms and obtain new classifications in real time, which allows them to ask a variety of &ldquo;what if&rdquo; questions by experimentally modifying images and seeing how the model reacts.
Our system allows users to compare and contrast what image regions humans and machine learning models use for classification, revealing a wide range of surprising results ranging from spectacular failures (e.g., a water bottle image becomes a concert when removing a person) to impressive resilience (e.g., a baseball player image remains correctly classified even without a glove or base).</p><p><strong><a href="http://openaccess.thecvf.com/content_cvpr_2018/papers/Kundu_3D-RCNN_Instance-Level_3D_CVPR_2018_paper.pdf">3D-RCNN: Instance-Level 3D Object Reconstruction via Render-and-Compare</a> </strong>(Abhijit Kundu, Yin Li, Jim Rehg)</p><p>ABSTRACT: We present a fast inverse-graphics framework for instance-level 3D scene understanding. We train a deep convolutional network that learns to map image regions to the full 3D shape and pose of all object instances in the image. Our method produces a compact 3D representation of the scene, which can be readily used for applications like autonomous driving. Many traditional 2D vision outputs, like instance segmentations and depth-maps, can be obtained by simply rendering our output 3D scene model. We exploit class-specific shape priors by learning a low dimensional shape-space from collections of CAD models. We present novel representations of shape and pose, that strive towards better 3D equivariance and generalization. In order to exploit rich supervisory signals in the form of 2D annotations like segmentation, we propose a differentiable Render-and-Compare loss that allows 3D shape and pose to be learned with 2D supervision. 
We evaluate our method on the challenging real-world datasets of Pascal3D+ and KITTI, where we achieve state-of-the-art results.</p><p><strong><a href="https://arxiv.org/pdf/1711.11543.pdf">Embodied Question Answering</a> </strong>(Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, Dhruv Batra)</p><p>ABSTRACT: We present a new AI task &ndash; Embodied Question Answering (EmbodiedQA) &ndash; where an agent is spawned at a random location in a 3D environment and asked a question (&lsquo;What color is the car?&rsquo;). In order to answer, the agent must first intelligently navigate to explore the environment, gather information through first-person (egocentric) vision, and then answer the question (&lsquo;orange&rsquo;). This challenging task requires a range of AI skills &ndash; active perception, language understanding, goal-driven navigation, commonsense reasoning, and grounding of language into actions. In this work, we develop the environments, end-to-end-trained reinforcement learning agents, and evaluation protocols for EmbodiedQA.</p><p><strong><a href="https://arxiv.org/pdf/1711.06330.pdf">Attend and Interact: Higher-Order Object Interactions for Video Understanding</a> </strong>(Chih-Yao Ma, Asim Kadav, Iain Melvin, Zsolt Kira, Ghassan AlRegib, Hans Peter Graf)</p><p>ABSTRACT: Human actions often involve complex interactions across several inter-related objects in the scene. However, existing approaches to fine-grained video understanding or visual relationship detection often rely on single object representation or pairwise object relationships. Furthermore, learning interactions across multiple objects in hundreds of frames for video is computationally infeasible and performance may suffer since a large combinatorial space has to be modeled. In this paper, we propose to efficiently learn higher-order interactions between arbitrary subgroups of objects for fine-grained video understanding. 
We demonstrate that modeling object interactions significantly improves accuracy for both action recognition and video captioning, while saving more than 3-times the computation over traditional pairwise relationships. The proposed method is validated on two large-scale datasets: Kinetics and ActivityNet Captions. Our SINet and SINet-Caption achieve state-of-the-art performances on both datasets even though the videos are sampled at a maximum of 1 FPS. To the best of our knowledge, this is the first work modeling object interactions on open domain large-scale video datasets, and we additionally model higher-order object interactions which improves the performance with low computational costs.</p><p><strong><a href="https://arxiv.org/pdf/1712.00377.pdf">Don&rsquo;t Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering</a> </strong>(Aishwarya Agrawal, Dhruv Batra, Devi Parikh, Aniruddha Kembhavi)</p><p>ABSTRACT: A number of studies have found that today&rsquo;s Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared toward the latter, we propose a new setting for VQA where for every question type, train and test sets have different prior distributions of answers. Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2, respectively). First, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting. Second, we propose a novel Grounded Visual Question Answering model (GVQA) that contains inductive biases and restrictions in the architecture specifically designed to prevent the model from &lsquo;cheating&rsquo; by primarily relying on priors in the training data.
Specifically, GVQA explicitly disentangles the recognition of visual concepts present in the image from the identification of plausible answer space for a given question, enabling the model to more robustly generalize across different distributions of answers. GVQA is built off an existing VQA model &ndash; Stacked Attention Networks (SAN). Our experiments demonstrate that GVQA significantly outperforms SAN on both VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more powerful VQA models such as Multimodal Compact Bilinear Pooling (MCB) in several cases. GVQA offers strengths complementary to SAN when trained and evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more transparent and interpretable than existing VQA models.</p><p><strong><a href="https://arxiv.org/pdf/1711.06368.pdf">Mobile Video Object Detection With Temporally-Aware Feature Maps</a> </strong>(Mason Liu, Menglong Zhu)</p><p>ABSTRACT: This paper introduces an online model for object detection in videos designed to run in real-time on low-powered mobile and embedded devices. Our approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interweaved recurrent-convolutional architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. Our network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing detection methods in video, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the Imagenet VID 2015 dataset. 
Our model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.</p><p><strong><a href="https://arxiv.org/pdf/1804.08071.pdf">Decoupled Networks</a> </strong>(Weiyang Liu, Zhen Liu, Zhiding Yu, Bo Dai, Rongmei Lin, Yisen Wang, Jim Rehg, Le Song)</p><p>ABSTRACT: Inner product-based convolution has been a central component of convolutional neural networks (CNNs) and the key to learning visual representations. Inspired by the observation that CNN-learned features are naturally decoupled with the norm of features corresponding to the intra-class variation and the angle corresponding to the semantic difference, we propose a generic decoupled learning framework which models the intra-class variation and semantic difference independently. Specifically, we first reparametrize the inner product to a decoupled form and then generalize it to the decoupled convolution operator which serves as the building block of our decoupled networks. We present several effective instances of the decoupled convolution operator. Each decoupled operator is well motivated and has an intuitive geometric interpretation. Based on these decoupled operators, we further propose to directly learn the operator from data. Extensive experiments show that such decoupled reparameterization renders significant performance gain with easier convergence and stronger robustness.</p><p><strong><a href="https://arxiv.org/pdf/1712.03342.pdf">Geometry-Aware Learning of Maps for Camera Localization</a> </strong>(Samarth Brahmbhatt, Jinwei Gu, Kihwan Kim, James Hays, Jan Kautz)</p><p>ABSTRACT: Maps are a key component in image-based camera localization and visual SLAM systems: they are used to establish geometric constraints between images, correct drift in relative pose estimation, and relocalize cameras after lost tracking. The exact definitions of maps, however, are often application-specific and hand-crafted for different scenarios (e.g. 3D landmarks, lines, planes, bags of visual words). 
We propose to represent maps as a deep neural net called MapNet, which enables learning a data-driven map representation. Unlike prior work on learning maps, MapNet exploits cheap and ubiquitous sensory inputs like visual odometry and GPS in addition to images and fuses them together for camera localization. Geometric constraints expressed by these inputs, which have traditionally been used in bundle adjustment or pose-graph optimization, are formulated as loss terms in MapNet training and also used during inference. In addition to directly improving localization accuracy, this allows us to update the MapNet (i.e., maps) in a self-supervised manner using additional unlabeled video sequences from the scene. We also propose a novel parameterization for camera rotation which is better suited for deep-learning based camera pose regression. Experimental results on both the indoor 7-Scenes dataset and the outdoor Oxford RobotCar dataset show significant performance improvement over prior work.</p><p><strong><a href="https://arxiv.org/pdf/1804.00092.pdf">Iterative Learning With Open-Set Noisy Labels</a> </strong>(Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia)</p><p>ABSTRACT: Large-scale datasets possessing clean label annotations are crucial for training Convolutional Neural Networks (CNNs). However, labeling large-scale data can be very costly and error-prone, and even high-quality datasets are likely to contain noisy (incorrect) labels. Existing works usually employ a closed-set assumption, whereby the samples associated with noisy labels possess a true class contained within the set of known classes in the training data. However, such an assumption is too restrictive for many applications, since samples associated with noisy labels might in fact possess a true class that is not present in the training data.
We refer to this more complex scenario as the open-set noisy label problem and show that it is nontrivial to make accurate predictions. To address this problem, we propose a novel iterative learning framework for training CNNs on datasets with open-set noisy labels. Our approach detects noisy labels and learns deep discriminative features in an iterative fashion. To benefit from the noisy label detection, we design a Siamese network to encourage clean labels and noisy labels to be dissimilar. A reweighting module is also applied to simultaneously emphasize the learning from clean labels and reduce the effect caused by noisy labels. Experiments on CIFAR-10, ImageNet and real-world noisy (web-search) datasets demonstrate that our proposed model can robustly train CNNs in the presence of a high proportion of open-set as well as closed-set noisy labels.</p><p><strong><a href="https://arxiv.org/pdf/1803.09845.pdf">Neural Baby Talk</a> </strong>(Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh)</p><p>ABSTRACT: We introduce a novel framework for image captioning that can produce natural language explicitly grounded in entities that object detectors find in the image. Our approach reconciles classical slot filling approaches (that are generally better grounded in images) with modern neural captioning approaches (that are generally more natural sounding and accurate). Our approach first generates a sentence &lsquo;template&rsquo; with slot locations explicitly tied to specific image regions. These slots are then filled in by visual concepts identified in the regions by object detectors. The entire architecture (sentence template generation and slot filling with object detectors) is end-to-end differentiable. We verify the effectiveness of our proposed model on different image captioning tasks. On standard image captioning and novel object captioning, our model reaches state-of-the-art on both COCO and Flickr30k datasets.
We also demonstrate that our model has unique advantages when the train and test distributions of scene compositions &ndash; and hence language priors of associated captions &ndash; are different.</p><p><strong><a href="https://arxiv.org/pdf/1706.02823.pdf">TextureGAN: Controlling Deep Image Synthesis With Texture Patches</a> </strong>(Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu, Chen Fang, Fisher Yu, James Hays)</p><p>ABSTRACT: In this paper, we investigate deep image synthesis guided by sketch, color, and texture. Previous image synthesis methods can be controlled by sketch and color strokes but we are the first to examine texture control. We allow a user to place a texture patch on a sketch at arbitrary locations and scales to control the desired output texture. Our generative network learns to synthesize objects consistent with these texture suggestions. To achieve this, we develop a local texture loss in addition to adversarial and content loss to train the generative network. We conduct experiments using sketches generated from real images and textures sampled from a separate texture database and results show that our proposed algorithm is able to generate plausible images that are faithful to user controls. Ablation studies show that our proposed pipeline can generate more realistic images than adapting existing methods directly.</p><p><strong><a href="https://arxiv.org/pdf/1801.02753.pdf">SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis</a></strong> (Wengling Chen, James Hays)</p><p>ABSTRACT: Synthesizing realistic images from human drawn sketches is a challenging problem in computer graphics and vision. Existing approaches either need exact edge maps, or rely on retrieval of existing photographs. In this work, we propose a novel Generative Adversarial Network (GAN) approach that synthesizes plausible images from 50 categories including motorcycles, horses and couches.
We demonstrate a data augmentation technique for sketches which is fully automatic, and we show that the augmented data is helpful to our task. We introduce a new network building block suitable for both the generator and discriminator which improves the information flow by injecting the input image at multiple scales. Compared to state-of-the-art image translation methods, our approach generates more realistic images and achieves significantly higher Inception Scores.</p><p><strong><a href="https://arxiv.org/pdf/1711.06798.pdf">MorphNet: Fast &amp; Simple Resource-Constrained Structure Learning of Deep Networks</a></strong> (Ariel Gordon, Elad Eban, Ofir Nachum, Bo Chen, Hao Wu, Tien-Ju Yang, Edward Choi)</p><p>ABSTRACT: We present MorphNet, an approach to automate the design of neural network structures. MorphNet iteratively shrinks and expands a network, shrinking via a resource-weighted sparsifying regularizer on activations and expanding via a uniform multiplicative factor on all layers. In contrast to previous approaches, our method is scalable to large networks, adaptable to specific resource constraints (e.g. the number of floating-point operations per inference), and capable of increasing the network&rsquo;s performance.
When applied to standard network architectures on a wide variety of datasets, our approach discovers novel structures in each domain, obtaining higher performance while respecting the resource constraint.</p><p><strong>Title photo credit: Steve Greenwood</strong></p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1529335011</created>  <gmt_created>2018-06-18 15:16:51</gmt_created>  <changed>1529358713</changed>  <gmt_changed>2018-06-18 21:51:53</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[More than 10 faculty members and many more students will be present at the five-day event in Salt Lake City.]]></teaser>  <type>news</type>  <sentence><![CDATA[More than 10 faculty members and many more students will be present at the five-day event in Salt Lake City.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-06-18T00:00:00-04:00</dateline>  <iso_dateline>2018-06-18T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-06-18 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>607128</item>      </media>  <hg_media>          <item>          <nid>607128</nid>          <type>image</type>          <title><![CDATA[CVPR logo]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Cityscapes SunsetSkyline_Steve_Greenwood_crop2_text.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Cityscapes%20SunsetSkyline_Steve_Greenwood_crop2_text.png]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Cityscapes%20SunsetSkyline_Steve_Greenwood_crop2_text.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Cityscapes%2520SunsetSkyline_Steve_Greenwood_crop2_text.png?itok=JV2JV488]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Georgia Tech @ CVPR 2018]]></image_alt>                    <created>1529333993</created>          <gmt_created>2018-06-18 14:59:53</gmt_created>          <changed>1529333993</changed>          <gmt_changed>2018-06-18 14:59:53</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://gvu.gatech.edu/georgia-tech-cvpr-2018]]></url>        <title><![CDATA[Georgia Tech at CVPR 2018]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="607017">  <title><![CDATA[Second-Year Stefamikha Suwisar Takes Top Prize in IC T-Shirt Design Contest]]></title>  <uid>33939</uid>  <body><![CDATA[<p>After a close vote via social media and a web-based survey, the School of Interactive Computing (IC) has identified its t-shirt design 
contest winner.</p><p>Second-year Industrial Design major <strong>Stefamikha Suwisar</strong> beat out three other finalists in a closely contested vote for the victory. Her design, which features a lightbulb-and-brain combination at the center of a number of examples of computer science in use, illustrates her idea of the intersection between human and machine.</p><p>&ldquo;My design illustrates how ideas emerge from the brain to constantly fit the research opportunities within IC,&rdquo; she said. &ldquo;It depicts the eight threads for the future of computer science education in the United States: devices, info internetworks, intelligence, media, modelling and simulation, people, systems and architecture, and theory.&rdquo;</p><p>Suwisar said that she has always been interested in art and science. Her major, Industrial Design, combines both. The field focuses not only on the appearance of a product, but also on how it functions and is manufactured, and ultimately on the value and experience it provides for users.</p><p>&ldquo;In the future, I aspire to help people and ultimately make life better through my designs,&rdquo; she said.</p><p>The other finalists for the contest were computational media undergraduate student John Britti, computer science undergraduate student Brian Cochran, and GVU Center research technologist Tim Trent.</p><p>The School appreciates all the fantastic submissions and the wonderful voter turnout.
Stay tuned for information on how to procure a t-shirt after the finished product has been produced.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1528906805</created>  <gmt_created>2018-06-13 16:20:05</gmt_created>  <changed>1528906805</changed>  <gmt_changed>2018-06-13 16:20:05</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Suwisar's design focused on the interaction between human and machine, and depicted eight threads of CS education.]]></teaser>  <type>news</type>  <sentence><![CDATA[Suwisar's design focused on the interaction between human and machine, and depicted eight threads of CS education.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-06-13T00:00:00-04:00</dateline>  <iso_dateline>2018-06-13T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-06-13 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>607016</item>      </media>  <hg_media>          <item>          <nid>607016</nid>          <type>image</type>          <title><![CDATA[IC T-Shirt Contest Winner]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2018-06-13 at 12.13.46 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202018-06-13%20at%2012.13.46%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202018-06-13%20at%2012.13.46%20PM.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202018-06-13%2520at%252012.13.46%2520PM.png?itok=I-LAYS7-]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1528906490</created>          <gmt_created>2018-06-13 16:14:50</gmt_created>          <changed>1528906490</changed>          <gmt_changed>2018-06-13 16:14:50</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="178289"><![CDATA[stefamikha suwisar]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="606946">  <title><![CDATA[Marissa Gonzales Using Own Educational Experience as Inspiration for Research]]></title>  <uid>33939</uid>  <body><![CDATA[<p><strong>Marissa Gonzales</strong>&rsquo; educational experience is not an uncommon one.</p><p>The School of Interactive Computing Ph.D. student grew up in California, where she attended an exclusive high school populated, more or less, by students of privilege who had the time and resources to engage with educators, devote themselves to their studies, and ultimately come out of high school with the skills necessary for academic success.</p><p>But Gonzales wasn&rsquo;t like most of her classmates. 
It took a lot of effort on both her and her mom&rsquo;s part to make her time in high school a success.</p><p>Each day, Gonzales woke up at 4:30 a.m. so that her mom could take her to school in time to make it to work early in the morning. She made it to the high school about an hour and a half early every day and then, after school, went straight to her job at a t-shirt printing shop. There, Gonzales worked evenings to earn money to help supplement her family&rsquo;s income, a practice she continued when she attended the University of California, Irvine. She paid her own student loans and sent money home to help her family make ends meet.</p><p>School, she said, was like an obstacle.</p><p>&ldquo;As much as I loved it, it was limiting,&rdquo; she said. &ldquo;I didn&rsquo;t have a computer. It became a restriction. There was no way for me to catch up because I was already so far behind on access.&rdquo;</p><p>Gonzales&rsquo; story is shared by millions of other Americans. For most, that&rsquo;s where the story ends &ndash; an educational deficit never closed because of a lack of access and resources. For her, though, that experience has served as inspiration for her research into the benefits and pitfalls of online educational environments. Gonzales believes that online learning has potential to reach students who, like herself, had limited access to one of the fundamental components of education: time. Online learning has opened up opportunities for students to learn asynchronously from one another, allowing them to participate in courses as their schedules allow.</p><p>Degrees like the Georgia Tech <a href="http://omscs.gatech.edu/">Online Master of Science in Computer Science</a> (OMSCS) program, which has made enormous breakthroughs and turned online learning on its head, aim to broaden access for these types of students. Many in Gonzales&rsquo; position are unable to achieve similar results in traditional education.
In theory, by making quality education available online, online learning could provide the same opportunity to those underserved communities.</p><p>But what are the properties of flourishing online classrooms? Why do they work? Who do they work for? And, ultimately, how can the academic community design environments that provide access to quality higher education for all?</p><h3><strong>Marissa, meet Jill</strong></h3><p>Gonzales came to Georgia Tech in 2016 to pursue her Ph.D. after graduating from Irvine with a degree in informatics, concentrating on human-computer interaction.</p><p>When she arrived, she approached Professor <strong>Ashok Goel</strong>, who had just gained <a href="https://www.chronicle.com/article/When-the-Teaching-Assistant-Is/238114">international attention for Jill Watson</a>, an artificially intelligent teaching assistant that answered students&rsquo; questions in the online section of his Knowledge-Based AI class.</p><p>&ldquo;I literally just took (Goel) aside and said, &lsquo;Look, I&rsquo;m interested in your work,&rsquo;&rdquo; Gonzales said. &ldquo;&lsquo;I really like this concept going with the virtual TA, and I&rsquo;d like to help.&rsquo;&rdquo;</p><p>Initially, she saw the opportunity as one to evaluate the system. How did it affect the students? Was it helping them become more engaged or helping overall grades? As she began to dig into the project, though, she realized that there was an opportunity and a need for more.</p><p>&ldquo;When we learn, there&rsquo;s a lot of factors that affect how we learn or affect our feeling about learning, about the classroom, the teacher, the material,&rdquo; Gonzales explained. &ldquo;How much do we value the experience and how much does that value impact our overall performance? Do we feel like we&rsquo;re getting something out of it?
Are we learning to use specific strategies for academic improvement and reflecting on our performance?</p><p>&ldquo;These are all things that go on in residential classrooms. What about online classrooms, where the sense of a learning community is perhaps obscured, and where students aren&rsquo;t just working with the teaching staff, but with intelligent agents?&rdquo;</p><h3><strong>Understanding Design Implications for Online Systems</strong></h3><p>As she dug, Gonzales concluded that she needed to evaluate more than just how AIs could ease the load on teaching staff, making them more available to provide additional in-depth assistance to students online. Instead, she needed to take a more holistic view about the students&rsquo; online experience.</p><p>Since she began, Gonzales has performed evaluations after each semester for both the residential and online sections of the <a href="https://www.omscs.gatech.edu/cs-7637-knowledge-based-artificial-intelligence-cognitive-systems">Knowledge-Based AI class</a> in which Jill Watson and other AIs are used.</p><p>The goal is to gain a more complete understanding of the online educational experience and how the design and implementation of these AI assistants, among other design decisions in online learning environments, can help or hurt the process of offering quality education online. Online learning, Gonzales said, isn&rsquo;t going anywhere anytime soon. As OMSCS has shown, a quality education can be achieved beyond just a residential program. But it is important that researchers get in front of potential future challenges as online opportunities become more common.</p><p>&ldquo;We have to get in front of it and understand what works and what doesn&rsquo;t work before the demand becomes too great to keep up,&rdquo; she said. 
&ldquo;Ultimately, online learning should broaden access to more populations, but it&rsquo;s important that we design and implement programs that provide a complete educational experience.&rdquo;</p><p>In her two years at Georgia Tech, Gonzales has been <a href="https://www.ic.gatech.edu/news/591828/ic-phd-student-marissa-gonzales-receives-goizueta-foundation-fellowship">awarded the Goizueta Foundation Fellowship</a>, which is designed to help attract and promote doctoral students of Hispanic/Latino origin, and the <a href="https://www.ic.gatech.edu/news/601381/georgia-tech-focus-intel-diversity-fellowship-helping-ics-marissa-gonzales-study-online">Intel Diversity Fellowship</a> from the Georgia Tech Focus Program.</p><p>She aims to pick her dissertation topic in the next few months and is on track to complete her degree in 2021.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1528819190</created>  <gmt_created>2018-06-12 15:59:50</gmt_created>  <changed>1528819190</changed>  <gmt_changed>2018-06-12 15:59:50</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A challenging educational experience as a teen has helped inform and drive Gonzales' current research in online educational environments.]]></teaser>  <type>news</type>  <sentence><![CDATA[A challenging educational experience as a teen has helped inform and drive Gonzales' current research in online educational environments.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-06-12T00:00:00-04:00</dateline>  <iso_dateline>2018-06-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-06-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  
<boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>606945</item>      </media>  <hg_media>          <item>          <nid>606945</nid>          <type>image</type>          <title><![CDATA[Marissa Gonzales]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Marissa Gonzales rotator.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Marissa%20Gonzales%20rotator.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Marissa%20Gonzales%20rotator.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Marissa%2520Gonzales%2520rotator.jpg?itok=zInV4ewI]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Marissa Gonzales]]></image_alt>                    <created>1528818866</created>          <gmt_created>2018-06-12 15:54:26</gmt_created>          <changed>1528818866</changed>          <gmt_changed>2018-06-12 15:54:26</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://omscs.gatech.edu]]></url>        <title><![CDATA[Georgia Tech Online Master of Science in Computer Science]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="66244"><![CDATA[C21U]]></group>          <group id="1305"><![CDATA[Georgia Tech Academic Advising Network (GTAAN)]]></group>          <group id="431631"><![CDATA[OMS]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="176882"><![CDATA[marissa gonzales]]></keyword>          <keyword tid="112431"><![CDATA[ashok goel]]></keyword>          <keyword 
tid="169183"><![CDATA[Jill Watson]]></keyword>          <keyword tid="121521"><![CDATA[OMSCS]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="606473">  <title><![CDATA[Colleagues Celebrate 25 Years, Bid Farewell to Departing Professor Mark Guzdial]]></title>  <uid>33939</uid>  <body><![CDATA[<p>After 25 years of service to Georgia Tech, longtime College of Computing Professor <strong>Mark Guzdial</strong> is heading back to his home state to teach and continue his research at the University of Michigan.</p><p>Guzdial, along with his wife and College of Computing research scientist <strong>Barbara Ericson</strong>, leave lasting academic impacts on the College for their revolutionary research into computer science education, developing innovative technology to improve learning and leading a charge in examining and increasing equity &ndash; specifically with regard to women and minorities &ndash; in computing.</p><p>During his time at Georgia Tech, Guzdial has led such initiatives as Georgia Computes, a National Science Foundation Broadening Participation in Computing alliance focused on increasing the number and diversity of computing students in the state of Georgia. His work has reached beyond Georgia, leading a national conversation in equity and education at conferences such as the ACM Special Interest Group on Computer Science Education and International Computing Education Research conference, among others.</p><p>Ericson, who recently completed her Ph.D. in Human Centered Computing, was the Director for Computing Outreach in the College. 
Her work, in conjunction with the national CSforAll initiative established by former President Barack Obama, improved the quality and quantity of secondary computing teachers in the state.</p><p>Guzdial and Ericson were jointly awarded the Karl V. Karlstrom Outstanding Educator Award in 2010, and Guzdial received the IEEE Computer Science and Engineering Undergraduate Teaching Award in 2012 for contributions to computing education. Guzdial became a Fellow of the Association for Computing Machinery in 2014. Ericson also won the 2012 A. Richard Newton Educator Award for efforts to attract more women to computing.</p><p>&ldquo;Mark and Barb&rsquo;s work has helped position the College of Computing as a thought leader in computer science education,&rdquo; John P. Imlay Dean of Computing Zvi Galil said. &ldquo;We are extremely appreciative for their service and will miss them greatly. I wish them great success at the University of Michigan.&rdquo;</p><h2><a href="https://www.flickr.com/photos/ccgatech/albums/72157694064803572/with/42327563181/"><strong>PHOTOS:&nbsp; Click here for photos from Guzdial&rsquo;s 25 Years of Service Reception</strong></a></h2><p>Georgia Tech colleagues celebrated Guzdial&rsquo;s 25 years of service to the Institute at a reception earlier this month. Many praised his impact on Georgia Tech and shared stories of long-lasting friendships. Professor <strong>Amy Bruckman</strong> said Guzdial was &ldquo;the reason (she&rsquo;s) here&rdquo; at Georgia Tech.</p><p>Professor <strong>John Stasko</strong> shared how he, Guzdial, and Professor <strong>Gregory Abowd</strong> have had&nbsp;a standing Saturday breakfast and how much the camaraderie has meant to him.</p><p>&ldquo;Probably most of you all know about Mark&rsquo;s contributions in CS Ed and, in many ways, he gives us the presence in that area,&rdquo; Stasko said. &ldquo;There are things beyond that, though. Gregory, Mark and I have been having breakfast together on Saturdays for years and years. 
Kind of like a bunch of old men getting together &ndash; well, I guess now it&rsquo;s not really &lsquo;like&rsquo; old men.</p><p>&ldquo;But that&rsquo;s been great. We&rsquo;ll miss him certainly for all of his academic contributions, but many of us miss him as a close friend, too.&rdquo;</p><p>Abowd echoed Stasko&rsquo;s words, calling Guzdial a &ldquo;brother&rdquo; and lamenting the fact that he, a Notre Dame graduate, now has to like something about the University of Michigan.</p><p>&ldquo;I&rsquo;m really angry at Mark and Barbara because I grew up in Detroit and went to Notre Dame, and all my family went to Notre Dame,&rdquo; he joked. &ldquo;I grew up despising everything to do with the University of Michigan. And I&rsquo;m so mad that now I have to love some piece of that university. But I think I&rsquo;ll get over it.&rdquo;</p><p>College of Computing Professor Emeritus <strong>Jim Foley</strong>, a Michigan graduate, said he was happy that his colleagues could bring their &ldquo;great spirits&rdquo; to his alma mater.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1527188449</created>  <gmt_created>2018-05-24 19:00:49</gmt_created>  <changed>1527188449</changed>  <gmt_changed>2018-05-24 19:00:49</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[After 25 years of service to Georgia Tech, longtime College of Computing Professor Mark Guzdial is heading back to his home state to teach and continue his research at the University of Michigan.]]></teaser>  <type>news</type>  <sentence><![CDATA[After 25 years of service to Georgia Tech, longtime College of Computing Professor Mark Guzdial is heading back to his home state to teach and continue his research at the University of Michigan.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-05-24T00:00:00-04:00</dateline>  <iso_dateline>2018-05-24T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-05-24 00:00:00</gmt_dateline>  <subtitle>    
<![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>606472</item>      </media>  <hg_media>          <item>          <nid>606472</nid>          <type>image</type>          <title><![CDATA[Mark Guzdial Farewell]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[_MG_1360.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/_MG_1360.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/_MG_1360.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/_MG_1360.jpg?itok=lhaLO7F7]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Mark Guzdial and Barbara Ericson at Guzdial's 25 Years of Service Reception]]></image_alt>                    <created>1527188412</created>          <gmt_created>2018-05-24 19:00:12</gmt_created>          <changed>1527188412</changed>          <gmt_changed>2018-05-24 19:00:12</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="10469"><![CDATA[Mark Guzdial]]></keyword>          <keyword tid="141461"><![CDATA[Barbara Ericson; Director of Computing Outreach]]></keyword>          <keyword 
tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="11002"><![CDATA[Gregory Abowd]]></keyword>          <keyword tid="8472"><![CDATA[amy bruckman]]></keyword>          <keyword tid="11632"><![CDATA[john stasko]]></keyword>          <keyword tid="78531"><![CDATA[Jim Foley]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="606097">  <title><![CDATA[Wearable Ring, Wristband Allow Users to Control Smart Tech With Hand Gestures]]></title>  <uid>33939</uid>  <body><![CDATA[<p>New technology created by a team of Georgia Tech researchers could make controlling text or other mobile applications as simple as &ldquo;1-2-3.&rdquo;</p><p>Using acoustic chirps emitted from a ring and received by a wristband, like a smartwatch, the system is able to recognize 22 different micro finger gestures that could be programmed to various commands &mdash; including a T9 keyboard interface, a set of numbers, or application commands like playing or stopping music.</p><p>A <a href="https://youtu.be/a-R45u5sqFc">video demonstration of the technology</a> shows how, at a high rate of accuracy, the system can recognize hand poses using the 12 bones of the fingers and digits &lsquo;1&rsquo; through &lsquo;10&rsquo; in American Sign Language (ASL).</p><p>&ldquo;Some interaction is not socially appropriate,&rdquo; said <strong>Cheng Zhang</strong>, the Ph.D. student in the School of Interactive Computing who led the effort. &ldquo;A wearable is always on you, so you should have the ability to interact through that wearable at any time in an appropriate and discreet fashion. 
When we&rsquo;re talking, I can still make some quick reply that doesn&rsquo;t interrupt our interaction.&rdquo;</p><p>Since one of the&nbsp;goals&nbsp;was to enter digits using only one hand, the team decided to use&nbsp;ASL, which already has well-defined hand postures for each digit. In this manner, the user might select options from a numbered list, call a phone number, or do simple calculations.</p><p>The system is called <em>FingerPing</em>. Unlike other technology that requires the use of a glove or a more obtrusive wearable, this technique is limited to just a thumb ring and a watch. The ring produces acoustic chirps that travel through the hand and are picked up by receivers on the watch. Sound waves travel through structures, including the hand, in specific patterns that change with the way the hand is posed. By varying those poses, the wearer can trigger up to 22 pre-programmed commands.</p><p>The gestures are small and non-invasive, as simple as tapping the tip of a finger or posing your hand in classic &ldquo;1,&rdquo; &ldquo;2,&rdquo; and &ldquo;3&rdquo; gestures.</p><p>&ldquo;The receiver recognizes these tiny differences,&rdquo; Zhang said. &ldquo;The injected sound from the thumb will travel at different paths inside the body with different hand postures. For instance, when your hand is open there is only one direct path from the thumb to the wrist. Any time you do a gesture where you close a loop, the sound will take a different path and that will form a unique signature.&rdquo;</p><p>Zhang said that the research is a proof of concept for a technique that could be expanded and improved upon in the future.</p><p>The research was presented last month at the 2018 ACM Conference on Human Factors in Computing Systems (CHI). 
The paper is titled FingerPing: Recognizing Fine-grained Hand Poses Using Active Acoustic On-body Sensing (Cheng Zhang, <strong>Qiuyue Xue</strong>, <strong>Anandghan Waghmare</strong>, <strong>Ruichen Meng</strong>, <strong>Sumeet Jain</strong>, <strong>Yizeng Han</strong>, <strong>Xinyu Li</strong>, <strong>Kenneth Cunefare</strong>, <strong>Thomas Ploetz</strong>, <strong>Thad Starner</strong>, <strong>Omer Inan</strong>, <strong>Gregory Abowd</strong>).</p><p>Researchers on this team, including Zhang, have worked on similar unique gesture techniques in the past. Zhang graduated from Georgia Tech in May and will join the Information Science Department at Cornell University as a tenure-track assistant professor.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1526058834</created>  <gmt_created>2018-05-11 17:13:54</gmt_created>  <changed>1527169649</changed>  <gmt_changed>2018-05-24 13:47:29</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Using acoustic chirps emitted from a ring and received by a wristband, like a smartwatch, the system is able to recognize 22 different micro finger gestures that could be programmed to various commands.]]></teaser>  <type>news</type>  <sentence><![CDATA[Using acoustic chirps emitted from a ring and received by a wristband, like a smartwatch, the system is able to recognize 22 different micro finger gestures that could be programmed to various commands.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-05-11T00:00:00-04:00</dateline>  <iso_dateline>2018-05-11T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-05-11 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  
<boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>606096</item>      </media>  <hg_media>          <item>          <nid>606096</nid>          <type>image</type>          <title><![CDATA[FingerPing 1]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2018-05-11 at 1.08.10 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202018-05-11%20at%201.08.10%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202018-05-11%20at%201.08.10%20PM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202018-05-11%2520at%25201.08.10%2520PM.png?itok=BXXGPWfV]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[FingerPing Ring and Wristband]]></image_alt>                    <created>1526058685</created>          <gmt_created>2018-05-11 17:11:25</gmt_created>          <changed>1526058685</changed>          <gmt_changed>2018-05-11 17:11:25</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.news.gatech.edu/2017/11/29/wearable-computing-ring-allows-users-write-words-and-numbers-thumb]]></url>        <title><![CDATA[Using a Ring to Draw and Write]]></title>      </link>          <link>        <url><![CDATA[http://www.news.gatech.edu/2017/11/29/wearable-computing-ring-allows-users-write-words-and-numbers-thumb]]></url>        <title><![CDATA[Controlling Smartwatch with Breaths and Swipes]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      
</groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="10353"><![CDATA[wearable computing]]></keyword>          <keyword tid="1944"><![CDATA[Thad Starner]]></keyword>          <keyword tid="177958"><![CDATA[cheng zhang]]></keyword>          <keyword tid="11002"><![CDATA[Gregory Abowd]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71891"><![CDATA[Health and Medicine]]></topic>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="606296">  <title><![CDATA[VOTE NOW: IC Selects Four Finalists in T-Shirt Design Contest]]></title>  <uid>33939</uid>  <body><![CDATA[<p>&ldquo;Interactive computing&rdquo; means different things to different people.</p><p>For some, it may mean a person&rsquo;s physical interaction with computing through tangible technological devices. For others, it might mean a school &ndash; the School of Interactive Computing, for example &ndash; filled with a diverse set of research. Still others might think of the progression of computing from classic personal computers to those pushing boundaries through machine learning and artificial intelligence.</p><p>A few weeks ago, we asked students, faculty, staff, and friends of the School of Interactive Computing to come up with concepts for a t-shirt design that demonstrate what those words mean to them.</p><p>After sifting through all our submissions &ndash; and we received a number of great ones &ndash; we have narrowed the contest down to four finalists. 
Check out the finalists below and be sure to vote on&nbsp;Facebook&nbsp;or <a href="https://www.surveymonkey.com/r/7VXLTKM">this survey</a>.</p><h3><strong>John Britti, Computational Media undergraduate student</strong></h3><p>Britti provided a futuristic look at human interaction with a computing interface, utilizing the classic Buzz Gold color. Finalist selectors liked his design for its universal depiction of the intersection between humans and computers.</p><h3><strong>Brian Cochran, Computer Science undergraduate student</strong></h3><p>Cochran submitted a selection of computing characters that could be the basis of a series of t-shirt designs now and in the future. Finalist selectors liked his design because of its fun interpretation of computing and that it provides what every organization or event needs &ndash; a mascot.</p><h3><strong>Stefamikha Suwisar, Industrial Design undergraduate student</strong></h3><p>Suwisar&rsquo;s design depicts the diverse research that comes from the many human sources within the School of Interactive Computing. Finalist selectors liked her design because it captured in an image the breadth of computing research that comes out of the School.</p><h3><strong>Tim Trent, GVU Center research technologist</strong></h3><p>Trent provided an initial concept for a series of t-shirts that highlight the many IC research areas in a nostalgic way. 
Finalist selectors liked his submission because, while only an initial concept, it provides a fun theme to depict the many &ldquo;flavors&rdquo; of interactive computing.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1526582486</created>  <gmt_created>2018-05-17 18:41:26</gmt_created>  <changed>1526582486</changed>  <gmt_changed>2018-05-17 18:41:26</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[After selecting four finalists to the School of Interactive Computing t-shirt design contest, the vote is now up to you.]]></teaser>  <type>news</type>  <sentence><![CDATA[After selecting four finalists to the School of Interactive Computing t-shirt design contest, the vote is now up to you.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-05-17T00:00:00-04:00</dateline>  <iso_dateline>2018-05-17T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-05-17 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>606289</item>      </media>  <hg_media>          <item>          <nid>606289</nid>          <type>image</type>          <title><![CDATA[IC T-Shirt Design Contest Finalists]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2018-05-17 at 1.52.54 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202018-05-17%20at%201.52.54%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202018-05-17%20at%201.52.54%20PM.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202018-05-17%2520at%25201.52.54%2520PM.png?itok=8m--q_yT]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[IC T-Shirt Design Contest Finalists]]></image_alt>                    <created>1526579963</created>          <gmt_created>2018-05-17 17:59:23</gmt_created>          <changed>1526579963</changed>          <gmt_changed>2018-05-17 17:59:23</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="178030"><![CDATA[school of interactive computing; t-shirt design contest; college of computing]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="605630">  <title><![CDATA[IC Researchers Highlight Design Implications as Venezuelans Turn to Facebook for Barter, Exchange]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Consider a scenario in which economic turmoil and hyperinflation have made it nearly impossible to purchase many of life&rsquo;s basic necessities. 
There are food and medicine shortages, and scammers purchase what is available in bulk in an effort to manage the flow and pricing of supplies at the expense of other citizens.</p><p>How, then, might honest citizens go about navigating the challenging circumstances to procure the items they need to survive?</p><p>It&rsquo;s a familiar environment to Venezuelan citizens who, since an economic crisis gripped the country in 2014, have faced such barriers in their daily lives. Out of necessity, many have turned to online solidarity economies like Facebook groups that are dedicated to a fairer system of barter and exchange.</p><p>While these groups represent attempts to mitigate some of the more turbulent aspects of the Venezuelan economy, they come with their own set of challenges as well. In a paper being presented at the ACM CHI Conference on Human Factors in Computing Systems (CHI), Georgia Tech researchers have examined the development of these online social ecosystems. They offer ideas for how the design structures of Facebook&rsquo;s groups can better support such solidarity economies.</p><h3>Why turn to online solidarity economies?</h3><p>To understand how social media sites like Facebook can more effectively implement their group design, it&rsquo;s important to understand how and why these groups came about in the first place, said <strong>Hayley Evans</strong>, School of Interactive Computing (IC) Ph.D. 
student and first author on the paper (<a href="http://delivery.acm.org/10.1145/3180000/3173802/paper228.pdf?ip=128.61.126.162&amp;id=3173802&amp;acc=OPEN&amp;key=A79D83B43E50B5B8%2E5E2401E94B5C98E0%2E4D4702B0C3E38B35%2E6D218144511F3437&amp;__acm__=1524839288_61a897260aea37fdb7b733d1a782b1a9"><em>Facebook in Venezuela: Understanding Solidarity Economies in Low-Trust Environments</em></a>).</p><p>In 2014, the price of crude oil, Venezuela&rsquo;s main export, collapsed, setting the stage for an economic and political crisis that has continued to deteriorate in the succeeding years. The country&rsquo;s GDP has declined at an average of 6.83 percent over the past five years, and there have been food shortages, failing hospitals, high rates of inflation, calls for humanitarian aid, and political opposition both domestically and internationally.</p><p>Such instability paved the way for the rise of &ldquo;bachaqueros,&rdquo; individuals who place their own self-interests over those of the group by charging or demanding barters at high prices. Often, these individuals will buy goods in bulk with the intention of controlling the supply and price. With low levels of trust in the traditional exchange of goods, as well as high scarcity, many Venezuelans migrated to online economies most commonly taking the form of Facebook groups.</p><p>Evans and her co-authors examined three such groups, all on Facebook &ndash; a large one with over 45,000 members, a mid-sized one, and one with just over 1,000 members. Other groups, some with as few as 10-20 members, also existed, likely made up of closer family and friends and individuals searching for a specific item, Evans said.</p><p>Although group administrators seek out fairness in moderation and price-setting, in many ways they are still operating as a self-regulated free-for-all.</p><p>&ldquo;The government stopped setting the prices,&rdquo; Evans said. 
&ldquo;So, they kind of triangulate &ndash; they remember what the government set prices at in the past, what they&rsquo;ve seen the price at online, and what they feel is fair.</p><p>&ldquo;It can be as ambiguous as it sounds. &lsquo;Fair&rsquo; is highly dependent on the person and what they believe.&rdquo;</p><p>For example, one person might consider a price fair, even at double the usual cost, if the item is an absolute necessity. A mother whose son has asthma, Evans said, would be thankful to find asthma medication at only double the price. Someone else in a different situation might not.</p><h3>Navigating design flaws</h3><p>But with such ambiguity comes a stiff challenge in moderating these economies. Often, when an individual posts an item at a price the group deems unfair, they can lose credibility and, with it, the ability to barter in these groups. Attempts at regulation &ndash; like a three-strikes-and-out policy &ndash; have been made by at least one group administrator.</p><p>But those are difficult to enforce because Facebook&rsquo;s design doesn&rsquo;t offer any tracking method.</p><p>&ldquo;We start to see that there&rsquo;s flaws in the infrastructure and there&rsquo;s flaws in Facebook,&rdquo; Evans said. &ldquo;So, this group, which set out to create a more stable community, becomes like every other group that is too big, difficult to manage, and doesn&rsquo;t have the right tools.&rdquo;</p><p>Evans and her team highlight some key design implications, based on <a href="http://mysite.du.edu/~lavita/edpx_3770_13s/_docs/kollock_design_%20princ_for_online_comm%20copy.pdf">Peter Kollock&rsquo;s design principles for online communities</a>. Interestingly, Evans said, while Venezuelan bartering groups violate all of them to some degree, they still work due to necessity. 
Drawing on Kollock&rsquo;s principles, though, they were able to propose four design recommendations:</p><ul><li>buyer/seller reviews</li><li>an equitable marketplace indicator</li><li>prominent rule placement</li><li>tools for tracking offenses and implementing sanctions</li></ul><p>&ldquo;These design affordances have worked well on other platforms like eBay or Amazon,&rdquo; Evans said.</p><p>Evans added that one of the most interesting takeaways was the appropriation of the platform. While Facebook was designed for college students in 2004, it has become a vital tool for Venezuelans in an unpredictable economic crisis.</p><p>&ldquo;This ingenuity merits attention,&rdquo; Evans said. &ldquo;Furthermore, we hope that there will be some incentive for Facebook to review this use, be it for business or humanitarian reasons.&rdquo;</p><p>The paper was co-authored by Evans, IC Ph.D. student <strong>Marisol Wong-Villacres</strong>, IC Ph.D. student <strong>Daniel Castro</strong>, former IC Assistant Professor <strong>Eric Gilbert</strong>, IC research scientist <a href="https://www.ic.gatech.edu/people/7087/rosa-arriagas"><strong>Rosa Arriaga</strong></a>, IC Ph.D. student <a href="https://www.ic.gatech.edu/content/michaelanne-dye"><strong>Michaelanne Dye</strong></a>, and IC Professor <a href="https://www.ic.gatech.edu/people/7127/amy-bruckmans"><strong>Amy Bruckman</strong></a>. 
It was presented this week at CHI in Montreal, Canada.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1524840039</created>  <gmt_created>2018-04-27 14:40:39</gmt_created>  <changed>1524840039</changed>  <gmt_changed>2018-04-27 14:40:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[In a paper being presented at CHI, Georgia Tech researchers have examined the development of these online social ecosystems.]]></teaser>  <type>news</type>  <sentence><![CDATA[In a paper being presented at CHI, Georgia Tech researchers have examined the development of these online social ecosystems.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-04-27T00:00:00-04:00</dateline>  <iso_dateline>2018-04-27T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-04-27 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>605629</item>      </media>  <hg_media>          <item>          <nid>605629</nid>          <type>image</type>          <title><![CDATA[Facebook in Venezuela]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Facebook3.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Facebook3.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Facebook3.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Facebook3.jpg?itok=P2HEBfib]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Facebook logo with hands 
surrounding it]]></image_alt>                    <created>1524838602</created>          <gmt_created>2018-04-27 14:16:42</gmt_created>          <changed>1524838602</changed>          <gmt_changed>2018-04-27 14:16:42</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.chi.gatech.edu/2018/]]></url>        <title><![CDATA[Georgia Tech at CHI 2018]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="10835"><![CDATA[Facebook]]></keyword>          <keyword tid="177811"><![CDATA[Facebook groups]]></keyword>          <keyword tid="177812"><![CDATA[venezuela]]></keyword>          <keyword tid="177813"><![CDATA[Amy Bruckman; Michaelanne Dye; School of Interactive Computing; Hayley Evans]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="605526">  <title><![CDATA[Georgia Tech Research Into Cuban 'Offline Internet' Could Inform Future Definitions of Connectivity]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A pervasive assumption says that internet access is determined by wires or some unseen signal that delivers information from a source, through the cloud, and onto your hard drive in a matter of seconds. 
Often, though, environment and resources determine how digital media and information technology are shared and consumed.</p><p>In a paper being presented at the <a href="https://chi2018.acm.org/">ACM CHI Conference on Human Factors in Computing Systems</a>, researchers in the Georgia Tech <a href="http://cc.gatech.edu">College of Computing</a> outline a unique, and positively thriving, media ecology that operates mostly independently of traditional internet norms. Understanding the success of such a system could better inform how the internet is deployed in similar environments.</p><p>Titled <a href="https://static1.squarespace.com/static/59f549a3b7411c736b42936a/t/5a61b255e2c483c497384fd1/1516352085908/ElPaquete.pdf"><em>El Paquete Semanal: The Week&rsquo;s Internet in Havana</em></a>, the paper examines a human-centered &ldquo;offline&rdquo; internet that, despite the absence of widespread affordable internet access in the traditional sense, nonetheless delivers information and entertainment through a locally relevant platform.</p><p>Access to the web in Cuba has, historically, been prohibitively limited. Until recently, as much as 95 percent of the population was without access. And, while that is changing with the introduction of public Wi-Fi hotspots, the public internet is slow and too expensive for a large portion of the population. To use the internet, the Cuban people must prioritize time and money.</p><p>Yet, School of Interactive Computing (IC) Ph.D. 
student <a href="https://www.ic.gatech.edu/content/michaelanne-dye"><strong>Michaelanne Dye</strong></a> realized in past research in the country that many still had access to things like movies, television entertainment, new versions of software, and more &ndash; often before she would have gained access to them in the United States.</p><p>The reason?</p><p>El Paquete Semanal (&quot;the Weekly Package&quot;), a weekly &ldquo;offline&rdquo; internet that delivers a terabyte&rsquo;s worth of multimedia, digital content, and news. El Paquete is compiled by people with internet access, sold to &ldquo;paqueteros&rdquo; (packagers), and distributed throughout communities in the form of data on a USB drive.</p><p>The content is often delivered by hand or sold in physical stores that have popped up in apartment fronts. Individuals can enter the shop, select the content they want, and pay a price per unit size of data.</p><p>&ldquo;It&rsquo;s done in a way that is incredibly affordable and accessible to most people,&rdquo; Dye said. &ldquo;People from all socioeconomic statuses use this network.&rdquo;</p><p>Not unlike YouTube, it also affords local artists the opportunity to reach new audiences. Local content, such as work by recording and visual artists, is included in El Paquete and shared throughout the city, country, and beyond.</p><p>&ldquo;Their work will go into El Paquete, and it&rsquo;s making its way out of Cuba,&rdquo; Dye said.</p><p>Local journalists challenge the norm of government-run media channels, distributing their own literature through El Paquete. 
Whereas non-government journalists in the past typically sent news outside of the country for publishing, their reporting can now be delivered weekly in this offline format.</p><p>As it has become more pervasive and successful, El Paquete challenges what is typically viewed as &ldquo;internet access.&rdquo; Dye argues that, while some attempts to establish more traditional access have failed, El Paquete has had unparalleled success.</p><p>&ldquo;It goes back to this larger question of how the internet is designed,&rdquo; she said. &ldquo;And does it have to be this way? As communities are brought online, how do you make information access or communication technologies that are flexible and adaptable to the local condition?&rdquo;</p><p>El Paquete offers one benefit that the traditional internet lacks: a distinctly human infrastructure.</p><p>&ldquo;Every system has a human element, but this infrastructure is literally held together by humans, not wires,&rdquo; Dye said. &ldquo;So, the human element of it makes visible that this is a negotiated, relevant, and participatory internet that is very adaptable to a variety of cases.&rdquo;</p><p>Because it avoids automation, though, it requires painstaking work to be maintained.</p><p>&ldquo;There&rsquo;s affordances that the system provides that the internet doesn&rsquo;t provide us,&rdquo; Dye said. &ldquo;At the same time, there are limitations.&rdquo;</p><p>Ultimately, though, what are the limitations of the traditional internet, and is it necessarily the right decision to replicate it in its entirety from the top down in a one-size-fits-all version?</p><p>&ldquo;Who are we to say that this is exactly what everyone needs access to?&rdquo; Dye said. &ldquo;Who determines what is valuable for people? 
This paper argues that there are varying successful iterations of the internet and that local norms and values should play a role in determining how access is delivered in different locales.&rdquo;</p><p>Research for this project was accomplished through in-depth interviews with the local population of Havana over the course of two years, as well as personal participation by authors in the system. The paper is being presented at the ACM CHI Conference on Human Factors in Computing Systems, April 21-26 in Montr&eacute;al, Canada. Dye&rsquo;s co-authors are <strong>David Nemer</strong> (University of Kentucky), <strong>Josiah Mangiameli</strong> (Independent), IC Professor <a href="https://www.ic.gatech.edu/people/7127/amy-bruckmans"><strong>Amy Bruckman</strong></a>, and School of International Affairs and IC Assistant Professor <a href="https://www.ic.gatech.edu/people/7054/neha-kumars"><strong>Neha Kumar</strong></a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1524675428</created>  <gmt_created>2018-04-25 16:57:08</gmt_created>  <changed>1524675428</changed>  <gmt_changed>2018-04-25 16:57:08</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[El Paquete Semanal -- "the Weekly Package" -- is an offline method of delivering digital content to communities in Havana, Cuba.]]></teaser>  <type>news</type>  <sentence><![CDATA[El Paquete Semanal -- "the Weekly Package" -- is an offline method of delivering digital content to communities in Havana, Cuba.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-04-25T00:00:00-04:00</dateline>  <iso_dateline>2018-04-25T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-04-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p>david.mitchell@cc.gatech.edu</p>]]></contact>  
<boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>605524</item>      </media>  <hg_media>          <item>          <nid>605524</nid>          <type>image</type>          <title><![CDATA[Havana, Cuba]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Cuba1.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Cuba1.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Cuba1.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Cuba1.jpg?itok=5PDLNHi6]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A street in Havana, Cuba]]></image_alt>                    <created>1524675160</created>          <gmt_created>2018-04-25 16:52:40</gmt_created>          <changed>1524675160</changed>          <gmt_changed>2018-04-25 16:52:40</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.chi.gatech.edu/2018/]]></url>        <title><![CDATA[Georgia Tech at CHI 2018]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="177780"><![CDATA[El Paquete Semanal]]></keyword>          <keyword tid="1027"><![CDATA[chi]]></keyword>          <keyword tid="177781"><![CDATA[ACM CHI Conference on Human Factors in Computing Systems]]></keyword>          <keyword tid="8494"><![CDATA[HCI]]></keyword>          <keyword tid="177782"><![CDATA[Amy Bruckman; Michaelanne Dye; School of Interactive Computing; Cuba; Neha 
Kumar]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="604801">  <title><![CDATA[School of IC T-Shirt Design Contest: Design Our Shirt For a Chance at $200!]]></title>  <uid>33939</uid>  <body><![CDATA[<p>At the School of Interactive Computing, we feel like we have it all.</p><p>Research? If you search far and wide, you&rsquo;d be hard-pressed to find a school quite as diverse as ours. Faculty? Ours are international thought leaders who perpetually move the needle forward in their respective fields. Students? Our bright minds regularly accept appointments in industry and at high-profile academic institutions.</p><p>But, you know what we&rsquo;re missing? A T-shirt. <strong>And that&rsquo;s where you come in.</strong></p><p>We are producing a new shirt for the IC community, and we&rsquo;re asking YOU to design it. Dust off those graphics skills and put together a concept of a design for what &ldquo;interactive computing&rdquo; means to you.</p><p>It could be anything! And don&rsquo;t worry about whether your design is perfect. You can send us an initial concept for the contest, and we&rsquo;ll worry about the little details later.</p><p>Did we mention the best part? If your design wins, <strong>we&rsquo;ll give you a $200 Amazon gift card</strong> for your troubles. Pretty sweet, right?</p><p>So, who can participate?</p><p>That&rsquo;s the cool thing. Because we&rsquo;re such an interactive and collaborative family, we consider so many both within and outside of the school to be a part of our community. Faculty, staff, students, alumni, associated centers and research institutes, friends of the school &ndash; you name it. If you&rsquo;ve ever participated within our community, we encourage you to submit!</p><p>And submitting is simple. 
Just save your design/concept in JPEG format and email it to our school communications officer, David Mitchell, at <a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a>. In the email, be sure to include your name and association with the school (current or former faculty or staff, student or alum, friend of the school, etc.) so that we can give you proper credit.</p><p>When submissions are closed, we will select a few finalists and put them up for a vote on our social media channels.</p><p>Ok, fine. There ARE some rules. Here are a few:</p><ol><li>You can&rsquo;t use the name of our school, college, or institute. That means &ldquo;School of Interactive Computing,&rdquo; &ldquo;College of Computing,&rdquo; and &ldquo;Georgia Tech,&rdquo; and all varying forms thereof, are out. But using those words is a little on-the-nose anyway, isn&rsquo;t it?</li><li>As much as we all love <a href="https://en.wikipedia.org/wiki/Buzz_(mascot)">Buzz</a> (and would LOVE to see him as a robot), he&rsquo;s out too, due to copyright guidelines. Sorry, everyone.</li><li>By participating in this contest, you are agreeing to the disclaimer below.</li><li><strong>BE CREATIVE!</strong> Our school and our community mean something different to everybody, so don&rsquo;t be afraid to go outside the box.</li></ol><p>So, to summarize: <strong>Design our T-shirt, leave your mark on the school, and earn some cash</strong>, to boot. Not too bad.</p><p>Check out the timeline below and start designing! We can&rsquo;t wait to see what you come up with.</p><p><strong>Important dates</strong></p><ul><li>Contest is OPEN &ndash; April 6</li><li>Submission deadline &ndash; May 4</li><li>Finalists announced &ndash; May 11</li><li>Voting on finalists &ndash; May 11-18</li></ul><p><strong>Disclaimer</strong></p><p><em>This Competition is in no way sponsored, endorsed or administered by, or associated with, Facebook, Twitter, Instagram, or Amazon. 
You are providing your information to the Georgia Tech College of Computing and not to Facebook, Twitter, Instagram, or Amazon. The information you provide will be used only by and for Georgia Tech.</em></p><p><em>By submitting your design, you are agreeing to allow Georgia Tech&#39;s College of Computing&nbsp;to utilize your images for marketing and communications purposes.</em></p><p><em>This campaign is open to all people of all ages affiliated, whether currently or formerly, with Georgia Tech&rsquo;s School of Interactive Computing. No purchase or payment of any kind is necessary to enter or win. Grand prize winner is subject to&nbsp;selection by the Georgia Tech College of Computing. We reserve the right to disqualify submissions, without notice, and for any reason. By submitting, you agree to release and hold harmless Georgia Tech and the Georgia Tech College of Computing and their employees and affiliates, Facebook, Twitter, Instagram, or any and all Internet access and service providers from and against all claims and damages arising in connection with your entry in the campaign and contest, including your receipt or use of giveaways to be distributed in connection with the campaign and contest.</em></p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1523020757</created>  <gmt_created>2018-04-06 13:19:17</gmt_created>  <changed>1523020757</changed>  <gmt_changed>2018-04-06 13:19:17</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[We are producing a new shirt for the IC community, and we’re asking YOU to design it. Dust off those graphics skills and put together a concept of a design for what “interactive computing” means to you.]]></teaser>  <type>news</type>  <sentence><![CDATA[We are producing a new shirt for the IC community, and we’re asking YOU to design it. 
Dust off those graphics skills and put together a concept of a design for what “interactive computing” means to you.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-04-06T00:00:00-04:00</dateline>  <iso_dateline>2018-04-06T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-04-06 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>604799</item>      </media>  <hg_media>          <item>          <nid>604799</nid>          <type>image</type>          <title><![CDATA[IC T-Shirt Contest Flag]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2018-04-06 at 9.00.53 AM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202018-04-06%20at%209.00.53%20AM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202018-04-06%20at%209.00.53%20AM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202018-04-06%2520at%25209.00.53%2520AM.png?itok=g49zC40P]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Design our T-Shirts. Leave your mark. 
Earn some cash.]]></image_alt>                    <created>1523019780</created>          <gmt_created>2018-04-06 13:03:00</gmt_created>          <changed>1523019780</changed>          <gmt_changed>2018-04-06 13:03:00</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="47511"><![CDATA[t-shirt design contest]]></keyword>          <keyword tid="4887"><![CDATA[GVU Center]]></keyword>          <keyword tid="12888"><![CDATA[IPaT]]></keyword>          <keyword tid="81491"><![CDATA[Institute for Robotics and Intelligent Machines (IRIM)]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="604645">  <title><![CDATA[College of Computing Sends Ph.D., Online Master’s Students to Women in Cybersecurity Conference Chicago]]></title>  <uid>27998</uid>  <body><![CDATA[<p>This year&rsquo;s <a href="https://www.wicys.net/">Women in Cybersecurity (WiCyS) Conference</a> was bigger and better attended than ever before, much like the cybersecurity industry at large. 
In celebration of the Georgia Tech College of Computing&rsquo;s role at the forefront of this booming field, the College took the lead in sending students on scholarship to attend the conference for the first time, as well as hosting a celebration for online Master of Science in Computer Science (OMSCS) students from the Chicago area.</p><p>Held in Chicago from March 23-24, WiCyS drew some 4,000 attendees and more than 50 sponsors &ndash; both corporate and academic &ndash; including the College of Computing. The four student scholar representatives sent by the College were Tina Fatouros (OMSCS), Jenna McGrath (Ph.D. Public Policy), Stacey Truex (Ph.D. Computer Science), and Chenzi Wang (OMSCS). Attendees had the opportunity to enjoy meals and network with other women in the field of cybersecurity; attend technical and career-focused talks like &ldquo;Practical Network Forensics,&rdquo; &ldquo;Teaching Cyber Ethics and Societal Impacts in Introduction Computing Courses,&rdquo; and &ldquo;Watson for Cybersecurity and IBM&rsquo;s Cyber Range&rdquo;; and take advantage of the conference&rsquo;s Career Fair, at which the College hosted a recruitment booth.</p><p>&ldquo;Overall, it was a great conference and definitely worth the time to attend,&rdquo; said Fatouros, who works in security and compliance for AT&amp;T. &ldquo;There was a good mix of students, professionals, and teaching faculty attending, which provided many opportunities to interact and network. I was able to pick up job-relevant information from all of the sessions, including workshops, distinguished speakers, and lightning talks. 
I work in cybersecurity and left WiCyS more focused and encouraged about the many challenging, rewarding, and attainable opportunities in my field.&rdquo;</p><p>While in Chicago, the College of Computing and Wenke Lee, co-executive director of the <a href="https://cyber.gatech.edu/">Institute for Information Security &amp; Privacy (IISP)</a>, hosted a celebration for local OMSCS students and College alumni. The event was held at the Chicago Athletic Association Hotel, and the 35 attendees enjoyed an evening of celebration with their fellow students &ndash; many of whom met in person for the first time. Lee shared news of the exciting cybersecurity research successes at Georgia Tech, as well as tips and tricks on how to succeed in the OMSCS program (in which he teaches <a href="https://www.omscs.gatech.edu/cs-6035-introduction-to-information-security">Introduction to Information Security</a>).</p><p>&ldquo;It was fantastic to see so many students from OMSCS come out for this social,&rdquo; said Lee. 
&ldquo;I always enjoy meeting students to hear what they are motivated to do in their careers and their feedback about the program.&rdquo;</p><p><em>If you are interested in learning more about cybersecurity at the Institute for Information Security &amp; Privacy (IISP) at Georgia Tech, visit <a href="https://cyber.gatech.edu/">https://cyber.gatech.edu/</a></em></p><p><em>If you are interested in learning more about the online Master of Science in Computer Science (OMSCS) program, visit <a href="http://www.omscs.gatech.edu/">http://www.omscs.gatech.edu/</a>.</em></p>]]></body>  <author>Brittany Aiello</author>  <status>1</status>  <created>1522768876</created>  <gmt_created>2018-04-03 15:21:16</gmt_created>  <changed>1522769371</changed>  <gmt_changed>2018-04-03 15:29:31</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[In celebration of the Georgia Tech College of Computing’s role in cybersecurity, the College sent students on scholarship to attend WiCyS Conference, as well as hosting a celebration for online M.S. CS students.]]></teaser>  <type>news</type>  <sentence><![CDATA[In celebration of the Georgia Tech College of Computing’s role in cybersecurity, the College sent students on scholarship to attend WiCyS Conference, as well as hosting a celebration for online M.S. 
CS students.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-04-03T00:00:00-04:00</dateline>  <iso_dateline>2018-04-03T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-04-03 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[baiello@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Brittany Aiello, OMSCS Communications</p><p>baiello@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>604642</item>          <item>604643</item>          <item>604644</item>      </media>  <hg_media>          <item>          <nid>604642</nid>          <type>image</type>          <title><![CDATA[Stacey Truex, Ph.D. Computer Science, and Jenna McGrath, Ph.D. Public Policy]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2018-03-29 at 4.12.39 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202018-03-29%20at%204.12.39%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202018-03-29%20at%204.12.39%20PM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202018-03-29%2520at%25204.12.39%2520PM.png?itok=xLZnDkpW]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Pictured left to right: Stacey Truex, Ph.D. Computer Science, and Jenna McGrath, Ph.D. 
Public Policy]]></image_alt>                    <created>1522768603</created>          <gmt_created>2018-04-03 15:16:43</gmt_created>          <changed>1522769296</changed>          <gmt_changed>2018-04-03 15:28:16</gmt_changed>      </item>          <item>          <nid>604643</nid>          <type>image</type>          <title><![CDATA[Brittany Aiello, OMSCS Communications, and Tina Fatouros, OMSCS student and WiCyS scholarship attendee]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[TinaandBrittany-WiCyS2018.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/TinaandBrittany-WiCyS2018.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/TinaandBrittany-WiCyS2018.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/TinaandBrittany-WiCyS2018.jpg?itok=YBN549Ij]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Pictured left to right: Brittany Aiello, OMSCS Communications, and Tina Fatouros, OMSCS student and WiCyS scholarship attendee]]></image_alt>                    <created>1522768683</created>          <gmt_created>2018-04-03 15:18:03</gmt_created>          <changed>1522769236</changed>          <gmt_changed>2018-04-03 15:27:16</gmt_changed>      </item>          <item>          <nid>604644</nid>          <type>image</type>          <title><![CDATA[Wenke Lee and OMSCS Chicago-area students]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2018-03-29 at 4.11.58 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202018-03-29%20at%204.11.58%20PM.png]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202018-03-29%20at%204.11.58%20PM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202018-03-29%2520at%25204.11.58%2520PM.png?itok=u-u0n5uX]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Wenke Lee and OMSCS Chicago-area students]]></image_alt>                    <created>1522768781</created>          <gmt_created>2018-04-03 15:19:41</gmt_created>          <changed>1522769198</changed>          <gmt_changed>2018-04-03 15:26:38</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1305"><![CDATA[Georgia Tech Academic Advising Network (GTAAN)]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="131901"><![CDATA[Provost]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="177620"><![CDATA[WiCyS]]></keyword>          <keyword tid="10893"><![CDATA[wenke lee]]></keyword>          <keyword tid="1404"><![CDATA[Cybersecurity]]></keyword>          <keyword tid="177624"><![CDATA[women in cybersecurity]]></keyword>          <keyword tid="1270"><![CDATA[conference]]></keyword>          <keyword tid="4833"><![CDATA[chicago]]></keyword>          <keyword tid="121521"><![CDATA[OMSCS]]></keyword>          <keyword tid="69631"><![CDATA[Online Master of Science in Computer Science]]></keyword>         
 <keyword tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="604288">  <title><![CDATA[Bust a Move: IC Ph.D. Student Caitlyn Seim Tests Passive Haptic Learning for Dance at Get a Move On Hackathon]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Earlier this month, School of Interactive Computing Ph.D. student <strong>Caitlyn Seim</strong> participated in the College of Computing&rsquo;s <a href="https://cchackathon.github.io/geta-moveon/">Get a Move On hackathon</a>, which focused on music, dance, fitness, gaming, and sports.</p><p>As a graduate researcher who has studied wearable computing devices that provide haptic, or tactile, stimulation in Professor <strong><a href="https://www.cc.gatech.edu/people/thad-starner">Thad Starner&rsquo;s</a></strong> lab, she took the opportunity to apply what she knew to the lower body. Most of what she and Starner have worked on in the past was focused on upper-body learning &ndash; teaching piano, Braille, making you faster at typing &ndash; but in this case, she wanted to focus on dance.</p><p>&ldquo;While brainstorming for the hackathon, I reached out to Carnegie Hall tap dancer <strong>Christopher Erk</strong>,&rdquo; she said. &ldquo;He was immediately interested and provided us with three elementary tap routines that we could integrate into the wearable.&rdquo;</p><p>Using a technique called passive haptic learning, individuals can learn new skills through tactile cues provided by a wearable device such as a watch or glove. While users continue their normal daily tasks, the instructional stimuli repeat in the background and help them learn. 
In the past, Starner&rsquo;s lab has been able to produce results in skills like piano playing or learning Morse code.</p><p>Heading into the hackathon, Seim believed she could have similar success affecting muscle memory for dance.</p><p>&ldquo;Every song is a new pattern of key presses,&rdquo; said Seim, referring to the computerized haptic gloves that helped teach the finger patterns of different piano songs. &ldquo;Likewise, every dance is a new pattern of steps. This is what inspired me to prototype a wearable to teach dance steps.&rdquo;</p><p>Over the course of two days at the hackathon, Seim worked with <strong>David Purcell</strong>, a student in Georgia Tech&rsquo;s <a href="http://www.omscs.gatech.edu/">online master of science in computer science</a> (OMSCS) program, to create the prototype. It takes the form of cordless haptic socks, synchronized and programmed to teach a routine provided by Erk through tactile taps from embedded motors.</p><p>Results thus far are limited to the pilot program tested by Seim and another student. But, Seim said, the learning paradigm works exactly as it does for the hands. Seim&rsquo;s team finished in the top five overall and second place in hardware at the hackathon.</p><p>Seim is looking at potential dance-related collaborations to continue the project. 
Interested students should <a href="mailto:seimresearch@gmail.com">contact Seim via email</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1522097721</created>  <gmt_created>2018-03-26 20:55:21</gmt_created>  <changed>1522097721</changed>  <gmt_changed>2018-03-26 20:55:21</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Most of what Caitlyn Seim and Starner have worked on in the past was focused on upper-body learning – teaching piano, Braille, making you faster at typing – but in this case, she wanted to focus on dance.]]></teaser>  <type>news</type>  <sentence><![CDATA[Most of what Caitlyn Seim and Starner have worked on in the past was focused on upper-body learning – teaching piano, Braille, making you faster at typing – but in this case, she wanted to focus on dance.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-03-26T00:00:00-04:00</dateline>  <iso_dateline>2018-03-26T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-03-26 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>604287</item>      </media>  <hg_media>          <item>          <nid>604287</nid>          <type>image</type>          <title><![CDATA[PHL for Dance]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[diagram.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/diagram.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/diagram.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/diagram.png?itok=DJ9uPnkQ]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Passive Haptic Learning for Dance]]></image_alt>                    <created>1522097460</created>          <gmt_created>2018-03-26 20:51:00</gmt_created>          <changed>1522097460</changed>          <gmt_changed>2018-03-26 20:51:00</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="170072"><![CDATA[Caitlyn Seim]]></keyword>          <keyword tid="1944"><![CDATA[Thad Starner]]></keyword>          <keyword tid="10353"><![CDATA[wearable computing]]></keyword>          <keyword tid="61371"><![CDATA[Hackathon]]></keyword>          <keyword tid="104221"><![CDATA[passive haptic learning]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="603980">  <title><![CDATA[College of Computing Rises to No. 8 in U.S. News Rankings]]></title>  <uid>32045</uid>  <body><![CDATA[<h3><strong><em><strong>Move</strong>&nbsp;is GT Computing&rsquo;s second jump in last three rankings</em></strong></h3><p>The Georgia Tech College of Computing continued its climb up the U.S. News and World Report rankings of graduate computer science programs, rising one spot to No. 
8 in the 2018 rankings that were released March 20.</p><p>The new position represents Georgia Tech&rsquo;s second jump in the last three CS rankings, all released since 2012, and is the highest U.S. News has ever ranked the <a href="https://www.cc.gatech.edu/" target="_blank">College of Computing</a>.</p><h4><strong>GT Computing&#39;s global impact</strong></h4><p>&ldquo;We are thrilled but not surprised at this latest recognition of the work we&rsquo;re doing in GT Computing,&rdquo; said <a href="https://www.cc.gatech.edu/people/zvi-galil" target="_blank"><strong>Zvi Galil</strong></a>, John P. Imlay Jr. Dean of Computing.</p><p>&ldquo;I attribute this to our visible leadership in computing education and research, to the fact that we are now the largest computing program in the United States counting both faculty and students&ndash;and likely number 2 in terms of faculty size&ndash;and to the <a href="http://gtcomputing2017.cc.gatech.edu/" target="_blank">global impact we are having</a> both through our research and the work of our thousands of alumni.&rdquo;</p><p>U.S. News ranks computer science programs through a reputational survey. With an average score of 4.4, Georgia Tech is now tied with Princeton and sits one spot ahead of No. 10 University of Texas-Austin. In the 2018 rankings, Georgia Tech rose in both points and ranking&mdash;and was the only Top 10 program to rise in either.</p><p>The College also achieved rankings in the following specialties:</p><ul><li><a href="https://www.ic.gatech.edu/content/artificial-intelligence-machine-learning" target="_blank">Artificial Intelligence</a> (No. 7)</li><li><a href="https://www.scs.gatech.edu/content/programming-languages-software-engineering" target="_blank">Programming Language</a> (No. 16)</li><li><a href="https://www.scs.gatech.edu/content/systems" target="_blank">Systems</a> (No. 10)</li><li><a href="https://www.scs.gatech.edu/content/theory">Theory</a> (No. 9)</li></ul><p>Coincidentally, the No. 
8 overall ranking matches the spot Georgia Tech earned in last year&rsquo;s <a href="https://www.timeshighereducation.com/world-university-rankings/2018/subject-ranking/computer-science#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/stats">Times Higher Education/Wall Street Journal rankings</a>, when the College was named the No. 8 program in the world.</p><h4><strong>Other GT ranking highlights</strong></h4><p>&ldquo;Over the past several years,&rdquo; Galil continued, &ldquo;we have made deliberate, strategic investments of time and treasure, both in research and education, and this recognition is one of the fruits of those efforts.&rdquo;</p><p>The College of Computing was just one of many Georgia Tech programs to place highly in the 2018 rankings.</p><p>The <a href="https://coe.gatech.edu/" target="_blank">College of Engineering</a> also ranked No. 8 (No. 4 among public universities), and all 11 of its programs ranked in the top 10. In the <a href="https://cos.gatech.edu/" target="_blank">College of Sciences</a>, Chemistry jumped four to No. 20, Math moved up two to No. 26, Physics moved up one to No. 28, Earth Sciences moved up four to No. 38, and Biology moved up one to No. 54. Within mathematics, the discrete math/combinatorics specialty had Georgia Tech at No. 2, up two positions.</p><p><a href="https://www.usnews.com/best-graduate-schools" target="_blank">[READ:&nbsp;U.S. News and World Report 2019 Graduate School Rankings]</a></p><p>The <a href="https://www.scheller.gatech.edu/index.html" target="_blank">Scheller College of Business</a> full-time MBA program moved up one to No. 28, and its part-time MBA moved up five to No. 25. Scheller was also ranked in the following specialties:</p><ul><li>Production/Operations (No. 7)</li><li>Supply Chain/Logistics (No. 17)</li><li>Information Systems (No. 
12)</li></ul><p>In the <a href="https://www.iac.gatech.edu/" target="_blank">Ivan Allen College of Liberal Arts</a>, the Sam Nunn School of Public Policy moved up two to No. 43 overall with the Information and Technology Management specialty remaining at No. 2, Public Policy Analysis debuting at No. 20 and the Environmental Policy and Management specialty debuting at No. 12.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1521480447</created>  <gmt_created>2018-03-19 17:27:27</gmt_created>  <changed>1521653310</changed>  <gmt_changed>2018-03-21 17:28:30</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech's computer science moves up list of best U.S. graduate schools.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech's computer science moves up list of best U.S. graduate schools.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-03-19T00:00:00-04:00</dateline>  <iso_dateline>2018-03-19T00:00:00-04:00</iso_dateline>  <gmt_dateline>2018-03-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[mterraza@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Mike Terrazas, Communications Director</p><p><a href="mailto:mterraza@cc.gatech.edu?subject=U.S.%20News%202019%20Best%20Graduate%20Schools">mterraza@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>603992</item>      </media>  <hg_media>          <item>          <nid>603992</nid>          <type>image</type>          <title><![CDATA[GT Computing Binary Bridge code close up]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[BinaryBridge_july16 copy 2.JPG]]></image_name>            <image_path><![CDATA[/sites/default/files/images/BinaryBridge_july16%20copy%202.JPG]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/BinaryBridge_july16%20copy%202.JPG]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/BinaryBridge_july16%2520copy%25202.JPG?itok=sZrCovHo]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Close up of Binary Bridge at Georgia Tech]]></image_alt>                    <created>1521483862</created>          <gmt_created>2018-03-19 18:24:22</gmt_created>          <changed>1521483862</changed>          <gmt_changed>2018-03-19 18:24:22</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="545781"><![CDATA[Institute for Data Engineering and Science]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="576491"><![CDATA[CRNCH]]></group>          <group id="1305"><![CDATA[Georgia Tech Academic Advising Network (GTAAN)]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="431631"><![CDATA[OMS]]></group>          <group id="131901"><![CDATA[Provost]]></group>          <group id="430601"><![CDATA[Institute for Information Security and Privacy]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="177484"><![CDATA[US News rankings]]></keyword>          <keyword tid="177485"><![CDATA[eighth place]]></keyword>          <keyword tid="2523"><![CDATA[cs]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      
</news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="602797">  <title><![CDATA[IC Assistant Professor Alex Endert Earns NSF CAREER Award]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Assistant Professor <strong><a href="https://www.ic.gatech.edu/people/7069/alex-enderts">Alex Endert</a></strong> received a CAREER Award from the <a href="https://www.nsf.gov/">National Science Foundation</a> for a project titled <em>CAREER: Visual Analytics by Demonstration for Interactive Data Analysis.</em></p><p>The award, which will total $493,000 paid out over the course of five years, begins on May 1 and builds on Endert&rsquo;s prior work on demonstration-based user interaction to create tools that make data science more usable and accessible to people without formal data science training.</p><p>The research will build knowledge in how people can visually demonstrate their questions about data. In turn, visual analytic system interfaces will need to change to interpret these demonstrations and perform the appropriate analytic operations. Finally, people will be able to leverage complex and powerful analytic functions without the need to provide formal parameterizations of the model being used.</p><p>&ldquo;In today&rsquo;s data-driven era, everyday decisions are becoming increasingly data-driven problems,&rdquo; Endert explained. &ldquo;While this provides opportunity for people to make better decisions, it requires technology for visual data analysis to become easier to use for people without formal data science training.&rdquo;</p><p>It&rsquo;s not just people in business who are constantly utilizing data to inform decisions. Everyday people encounter data on a daily basis &ndash; comparing car models, searching for houses, and more. 
Endert noted impactful areas of interest like health care and national security.</p><p>&ldquo;This research will create new methods for people to interact with data, focusing on domains of interest to society including health care and national security,&rdquo; said Endert, who added that he and his students will develop visual analytic prototypes released on the web, build toolkits that let developers leverage, adopt, and expand the research, and provide empirical evidence of the resulting gains in usability.</p><p>A key challenge, Endert said, is fostering the interactive feedback loop between people and systems. The overall goal is to simplify this iterative process by offering by-demonstration alternatives to the precise yet complex controls of existing control panels.</p><p>&ldquo;If successful, the proposed work has the potential to transform user interfaces for data science systems,&rdquo; he said.</p><p>Endert is part of a separate team of researchers that was <a href="https://www.ic.gatech.edu/news/600879/georgia-tech-tufts-university-and-wisconsin-researchers-awarded-27m-make-data-science">recently awarded $2.7 million</a> from the <a href="https://www.darpa.mil/">Defense Advanced Research Projects Agency</a> <a href="https://www.darpa.mil/program/data-driven-discovery-of-models">Data-Driven Discovery of Models</a> program to study similar advances in the accessibility of data science.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1519334073</created>  <gmt_created>2018-02-22 21:14:33</gmt_created>  <changed>1519334073</changed>  <gmt_changed>2018-02-22 21:14:33</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The award will total $493,000 paid out over five years and builds on Endert’s prior work on tools that make data science more accessible to people without formal data science training.]]></teaser>  <type>news</type>  <sentence><![CDATA[The award will total $493,000 paid out over five years 
and builds on Endert’s prior work on tools that make data science more accessible to people without formal data science training.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-02-22T00:00:00-05:00</dateline>  <iso_dateline>2018-02-22T00:00:00-05:00</iso_dateline>  <gmt_dateline>2018-02-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>355601</item>      </media>  <hg_media>          <item>          <nid>355601</nid>          <type>image</type>          <title><![CDATA[Alex Endert - Compressed]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[alex-endert.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/alex-endert.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/alex-endert.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/alex-endert.jpg?itok=7CWkEXGl]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Alex Endert - Compressed]]></image_alt>                    <created>1449245756</created>          <gmt_created>2015-12-04 16:15:56</gmt_created>          <changed>1475895087</changed>          <gmt_changed>2016-10-08 02:51:27</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive 
Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="112421"><![CDATA[alex endert]]></keyword>          <keyword tid="92811"><![CDATA[data science]]></keyword>          <keyword tid="176401"><![CDATA[visual analytics]]></keyword>          <keyword tid="363"><![CDATA[NSF]]></keyword>          <keyword tid="362"><![CDATA[National Science Foundation]]></keyword>          <keyword tid="174710"><![CDATA[National Science Foundation CAREER Award]]></keyword>          <keyword tid="9413"><![CDATA[CAREER Award]]></keyword>          <keyword tid="7842"><![CDATA[NSF CAREER Award]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="602715">  <title><![CDATA[Professor Amy Bruckman Joins Seven Other IC Faculty in CHI Academy]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Professor <strong>Amy Bruckman</strong> can still remember the first paper she ever presented at the ACM CHI Conference on Human Factors in Computing Systems (CHI).</p><p>It was in 2001 when she was early in her time as a faculty member at Georgia Tech. Co-authored with Jason Ellis, the paper was titled <em>Designing Palaver Tree Online: Supporting Social Roles in a Community of Oral History</em>.</p><p>Now, 17 years later, Bruckman joins an ever-expanding list of Georgia Tech faculty that have earned entry into the CHI Academy. She was announced this month as a 2018 inductee into the prestigious group. 
She is one of eight inductees this year and the eighth Georgia Tech faculty member to join the group; all eight of Tech&rsquo;s members have come from the School of Interactive Computing.</p><p>&ldquo;It was such a big honor to have a single paper in CHI as a young researcher,&rdquo; Bruckman said. &ldquo;To be actually inducted into the CHI Academy is beyond words. All I can say is that I&rsquo;m honored.&rdquo;</p><p>Fellow IC Professors <strong>Beki Grinter</strong> and <strong>Jim Foley</strong> provided a nomination for Bruckman to the CHI Academy. In it, they highlighted the depth and breadth of her research in content creation for educational purposes, social computing, and examination of the adoption of online social systems in countries like Cuba. A second sustained emphasis of her research, the nomination said, highlights the ethical issues that affect our community.</p><p>&ldquo;In this, not only has she demonstrated research excellence, but also a commitment to serving SIGCHI,&rdquo; they wrote.</p><p>More than her own induction, Bruckman noted what it means to have Georgia Tech continuously recognized for its commitment to the field of human-computer interaction.</p><p>Georgia Tech has seen new members join the CHI Academy in five of the past six years. Professor <strong>Thad Starner</strong> was inducted in 2017, Professor <strong>John Stasko</strong> in 2016, Professor <strong>Keith Edwards</strong> in 2014, and Grinter in 2013. Before that recent run, Professors <strong>Gregory Abowd</strong> and <strong>Beth Mynatt</strong> were inducted in back-to-back years in 2008-09. Professor Emeritus Jim Foley, who retired in December, began the long line of successful researchers when he was inducted in 2001.</p><p>&ldquo;My colleagues have been making an impact in the field for a long time,&rdquo; Bruckman said. 
&ldquo;It&rsquo;s humbling to be added to that group.&rdquo;</p><p>Grinter said that it&rsquo;s been Georgia Tech&rsquo;s commitment to human-computer interaction that has resulted in this kind of international recognition.</p><p>&ldquo;We&rsquo;ve been committed to a vision in which HCI plays a critical role,&rdquo; Grinter said. &ldquo;So, as we&rsquo;ve recruited and retained key faculty over time, we&rsquo;ve been recognized by the CHI Academy.&rdquo;</p><p>Or, as Foley simply put it:</p><p>&ldquo;Great faculty get recognized.&rdquo;</p><p>Bruckman will be recognized at CHI 2018, which will be held on April 21-26 in Montr&eacute;al, Canada.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1519243182</created>  <gmt_created>2018-02-21 19:59:42</gmt_created>  <changed>1519243182</changed>  <gmt_changed>2018-02-21 19:59:42</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Professor Amy Bruckman was announced this month as a 2018 inductee into the prestigious CHI Academy.]]></teaser>  <type>news</type>  <sentence><![CDATA[Professor Amy Bruckman was announced this month as a 2018 inductee into the prestigious CHI Academy.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-02-21T00:00:00-05:00</dateline>  <iso_dateline>2018-02-21T00:00:00-05:00</iso_dateline>  <gmt_dateline>2018-02-21 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>590524</item>      </media>  <hg_media>          <item>          <nid>590524</nid>          <type>image</type>          <title><![CDATA[Amy Bruckman]]></title>          <body><![CDATA[]]></body>                
      <image_name><![CDATA[asb_full.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/asb_full.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/asb_full.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/asb_full.jpg?itok=DmmaSSsY]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Professor Amy Bruckman to serve as School of Interactive Computing Interim Chair]]></image_alt>                    <created>1492457925</created>          <gmt_created>2017-04-17 19:38:45</gmt_created>          <changed>1492457925</changed>          <gmt_changed>2017-04-17 19:38:45</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="8472"><![CDATA[amy bruckman]]></keyword>          <keyword tid="177194"><![CDATA[CHI Academy]]></keyword>          <keyword tid="1027"><![CDATA[chi]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="600879">  <title><![CDATA[Georgia Tech, Tufts University, and Wisconsin Researchers Awarded $2.7M to Make Data Science More Accessible]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Researchers 
at the Georgia Institute of Technology, Tufts University, and University of Wisconsin will develop new techniques to make machine learning in data science more accessible to non-data scientists under a $2.7 million grant from the <a href="https://www.darpa.mil/">Defense Advanced Research Projects Agency</a> (DARPA) <a href="https://www.darpa.mil/program/data-driven-discovery-of-models">Data-Driven Discovery of Models</a> (D3M) program.</p><p>Over the years, advances in machine learning have resulted in more complex, and more powerful, applications in information visualization. As a consequence, machine learning techniques to achieve specific insights from data have also gotten more complicated. Most require data science degrees or some formal data science training in order to use the tools that are being built.</p><p>Thus, the gap between subject matter experts &ndash; international politics majors, historians, biology experts, or climatologists, for example &ndash; and the complexity of the machine learning tools used to contextualize data will continue to grow.</p><p>&ldquo;Often, these experts have a wealth of knowledge about things like international affairs or cybersecurity, but they don&rsquo;t have a wealth of knowledge of what it means to use machine learning model X, Y, or Z,&rdquo; said Alex Endert, an assistant professor in the School of Interactive Computing at Georgia Tech, one of the four collaborators on the project.</p><p>Currently, the tools for adjusting a model&rsquo;s parameters consist of buttons, control panels, dropdown menus, sliders, knobs, and fields for entering values &ndash; all requiring users to explicitly define a machine learning model in order to get the desired results from the data.</p><p>This is less intuitive for non-data scientists, so the aim for the researchers is to move the user interaction into the visual space. 
Users could adjust the data within a scatter plot, for example, by zooming or panning, coloring items or generally demonstrating areas of interest inside the data. The system could then infer how the model&rsquo;s parameters should change as a result of that exploration.</p><p>&ldquo;If we are successful, we have the chance to bring data analysis to the public,&rdquo; said principal investigator Remco Chang, an associate professor in the Tufts University Department of Computer Science. &ldquo;But to get there, we will need to allow the end users to be able to intuitively ask questions about their data that can be formalized and executed in machine learning. We need to allow the user to make sense of the complex results from machine learning and help contextualize the results in the user&rsquo;s domain.&rdquo;</p><p>The grant, which took effect earlier this year, will fund four years of research. Other participants are Georgia Tech School of Interactive Computing Professor John Stasko and University of Wisconsin Department of Computer Science Professor Michael Gleicher.</p><p>DARPA&rsquo;s D3M program aims to develop automated model discovery systems that enable users with subject matter expertise but no data science background to create empirical models of real, complex processes. Automated model discovery systems developed by the D3M program will be tested on real-world problems that will progressively get harder during the course of the program. 
Toward the end of the program, D3M will target problems that are both unsolved and underspecified in terms of data and instances of outcomes available for modeling.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1516127644</created>  <gmt_created>2018-01-16 18:34:04</gmt_created>  <changed>1518202125</changed>  <gmt_changed>2018-02-09 18:48:45</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Researchers at the Georgia Institute of Technology, Tufts University, and University of Wisconsin will develop new techniques to make machine learning in data science more accessible to non-data scientists.]]></teaser>  <type>news</type>  <sentence><![CDATA[Researchers at the Georgia Institute of Technology, Tufts University, and University of Wisconsin will develop new techniques to make machine learning in data science more accessible to non-data scientists.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-01-16T00:00:00-05:00</dateline>  <iso_dateline>2018-01-16T00:00:00-05:00</iso_dateline>  <gmt_dateline>2018-01-16 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>600878</item>      </media>  <hg_media>          <item>          <nid>600878</nid>          <type>image</type>          <title><![CDATA[Internet map]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Internet_map_1024.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Internet_map_1024.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Internet_map_1024.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Internet_map_1024.jpg?itok=tZhZS13E]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Internet map]]></image_alt>                    <created>1516127458</created>          <gmt_created>2018-01-16 18:30:58</gmt_created>          <changed>1516127458</changed>          <gmt_changed>2018-01-16 18:30:58</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://vis.gatech.edu/]]></url>        <title><![CDATA[Georgia Tech Visualization Lab]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="545781"><![CDATA[Institute for Data Engineering and Science]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="13253"><![CDATA[DARPA grant]]></keyword>          <keyword tid="9167"><![CDATA[machine learning]]></keyword>          <keyword tid="172922"><![CDATA[information visualization]]></keyword>          <keyword tid="112421"><![CDATA[alex endert]]></keyword>          <keyword tid="11632"><![CDATA[john stasko]]></keyword>          <keyword tid="92811"><![CDATA[data science]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="176784"><![CDATA[tufts university]]></keyword>          <keyword tid="176785"><![CDATA[university of wisconsin]]></keyword>      
</keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="601772">  <title><![CDATA[Grad Students in 3 Degree Programs Make Their Pitch to Industry at Interactivity 2018]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Graduates from three different degree programs participated in <a href="http://interactivity.cc.gatech.edu/">Interactivity 2018</a> on Thursday, pitching themselves and their research to potential employers in the annual One-Minute Madness and poster session at the Historic Academy of Medicine in midtown Atlanta.</p><p>One hundred and thirty-six students participated in the event, pitching to at least 150 industry guests from 75 different companies. Students from the MS-HCI, MS-Digital Media, and MS-Industrial Design degree programs participated in the event.</p><p>&ldquo;This event is such a great opportunity for our students to get in front of real industry people who are here to hire somebody,&rdquo; said School of Interactive Computing Professor of the Practice and MS-HCI Program Director <strong>Richard Henneman</strong>, who leads the event.</p><p>And even for those who aren&rsquo;t hiring, Interactivity is a great opportunity for related industries to stay connected with some of the best and brightest young minds coming through the academic ranks.</p><p>&ldquo;We&rsquo;ve had some who aren&rsquo;t hiring, but come just because they want to get to know our students and the interesting research they are engaged in,&rdquo; Henneman said. &ldquo;That&rsquo;s the kind of reputation these students have.&rdquo;</p><p>Interactivity is presented by the <a href="http://gvu.gatech.edu/">GVU Center</a> and sponsored by Mailchimp. 
GVU Director <strong>Keith Edwards</strong> expressed his affinity for the event during an announcement at the beginning of the One-Minute Madness session.</p><p>&ldquo;This is one of my all-time favorite events,&rdquo; he said. &ldquo;I think you&rsquo;ll be so impressed by the originality, but most of all by the quality of the work.&rdquo;</p><p>Some of the work this year included:</p><ul><li>A method to reveal criminal activities by visualizing the transportation trajectories of human trafficking,<br />&nbsp;</li><li>an MS-HCI student with an affinity for screenwriting who recently worked on a project to improve the purchase experience of clothes in retail stores for individuals in wheelchairs,<br />&nbsp;</li><li>and story development and prototyping of novel virtual reality concepts, among others.</li></ul><p>Resumes of the participating students <a href="http://interactivity.cc.gatech.edu/attendees/">can be found online here</a>. For photos from the event, go to <a href="https://www.flickr.com/photos/ccgatech/albums/72157663255245947">this album on the College of Computing&rsquo;s Flickr feed</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1517589648</created>  <gmt_created>2018-02-02 16:40:48</gmt_created>  <changed>1517589648</changed>  <gmt_changed>2018-02-02 16:40:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[One hundred and thirty-six students participated in the event, pitching to at least 150 industry guests from 75 different companies.]]></teaser>  <type>news</type>  <sentence><![CDATA[One hundred and thirty-six students participated in the event, pitching to at least 150 industry guests from 75 different companies.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2018-02-02T00:00:00-05:00</dateline>  <iso_dateline>2018-02-02T00:00:00-05:00</iso_dateline>  <gmt_dateline>2018-02-02 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  
<sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>601771</item>      </media>  <hg_media>          <item>          <nid>601771</nid>          <type>image</type>          <title><![CDATA[Interactivity 2018]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[IMG_1899.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/IMG_1899.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/IMG_1899.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/IMG_1899.jpg?itok=gV-XNoFm]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Interactivity 2018]]></image_alt>                    <created>1517589211</created>          <gmt_created>2018-02-02 16:33:31</gmt_created>          <changed>1517589211</changed>          <gmt_changed>2018-02-02 16:33:31</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.flickr.com/photos/ccgatech/albums/72157663255245947]]></url>        <title><![CDATA[Photos from Interactivity 2018]]></title>      </link>          <link>        <url><![CDATA[http://interactivity.cc.gatech.edu/]]></url>        <title><![CDATA[Interactivity 2018]]></title>      </link>          <link>        <url><![CDATA[http://gvu.gatech.edu/index.php?q=home-page]]></url>        <title><![CDATA[GVU Center]]></title>      </link>          <link>        <url><![CDATA[https://www.ic.gatech.edu/academics/master-science-human-computer-interaction]]></url>        
<title><![CDATA[MS-HCI]]></title>      </link>          <link>        <url><![CDATA[https://id.gatech.edu/mid]]></url>        <title><![CDATA[MS-ID]]></title>      </link>          <link>        <url><![CDATA[http://dm.lmc.gatech.edu/program/ms-program/]]></url>        <title><![CDATA[MS-DM]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="21141"><![CDATA[interactivity]]></keyword>          <keyword tid="107441"><![CDATA[ms-hci]]></keyword>          <keyword tid="176991"><![CDATA[ms-digital media]]></keyword>          <keyword tid="176992"><![CDATA[ms-industrial design]]></keyword>          <keyword tid="176993"><![CDATA[ms-dm]]></keyword>          <keyword tid="176994"><![CDATA[ms-id]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="111831"><![CDATA[Richard Henneman]]></keyword>          <keyword tid="13541"><![CDATA[Keith Edwards]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="599583">  <title><![CDATA[Iconic IC Professor, GVU Center Founder Jim Foley Bids Farewell]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Professor <strong>Jim Foley</strong> was in the midst of some well-deserved personal leave earlier this year when he had a realization. 
He was traveling around the world, skiing, swimming, and playing with his trains, a favorite hobby of his since childhood.</p><p>&ldquo;I was coming back next semester to teach,&rdquo; he said, referring to his planned return in Spring 2018, &ldquo;but I said, &lsquo;Wait a minute. I&rsquo;m enjoying this too much!&rsquo;&rdquo;</p><p>The <a href="http://cc.gatech.edu/">College of Computing</a> icon, who came to Georgia Tech in 1991 to establish the <a href="http://gvu.gatech.edu/">GVU Center</a>, instead elected to retire from teaching. It will be a welcome break for an individual who has left a vibrant mark on the College, the <a href="http://ic.gatech.edu/">School of Interactive Computing</a> (IC), and a number of associated centers, institutes, and labs.</p><p>&ldquo;Jim has always been and always will be my personal role model for thoughtful and graceful leadership,&rdquo; said <a href="https://www.cc.gatech.edu/people/amy-bruckman"><strong>Amy Bruckman</strong></a>, professor and interim IC chair. &ldquo;We&rsquo;re going to miss him so terribly here at Georgia Tech.&rdquo;</p><h3><strong>A Distinguished Career</strong></h3><p>Foley guided the GVU Center from inception until 1996, helping it garner a No. 1 ranking that final year for graduate computer science research in graphics and user interaction by U.S. News and World Report. He is a Fellow of the <a href="https://www.acm.org/">Association for Computing Machinery</a> (ACM), the <a href="https://www.ieee.org/index.html">Institute of Electrical and Electronics Engineers</a> (IEEE), and the <a href="https://www.aaas.org/">American Association for the Advancement of Science</a> (AAAS), an inaugural member of the <a href="https://en.wikipedia.org/wiki/CHI_Academy">ACM/CHI Academy</a>, and a recipient of two lifetime achievement awards: the biennial ACM/SIGGRAPH Stephen Coons Award for Outstanding Creative Contributions to Computer Graphics and the ACM/SIGCHI Lifetime Achievement Award. 
In 2008, he was elected to the <a href="https://www.nae.edu/">National Academy of Engineering</a> and also received Georgia Tech&rsquo;s highest faculty honor, the Class of 1934 Distinguished Professor Award.</p><p>He co-authored four widely-used graphics textbooks and advised nine students over the course of his 27 years at Tech. Two of them, <a href="https://www.cc.gatech.edu/people/elizabeth-mynatt"><strong>Beth Mynatt</strong></a> and <a href="https://www.cc.gatech.edu/people/melody-jackson"><strong>Melody Moore Jackson</strong></a>, are now IC faculty members.</p><p>His ability to guide students through their academic journeys was clear from the time he joined the College. He was named the &ldquo;most likely to make students want to grow up to be professors&rdquo; in 1992. Mynatt was one of those graduate students.</p><p>&ldquo;I was having trouble,&rdquo; said Mynatt of her graduate school experience. &ldquo;I couldn&rsquo;t find my path, and I was probably a semester away from walking out the door. I walked into his office and asked for a second of his time, said I had my project and my funding and I promised never to bother him, but could he please be my advisor. It was such a tremendous impact on my entire life that he said yes. He&rsquo;s continued to be my advisor every single day since then.&rdquo;</p><p>Mynatt earned her <a href="https://www.cc.gatech.edu/phd-computer-science">Ph.D. in computer science</a> shortly thereafter, in 1995, and joined the faculty at Georgia Tech in 1998.</p><p>Foley came to Georgia Tech in 1991 after being recruited by College of Computing Dean <strong>Peter Freeman</strong> and then-Georgia Tech president <strong>Pat Crecine</strong>. Crecine had been a provost at Carnegie Mellon, where Foley could see the power and influence of having a separate computing college. 
Foley was excited by the vision of the new College of Computing, which was to push beyond traditional computer science to its interaction with other disciplines.</p><p>&ldquo;Pat pushed a broad vision of computing here at Tech,&rdquo; Foley said. &ldquo;He was big on new media and the future of interactive computing, so they recruited me to come and do something here.&rdquo;</p><p>There was already a small but enthusiastic group of faculty devoted to graphics and user interface research at the College of Computing. Foley was able to take that group and the resources provided by the Institute to establish and grow the GVU Center into a nationally prominent organization in an astonishingly short period of time.</p><p>&ldquo;I&rsquo;ve enjoyed all of my 27 years at Georgia Tech, but those five years really stand out to me,&rdquo; Foley said. &ldquo;Those were heady times. There was a lot of excitement. The college was new, the GVU Center was new, we were growing, and we were getting national recognition. It was just very exciting. Everyone was committed, working hard, and making the GVU Center into what it became.&rdquo;</p><p>&ldquo;Jim was, obviously, the instrumental person in founding the GVU Center,&rdquo; said Professor Keith Edwards, the current GVU director. &ldquo;He defined what its mission would be, how its people would work together, and how the community would come together.&rdquo;</p><p>The center changed directors in 1996, when Foley briefly left for Mitsubishi Research, but his impact has been felt continuously in the succeeding years. So much so that in 2008, Mynatt, the GVU Center director at that time, led a fundraising campaign amongst GVU faculty, students, and friends to establish the Foley Scholars Endowment, which funds two $5,000 scholarships awarded annually to GVU-affiliated graduate students.</p><p>&ldquo;I was really overwhelmed by how many contributed to the endowment, and by the continuing contributions,&rdquo; Foley said. 
&ldquo;It has supported 20 awards to some of the strongest GVU students. It&rsquo;s a very humbling experience.&rdquo;</p><h3><strong>&lsquo;Remember How We Got Here&rsquo;</strong></h3><p>Foley&rsquo;s career has gone through a number of metamorphoses over the years. He started out as an electrical engineer at Lehigh University, drawing on his childhood dream of being an engineer &ldquo;of a different kind.&rdquo;</p><p>&ldquo;I had toy trains and did a lot of electrical wiring as a boy,&rdquo; he said. &ldquo;That led me to electrical engineering at Lehigh University.&rdquo;</p><p>There, he was introduced to computers and did some programming. He followed his undergraduate work with a degree in Computer Information and Control Engineering at the University of Michigan, learning about computer graphics and setting up the next stage of his career.</p><p>When he was first getting into computing, there were individuals from various fields &ndash; electrical engineering, math, physics, and more &ndash; beginning to pursue similar fields of study. That experience laid the foundation for his belief in collegiality and collaboration across research areas.</p><p>&ldquo;I&rsquo;ve always been a believer in the power of many,&rdquo; Foley said. &ldquo;Being able to collaborate with others has been a real high point in my career.&rdquo;</p><p>Indeed, that is one thing that attracted him to Georgia Tech. Edwards said Foley helped make Georgia Tech&rsquo;s reputation as a leader in collaboration that much more impressive.</p><p>&ldquo;I think one of the things that gets overlooked is that he was instrumental in defining GVU&rsquo;s culture &ndash; that we could have people from very different disciplines come together, respect each other, learn from each other, and work together. Jim was the role model for how to do this, since he lived it every day, and people emulated him because of that. 
Those seeds really took root because now, 25 years later, I think the cultural influence here in GVU is what he started: An open, collaborative, respectful, and fun group to work with.&rdquo;</p><p>Asked for the message he wants to leave his colleagues and students with at Georgia Tech, Foley offered familiar sentiments.</p><p>&ldquo;Firstly, what I learned from my parents: From my mom, I learned determination and to keep going after my goals,&rdquo; he said. &ldquo;From my dad, I learned to be kind to everyone. Be courteous, be friendly.</p><p>&ldquo;Secondly, I recognize that whatever I&rsquo;ve been able to accomplish has been with the help of many others. None of us have achieved our goals on our own. So, I say to everyone &ndash; to my faculty colleagues, to students, to friends: Remember how we got here, and help others achieve their own goals.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1512509615</created>  <gmt_created>2017-12-05 21:33:35</gmt_created>  <changed>1512509615</changed>  <gmt_changed>2017-12-05 21:33:35</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Retiring School of Interactive Computing Professor Jim Foley looks back on his distinguished career.]]></teaser>  <type>news</type>  <sentence><![CDATA[Retiring School of Interactive Computing Professor Jim Foley looks back on his distinguished career.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-12-05T00:00:00-05:00</dateline>  <iso_dateline>2017-12-05T00:00:00-05:00</iso_dateline>  <gmt_dateline>2017-12-05 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>249711</item>      
</media>  <hg_media>          <item>          <nid>249711</nid>          <type>image</type>          <title><![CDATA[Jim Foley in GVU Center]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[08c1214-p4-032.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/08c1214-p4-032_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/08c1214-p4-032_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/08c1214-p4-032_0.jpg?itok=KiitfD-U]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Jim Foley in GVU Center]]></image_alt>                    <created>1449243795</created>          <gmt_created>2015-12-04 15:43:15</gmt_created>          <changed>1475894929</changed>          <gmt_changed>2016-10-08 02:48:49</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://gvu.gatech.edu]]></url>        <title><![CDATA[GVU Center at Georgia Tech]]></title>      </link>          <link>        <url><![CDATA[http://gvu.gatech.edu/james-d-foley-gvu-center-endowment]]></url>        <title><![CDATA[Foley Scholar Endowment]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="78531"><![CDATA[Jim Foley]]></keyword>          <keyword tid="4887"><![CDATA[GVU Center]]></keyword>          <keyword tid="175331"><![CDATA[Foley Scholars Program]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword 
tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="599503">  <title><![CDATA[IC Professor John Stasko Earns Grant to Explore Future Interfaces for Data Visualization]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The film industry has explored a future in which characters interact with large, projected wall displays through speech, gaze, and gesture. Characters like Tony Stark from the <em>Iron Man</em> franchise or those in <em>Minority Report</em> can perform data exploration and analysis activities using various visualizations by non-haptic means.</p><p>Today&rsquo;s systems for data visualization, which utilize desktop and laptop computers to interact via mouse-driven direct manipulation interfaces following the window-icon-menu-pointer (WIMP) paradigm, pale in comparison to the natural, fluid interactions presented in those futuristic film sequences.</p><p>Through a <a href="https://www.nsf.gov/awardsearch/showAward?AWD_ID=1717111">new grant</a> provided by the <a href="https://www.nsf.gov/index.jsp">National Science Foundation</a> (NSF) Information <a href="https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503303&amp;org=CISE">Integration and Informatics (III) program</a>, School of Interactive Computing Professor <a href="https://www.cc.gatech.edu/people/john-stasko">John Stasko</a> aims to explore, design, develop, and evaluate post-WIMP interfaces for data visualization and data analytics.</p><p>The project, titled <a href="https://www.cc.gatech.edu/gvu/ii/naturalvis/"><em>Creating Natural Data Visualization and Analysis Environments</em></a>, has received three years of funding worth a total of $493,752.</p><p>To move beyond WIMP interfaces, Stasko said, new forms of natural user 
interfaces (NUIs) employing multimodal interactions such as speech, pen, touch, gestures, gaze, and head and body movements must be developed.</p><p>&ldquo;While no one interaction modality may provide all desired capabilities, combinations of modalities &ndash; speech, gaze, and pen, for example &ndash; could provide a more natural, intuitive, and integrated interface experience,&rdquo; Stasko said.</p><p>In this scenario, the system could assist individuals who know the information they want to extract from their data but not the specific commands or interface actions to take in a visualization system to produce the proper charts.</p><p>&ldquo;If the objects to be acted upon are not clear from speech commands, then gaze, gesture, and touch can clarify a person&rsquo;s intent,&rdquo; Stasko said. &ldquo;Furthermore, these input modalities may excel when a conventional mouse and keyboard are not available.&rdquo;</p><p>Stasko is assisted on the project by Ph.D. student <a href="https://www.ic.gatech.edu/content/arjun-srinivasan">Arjun Srinivasan</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1512416055</created>  <gmt_created>2017-12-04 19:34:15</gmt_created>  <changed>1512416055</changed>  <gmt_changed>2017-12-04 19:34:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Through a new grant provided by the National Science Foundation (NSF) Information Integration and Informatics (III) program, School of Interactive Computing Professor John Stasko aims to explore, design, develop, and evaluate interfaces for data analytics]]></teaser>  <type>news</type>  <sentence><![CDATA[Through a new grant provided by the National Science Foundation (NSF) Information Integration and Informatics (III) program, School of Interactive Computing Professor John Stasko aims to explore, design, develop, and evaluate interfaces for data analytics]]></sentence>  <summary><![CDATA[]]></summary>  
<dateline>2017-12-04T00:00:00-05:00</dateline>  <iso_dateline>2017-12-04T00:00:00-05:00</iso_dateline>  <gmt_dateline>2017-12-04 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>599502</item>      </media>  <hg_media>          <item>          <nid>599502</nid>          <type>image</type>          <title><![CDATA[Minority report interface]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Minority Report.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Minority%20Report.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Minority%20Report.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Minority%2520Report.jpg?itok=TmEWoZEE]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Man uses interactive visualization interface]]></image_alt>                    <created>1512415792</created>          <gmt_created>2017-12-04 19:29:52</gmt_created>          <changed>1512415792</changed>          <gmt_changed>2017-12-04 19:29:52</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.cc.gatech.edu/gvu/ii/]]></url>        <title><![CDATA[Information Interfaces]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      
</groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="11632"><![CDATA[john stasko]]></keyword>          <keyword tid="172922"><![CDATA[information visualization]]></keyword>          <keyword tid="176401"><![CDATA[visual analytics]]></keyword>          <keyword tid="33301"><![CDATA[data analytics]]></keyword>          <keyword tid="176402"><![CDATA[minority report]]></keyword>          <keyword tid="9614"><![CDATA[Iron Man]]></keyword>          <keyword tid="362"><![CDATA[National Science Foundation]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="599262">  <title><![CDATA[Wearable Computing Ring Allows Users to Write Words and Numbers with Thumb]]></title>  <uid>27560</uid>  <body><![CDATA[<p>With the whirl of a thumb, Georgia Tech researchers have created technology that allows people to trace letters and numbers on their fingers and see the figures appear on a nearby computer screen. The system is triggered by a thumb ring outfitted with a gyroscope and tiny microphone. As wearers strum their thumb across the fingers, the hardware detects the movement.</p><p>In a <a href="https://www.youtube.com/watch?v=6IIx7nceVeY&amp;feature=youtu.be">video demonstration</a>, the &ldquo;written&rdquo; figures appear on an adjacent screen. 
In the future, the researchers say the technology could be used to send phone calls to voicemail or answer text messages &mdash; all without the wearer reaching for their phone or even looking at it.</p><p>&ldquo;When a person grabs their phone during a meeting, even if trying to silence it, the gesture can infringe on the conversation or be distracting,&rdquo; said Thad Starner, the Georgia Tech School of Interactive Computing professor leading the project. &ldquo;But if they can simply send the call to voicemail, perhaps by writing an &lsquo;x&rsquo; on their hand below the table, there isn&rsquo;t an interruption.&rdquo;</p><p>Starner also says the technology could be used in virtual reality, replacing the need to take off a head-mounted device in order to input commands via a mouse or keyboard.</p><p>The research team wanted to build a system that would always be available and easy to use.</p><p>&ldquo;A ring augments the fingers in a way that is fairly non-obstructive during daily activities. A ring is also socially acceptable, unlike other wearable input devices,&rdquo; said Cheng Zhang, the Georgia Tech graduate student who created the technology.</p><p>The system is called FingerSound. While other gesture-based systems require the user to perform gestures in the air, FingerSound uses the fingers as a canvas. This allows the system to clearly recognize the beginning and end of an intended gesture by using the microphone and gyroscope to detect the signal. In addition to helping recognize the start and end of a gesture, it also provides tactile feedback while performing the gestures. This feedback is crucial for the user experience and is missing from other in-air gesture systems.</p><p>&ldquo;Our system uses sound and movement to identify intended gestures, which improves the accuracy compared to a system just looking for movements,&rdquo; said Zhang. &ldquo;For instance, to a gyroscope, random finger movements during walking may look very similar to the thumb gestures. 
But based on our investigation, the sounds caused by these daily activities are quite different from each other.&rdquo; &nbsp;</p><p>Fingersound sends the sound captured by the contact microphone and motion data captured by the gyroscope sensor through multiple filtering mechanisms. The system then analyzes it to determine whether a gesture was performed or whether it was simply noise from other finger-related activity.</p><p>The research was presented earlier this year at Ubicomp and the ACM International Symposium on Wearable Computing along with two other papers that feature ring-based gesture technology. <a href="https://www.youtube.com/watch?v=GTiisg_gqwA&amp;feature=youtu.be">FingOrbits</a> allows the wearer to control apps on a smartwatch or head-mounted display by rubbing their thumb on their hand. With <a href="https://www.youtube.com/watch?v=m-i3HJrNc0A&amp;feature=youtu.be">SoundTrak</a>, people can write words or 3-D doodles in the air by localizing the absolute position of the finger in 3-D space, then see the results simultaneously on a computer screen.</p><p>The new technologies were developed by the same team that created a technique that <a href="http://www.news.gatech.edu/2017/01/24/new-techniques-allow-greater-control-smartwatches">allowed smartwatch wearers to control their device by tapping its sides</a>.</p>]]></body>  <author>Jason Maderer</author>  <status>1</status>  <created>1511976884</created>  <gmt_created>2017-11-29 17:34:44</gmt_created>  <changed>1512135926</changed>  <gmt_changed>2017-12-01 13:45:26</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Technology allows people to trace letters and numbers on their fingers and see the figures appear on a nearby computer screen.]]></teaser>  <type>news</type>  <sentence><![CDATA[Technology allows people to trace letters and numbers on their fingers and see the figures appear on a nearby computer screen.]]></sentence>  <summary><![CDATA[<p>With the whirl of a thumb, 
Georgia Tech researchers have created technology that allows people to trace letters and numbers on their fingers and see the figures appear on a nearby computer screen. The system is triggered by a thumb ring outfitted with a gyroscope and tiny microphone. As wearers strum their thumb across the fingers, the hardware detects the movement.</p>]]></summary>  <dateline>2017-11-29T00:00:00-05:00</dateline>  <iso_dateline>2017-11-29T00:00:00-05:00</iso_dateline>  <gmt_dateline>2017-11-29 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[Technology provides eyes-free way to interact with smart devices]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[maderer@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Jason Maderer<br />National Media Relations<br />maderer@gatech.edu<br />404-660-2926</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>599254</item>      </media>  <hg_media>          <item>          <nid>599254</nid>          <type>image</type>          <title><![CDATA[FingerSound Number Illustrations ]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[gestures_2.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/gestures_2.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/gestures_2.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/gestures_2.png?itok=CN5CUz0S]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[FingerSound Gestures ]]></image_alt>                    <created>1511975225</created>          <gmt_created>2017-11-29 17:07:05</gmt_created>          <changed>1511975225</changed>          <gmt_changed>2017-11-29 17:07:05</gmt_changed>      </item>      </hg_media>  <related>          <link>        
<url><![CDATA[http://www.news.gatech.edu/2017/01/24/new-techniques-allow-greater-control-smartwatches]]></url>        <title><![CDATA[Previous Research with Smartwatches]]></title>      </link>          <link>        <url><![CDATA[https://www.ic.gatech.edu/]]></url>        <title><![CDATA[School of Interactive Computing]]></title>      </link>          <link>        <url><![CDATA[https://www.cc.gatech.edu/home/thad/]]></url>        <title><![CDATA[Meet Thad Starner]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1214"><![CDATA[News Room]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="10353"><![CDATA[wearable computing]]></keyword>          <keyword tid="176353"><![CDATA[FingerSound]]></keyword>          <keyword tid="176354"><![CDATA[finger]]></keyword>          <keyword tid="1944"><![CDATA[Thad Starner]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="598129">  <title><![CDATA[GT Computing Faculty and Alum Awarded ASSETS Paper Impact Award]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Professors <strong>Keith Edwards</strong> and <strong>Beth Mynatt</strong> were given the 2017 ASSETS Paper Impact Award for their 1994 paper <em>Providing Access to Graphical User Interfaces &ndash; Not Graphical Screens</em>.</p><p>The award is given every other year to the authors of a paper from the ASSETS conference that was 
presented at least 10 years ago, and has had significant and sustained impact in the literature.</p><p>College of Computing and GVU Center alum <strong>Kathryn Stockton</strong> (M.S. CS, &rsquo;94) was a co-author of the paper and was also recognized for her contributions.</p><p>Edwards and Mynatt, the current and former directors of the GVU Center, were presented the award at this year&rsquo;s <a href="https://assets17.sigaccess.org/">ASSETS conference</a>, taking place this week in Baltimore, Md. Each received a plaque, and the team was awarded a cash prize of $500.</p><p>The awarded paper highlighted the Mercator project, which had a significant and lasting impact on the accessibility of graphical user interfaces. It was foundational in enabling and setting the direction of screen reader technology for the <a href="https://en.wikipedia.org/wiki/X_Window_System">X Window System</a>, and in opening up opportunities for assistive technology.</p><p>The paper was one of the first to raise and tackle the challenge of providing screen reader capabilities in graphical user interfaces. 
It proposed that translation of the GUI should be done at a semantic, rather than syntactic, level.</p><p>The work includes several ideas that have proven to be important and influential in accessibility, including the use of auditory icons to represent different objects, audio formatting to convey status and other properties, and hierarchical modelling of containment and cause-effect relationships between interface objects.</p><p>The notion of defining user interfaces at an abstract level to allow for realization in many forms has been a major research thread in accessibility, leading to the development of several standards and underpinning ongoing efforts to develop personalized user interfaces.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1509462353</created>  <gmt_created>2017-10-31 15:05:53</gmt_created>  <changed>1509462353</changed>  <gmt_changed>2017-10-31 15:05:53</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[School of Interactive Computing Professors Keith Edwards and Beth Mynatt were given the 2017 ASSETS Paper Impact Award for their 1994 paper Providing Access to Graphical User Interfaces – Not Graphical Screens.]]></teaser>  <type>news</type>  <sentence><![CDATA[School of Interactive Computing Professors Keith Edwards and Beth Mynatt were given the 2017 ASSETS Paper Impact Award for their 1994 paper Providing Access to Graphical User Interfaces – Not Graphical Screens.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-10-31T00:00:00-04:00</dateline>  <iso_dateline>2017-10-31T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-10-31 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  
<boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>598128</item>      </media>  <hg_media>          <item>          <nid>598128</nid>          <type>image</type>          <title><![CDATA[Keith Edwards and Beth Mynatt Impact Award]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Impact Award.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Impact%20Award.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Impact%20Award.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Impact%2520Award.jpeg?itok=n_w5LE0K]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Beth Mynatt and Keith Edwards receive ASSETS Impact Award]]></image_alt>                    <created>1509462218</created>          <gmt_created>2017-10-31 15:03:38</gmt_created>          <changed>1509462218</changed>          <gmt_changed>2017-10-31 15:03:38</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="13541"><![CDATA[Keith Edwards]]></keyword>          <keyword tid="10989"><![CDATA[Beth Mynatt]]></keyword>          <keyword tid="56611"><![CDATA[ASSETS]]></keyword>          <keyword tid="4887"><![CDATA[GVU Center]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>          <term 
tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="597580">  <title><![CDATA[College of Computing Makes a Splash at GHC 2017 Orlando]]></title>  <uid>27998</uid>  <body><![CDATA[<p>A group of 57 College of Computing students recently traveled to Orlando, Fla., to attend the 2017 <a href="https://ghc.anitab.org/">Grace Hopper Celebration of Women in Computing (GHC)</a> as representatives of Georgia Tech.</p><p>The celebration was held at the Orange County Convention Center from Oct. 4-6 and welcomed more than 17,000 female technologists from across the globe, as well as more than 100 companies in attendance to recruit the top talent in the tech industry. Keynote speakers included Melinda Gates of the Gates Foundation, Fei-Fei Li of Stanford University AI Lab, and Georgia Tech&rsquo;s own <a href="http://robotics.gatech.edu/faculty/howard">Ayanna Howard</a> of the School of Electrical and Computer Engineering, among many other inspiring female leaders in STEM fields.</p><p>The College of Computing is a platinum-level sponsor of Grace Hopper Celebration and sends a number of scholarship attendees to the conference each year. Among this year&rsquo;s 57 student attendees from Georgia Tech, 40 were on-campus undergraduate and graduate students based in Atlanta and 17 were online M.S. in Computer Science (OMS CS) students. 
An additional 15 current Georgia Tech graduate students attended as recruiters for their companies or through outside scholarships with companies like Microsoft and Disney.</p><p>&ldquo;If Georgia Tech wants to be known for our efforts to support women in computing, it&rsquo;s important for us to have a presence at the nation&rsquo;s foremost gathering of female technologists &ndash; so that we can be allies with a passion for creating a diverse workforce to meet the growing needs of the industry,&rdquo; said Jennifer Whitlow, director of computing enrollment in the College of Computing. &ldquo;The conference provides current female computing students with amazing opportunities to network with others who share a similar background and pathway in the field, as well as the opportunity to seek career and graduate school opportunities with companies and universities from across the nation.&rdquo;</p><p>The OMS CS students in attendance traveled from all over the United States &ndash; from San Francisco to Washington, D.C., to Las Vegas &ndash; and Canada. Student Rwithu Menon even traveled from Bangalore, India, to attend and meet her fellow students, with plans to make a pit stop in Atlanta on her way home to see Georgia Tech&rsquo;s campus for the first time.</p><p>College of Computing attendees participated in sessions and talks about their fields of interest, interviewed for jobs and internships with top tech companies (some even receiving job offers on the spot), and gathered at an all-GT Computing reception on Thursday, Oct. 5. The reception included a surprise visit from Charles Isbell, executive associate dean and professor in the College of Computing.</p><p>&ldquo;Attending Grace Hopper with Georgia Tech was a great opportunity to meet lots of talented women in tech and to hear their stories and experiences, ups and downs,&rdquo; said Azade Sanjari, a current OMS CS student from California. 
&ldquo;Also, I was finally able to meet other students from the OMS CS program in person! We talked about our experiences with our courses and our plans for the future. It made me even more determined to complete the program and hopefully start my career path in machine learning.&rdquo;</p><p>If you&rsquo;d like to learn more about the experiences of women in computing at Georgia Tech and the significance of Grace Hopper, you can view our <a href="https://youtu.be/YABHaUePscU">#SheisGTComputing video</a> or explore <a href="https://anitab.org/">https://anitab.org/</a>.</p>]]></body>  <author>Brittany Aiello</author>  <status>1</status>  <created>1508356251</created>  <gmt_created>2017-10-18 19:50:51</gmt_created>  <changed>1508356351</changed>  <gmt_changed>2017-10-18 19:52:31</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A group of 57 College of Computing students recently traveled to Orlando, Fla., to attend the 2017 Grace Hopper Celebration of Women in Computing (GHC) as representatives of Georgia Tech.]]></teaser>  <type>news</type>  <sentence><![CDATA[A group of 57 College of Computing students recently traveled to Orlando, Fla., to attend the 2017 Grace Hopper Celebration of Women in Computing (GHC) as representatives of Georgia Tech.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-10-18T00:00:00-04:00</dateline>  <iso_dateline>2017-10-18T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-10-18 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[baiello@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Brittany Aiello</p><p>OMS CS Communications</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>597581</item>      </media>  <hg_media>          <item>          <nid>597581</nid>          <type>image</type>          <title><![CDATA[GHC 2017 Group Photo]]></title> 
         <body><![CDATA[]]></body>                      <image_name><![CDATA[ghc-17-group-photo.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ghc-17-group-photo.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ghc-17-group-photo.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ghc-17-group-photo.jpg?itok=R4ltX1ox]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[GHC 2017 Group Photo]]></image_alt>                    <created>1508356324</created>          <gmt_created>2017-10-18 19:52:04</gmt_created>          <changed>1508356324</changed>          <gmt_changed>2017-10-18 19:52:04</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1305"><![CDATA[Georgia Tech Academic Advising Network (GTAAN)]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="431631"><![CDATA[OMS]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="172628"><![CDATA[GHC]]></keyword>          <keyword tid="8471"><![CDATA[grace hopper]]></keyword>          <keyword tid="45501"><![CDATA[Grace Hopper Celebration]]></keyword>          <keyword tid="175977"><![CDATA[Grace Hopper 2017]]></keyword>          <keyword tid="8469"><![CDATA[women in computing]]></keyword>          <keyword tid="175978"><![CDATA[#sheisgtcomputing]]></keyword>          <keyword tid="825"><![CDATA[Ayanna Howard]]></keyword>          <keyword tid="175979"><![CDATA[jennifer whitlow]]></keyword>          <keyword tid="66341"><![CDATA[OMS CS]]></keyword>          <keyword tid="69631"><![CDATA[Online Master of Science in Computer Science]]></keyword>      
</keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="597147">  <title><![CDATA[Foley Finalist Pavalanathan Turns Chaos into Computing Success]]></title>  <uid>33939</uid>  <body><![CDATA[<p>As a child, <a href="https://www.cc.gatech.edu/~upavalan/"><strong>Umashanthi Pavalanathan</strong></a> had a morning routine.</p><p>She and the other four members of her immediate family would wake up and get ready. As a group, they would walk out of the house in the direction of her school, not knowing whether all five would meet again at the end of the day.</p><p>An uncertainty foreign to most growing up inside the United States, it was a way of life for Pavalanathan, who grew up during the height of a violent civil war in Sri Lanka that claimed the lives of more than 100,000 and displaced nearly one million.</p><p>&ldquo;It just happened,&rdquo; says Pavalanathan of the sporadic bombing that affected the northern area of Sri Lanka, where she lived. &ldquo;It was normal. It was part of our life. We couldn&rsquo;t ... [just] stay home and stop living our lives. We had to go.&rdquo;&nbsp;</p><p>And, so, every day, Pavalanathan and her family went through that same routine, committed to pursuing one thing they felt could help them achieve a better future: an education.</p><p>That commitment put Pavalanathan on a path that has led her to Georgia Tech&rsquo;s School of Interactive Computing, where she has contributed to important research in computational sociolinguistics and was recently named a <a href="http://gvu.gatech.edu/index.php?q=james-d-foley-gvu-center-endowment">Foley Scholars</a> finalist.</p><h4><strong>Hope in the Dark</strong></h4><p>The violence began in 1983 and lasted until 2009. 
Pavalanathan was born in 1986 and didn&rsquo;t move to the United States until 2011, meaning the vast majority of her life was spent living under these tenuous circumstances.</p><p>&ldquo;It was scary,&rdquo; she admits. &ldquo;It was just the uncertainty. To see friends and family members, and you don&rsquo;t know what&rsquo;s going to happen.&rdquo;</p><p>One thing she had to look forward to, though, was her education. It was a common point of emphasis among parents of Tamil families living in northern Sri Lanka: Vigorously pursue an education in order to advance to a university with the hope of preparing for a better future when the country regains peace and stability.</p><p>&ldquo;Even with the hopelessness, there was some hope that someday things can change through education,&rdquo; says Pavalanathan. &ldquo;So, we had a goal. Even though there were bombings, and friends and neighbors were being killed, we had a goal.&rdquo;</p><p>There were challenges. Beyond the obvious &ndash; the bombings and displacement &ndash; Pavalanathan also grew up without electricity until she was 12 years old. They had kerosene lamps they would use at night. The scarcity led to innovation, she says, as people would come up with novel ways to limit the amount of kerosene being used.</p><p>She was displaced for an extended period of time once in 1995, when she was 9 years old. A heavy attack forced her family and many others out of their homes for about six months. Living as a refugee within her own country, she attended school in the evenings to keep up with everyone else her age.</p><p>Along the way, she was introduced to computing.</p><p>Before high school, she saw her first computer at an exhibition at a university. There, she learned about the internet.</p><p>&ldquo;I was excited about it,&rdquo; she says. &ldquo;That was something I enjoyed.&rdquo;</p><p>When she was about 12 years old, her family had electricity for the first time. 
It wasn&rsquo;t available 24 hours a day, so when her family got its first computer two years later she was only able to use it at certain times of the day.</p><p>&ldquo;The dial-up was faster at night, so I would stay up late and try to do as much as I could,&rdquo; she said. &ldquo;I enjoyed solving problems in that way. That&rsquo;s when I knew I wanted to do something in computing.&rdquo;</p><h4><strong>A Helpful Challenge</strong></h4><p>The story might have ended there, if not for the high school she attended in Sri Lanka. It was a missionary school that focused on more than just standard education.</p><p>There were extracurricular activities like sports and fine arts, things that pushed Pavalanathan to be more outgoing.</p><p>&ldquo;I think I realized that [as an undergraduate] when I met students from other top schools in my hometown known for good grades that there was a difference in the way I was brought up,&rdquo; says Pavalanathan. &ldquo;Many could do well in exams, but they couldn&rsquo;t present themselves or go up and speak to people. But I think my school was very influential in giving us the challenge in those areas.&rdquo;</p><p>Those were vital skills that she says helped her when she made the big decision to come to the United States following her undergraduate studies. Taking that path was considered very much outside the norm in her family.</p><p>&ldquo;I asked my dad not to tell that to the relatives, because they were brought up in a strict, tight-knit environment,&rdquo; she says. &ldquo;It was something that wasn&rsquo;t really accepted by everyone.&rdquo;</p><p>However, when she made it to the United States, first as a visiting scholar at Indiana University and then as a Ph.D. student at Georgia Tech, she found that things came naturally.</p><p>She had a number of Ph.D. offers from other schools&nbsp;but says she chose Georgia Tech because of the welcoming and diverse environment in the School of Interactive Computing. 
Also key to her decision was the relationship she established with her advisor, <a href="https://www.cc.gatech.edu/people/jacob-eisenstein"><strong>Jacob Eisenstein</strong></a>, and the research she was able to pursue.</p><h4><strong>Pursuing Impactful Research</strong></h4><p>Pavalanathan&rsquo;s research is focused on the field of <a href="https://www.ic.gatech.edu/content/social-computing-computational-journalism">computational sociolinguistics</a>, a fusion of computer science, social computing, and natural language processing that studies the relationship between language and society in a computational way.</p><p>Sociolinguists have long studied the impact context has on the development of language, but only recently have they had large online social systems like Facebook, Twitter, and others to observe natural communication on a large scale.</p><p>Pavalanathan is interested in studying why and how people say the things they do in a given context.</p><p>&ldquo;When we speak, we have non-verbal cues,&rdquo; says Pavalanathan. &ldquo;Those don&rsquo;t exist in writing, so we are trying to invent new ways.</p><p>&ldquo;Say that they&rsquo;re happy or sad or want to argue. Sometimes they&rsquo;ll use all capitals or punctuation. On Twitter, you&rsquo;ll see repeating characters. That&rsquo;s not just random. There is a reason people do this.&rdquo;</p><p>Additionally, she examines how language changes with the audience. 
For example, when someone speaks directly to a peer on Twitter, they will speak in a different manner, likely more informal, than they would to a broader audience.</p><p>&ldquo;We see that in face-to-face communication, as well,&rdquo; she says.</p><p>One paper published last year examined whether the introduction of emojis on Twitter changed users&rsquo; writing style compared with the older emoticons.</p><p>The ultimate goal of the research is to improve language tools to make them more aware of linguistic patterns in different social contexts.</p><p>&ldquo;But we&rsquo;re still a long way from that,&rdquo; says Pavalanathan. &ldquo;We are trying to understand the patterns of variation in online language, and this could potentially help us to improve language processing tools in the future.&rdquo;</p><p>The work earned her recognition as a finalist in the 2017 Foley Scholars program. Winners will be announced at the <a href="http://gvu.gatech.edu/gvu-25-program">GVU 25<sup>th</sup> Anniversary</a> celebration on Oct. 18 at the Tech Square Research Building.</p><p>&ldquo;It was a nice surprise to be selected as a finalist,&rdquo; she says. 
&ldquo;It really validates the work that we&rsquo;ve done.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1507637364</created>  <gmt_created>2017-10-10 12:09:24</gmt_created>  <changed>1507837912</changed>  <gmt_changed>2017-10-12 19:51:52</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Growing up during height of civil war in Sri Lanka, IC student Umashanthi Pavalanathan pursues success in education.]]></teaser>  <type>news</type>  <sentence><![CDATA[Growing up during height of civil war in Sri Lanka, IC student Umashanthi Pavalanathan pursues success in education.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-10-10T00:00:00-04:00</dateline>  <iso_dateline>2017-10-10T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-10-10 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>597322</item>      </media>  <hg_media>          <item>          <nid>597322</nid>          <type>image</type>          <title><![CDATA[Umashanthi Pavalanathan update]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[umashanthi_main_updated.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/umashanthi_main_updated.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/umashanthi_main_updated.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/umashanthi_main_updated.jpg?itok=5zH1eeyx]]></image_740>            
<image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Umashanthi Pavalanathan]]></image_alt>                    <created>1507837849</created>          <gmt_created>2017-10-12 19:50:49</gmt_created>          <changed>1507837849</changed>          <gmt_changed>2017-10-12 19:50:49</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://gvu.gatech.edu/index.php?q=james-d-foley-gvu-center-endowment]]></url>        <title><![CDATA[James D. Foley Scholars Program]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="175317"><![CDATA[umashanthi pavalanathan]]></keyword>          <keyword tid="175331"><![CDATA[Foley Scholars Program]]></keyword>          <keyword tid="175862"><![CDATA[computational sociolinguistics]]></keyword>          <keyword tid="111941"><![CDATA[Jacob Eisenstein; Twitter]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="596987">  <title><![CDATA[Seeing is Believing: Georgia Tech Becoming a Leader in Visualization, Visual Analytics]]></title>  <uid>33939</uid>  <body><![CDATA[<p>For a long time, School of Interactive Computing (IC) Professor <a href="https://www.cc.gatech.edu/people/john-stasko"><strong>John Stasko</strong></a> was information visualization and visual analytics at Georgia Tech. 
After joining the faculty in 1989, he spent the better part of two decades as a one-man shop, teaching and leading research with just a handful of graduate students at a time.</p><p>Recent years, however, have seen a steep rise in interest in the field, and Georgia Tech has positioned itself as a national leader.</p><p>That leadership is again on display this week at <a href="http://ieeevis.org/">IEEE VIS 2017</a> in Phoenix, Ariz., where Georgia Tech is presenting six conference papers, two journal articles, six workshop papers, and six posters across the multiple co-located conferences &ndash; Visual Analytics Science and Technology (VAST), Information Visualization (InfoVis), and Scientific Visualization (SciVis), among others.</p><p>&ldquo;Most universities have a vis person,&rdquo; Stasko explained. &ldquo;Maybe one. There are a few places that have more than that. With the faculty and resources we have now, I think we&rsquo;re among the biggest presences out there. At the visualization conferences, people know about us. 
They know we&rsquo;re a force.&rdquo;</p><h4><strong>Rising Interest Leads to Diverse Research</strong></h4><p>A number of indicators point to the rising emphasis on the field, both in academia and beyond.</p><p>For one, there has been tremendous growth in course enrollments at Georgia Tech.</p><p>&ldquo;If that&rsquo;s an indicator of interest, then, yes, the appetite is definitely there,&rdquo; said Associate Professor <a href="https://www.cc.gatech.edu/people/rahul-basole"><strong>Rahul Basole</strong></a>, who joined IC in 2012.</p><p>CS 4460, Introduction to Information Visualization, is taught every semester &ndash; spring, summer, and fall &ndash; frequently garnering over 100 students in each session.</p><p>&ldquo;And there&rsquo;s even higher demand than that,&rdquo; Stasko said.</p><p>Beyond that, though, there are endless fields that utilize or could utilize expertise in visual analytics and information visualization, from health care to financial technology, sports to public policy, international affairs, and more.</p><p>&ldquo;Data can come from anything,&rdquo; said IC Assistant Professor <a href="https://www.cc.gatech.edu/people/alex-endert"><strong>Alex Endert</strong></a>, who came to Georgia Tech in 2013. &ldquo;More and more domains are becoming data-driven. They&rsquo;re collecting data, and they&rsquo;re saying, &lsquo;How do we make sense of this? What do I know now that I didn&rsquo;t know before?&rsquo; I think that&rsquo;s where vis plays a big role.&rdquo;</p><p>With the support of former IC school chair Annie Ant&oacute;n and others, the Georgia Tech Visualization Lab grew five-fold over the course of the past decade.</p><p>Professor <a href="https://www.cc.gatech.edu/people/james-foley"><strong>Jim Foley</strong></a> began working in information visualization with Stasko about 10 years ago, focusing on teaching the 4460 undergraduate course. 
Basole, Endert, and School of Computational Science &amp; Engineering (CSE) Assistant Professor <a href="https://www.cc.gatech.edu/people/polo-chau"><strong>Polo Chau</strong></a> joined the mix shortly thereafter. Others, like CSE Professor <a href="https://www.cc.gatech.edu/people/haesun-park"><strong>Haesun Park</strong></a>, regularly contribute to research in the field, as well. Each brings what Basole called a &ldquo;slightly different flavor,&rdquo; providing well-rounded resources for potential students and industry partners.</p><p>Basole looks at the field through the lens of enterprise, mapping complex markets and providing organizational-level visualizations. Endert comes at it from angles of human-computer interaction, machine learning, and data mining, among others. Stasko saw the skyrocketing amount of available data, fueled by the growth of the internet, and became focused on providing tools to analyze and understand these data sets.</p><p>&ldquo;More students are joining because we have such a diverse set of research areas that are complementary to each other,&rdquo; Basole said.</p><p>&ldquo;Georgia Tech has the perfect culture for collaborative research,&rdquo; Chau added. &ldquo;Students are encouraged to collaborate to innovate across disciplines. Faculty can easily work across schools and colleges and with industry partners.&rdquo;</p><p>Diverse expertise means diverse areas of study for students at every level, as well. 
Six classes examining different areas of information visualization and visual and data analytics are offered to both undergraduate and graduate students at Georgia Tech:</p><ul><li>CS 4460</li><li>CS 7450 &ndash; Information Visualization, offered every fall</li><li>CS 8803 CV &ndash; Data Visualization: Principles and Applications, a new course that began last spring primarily for Scheller College of Business MBA students and those in the one-year Data Analytics master&rsquo;s program</li><li>CS 8803 VDA &ndash; Visual Data Analysis</li><li>CS 8803 VEA &ndash; Visual Enterprise Analytics</li><li>CX 4242/CSE 6242 &ndash; Data and Visual Analytics, a combined undergraduate and graduate course Chau offers in CSE that draws between 150 and 200 students per term</li></ul><p>&ldquo;I can confidently say that if you come here interested in visualization and you take the number of courses that we have available, you are more than likely to leave with a well-rounded education in what it means to do visualization,&rdquo; Endert said. &ldquo;I don&rsquo;t know many other universities that can make that claim.&rdquo;</p><h4><strong>Opportunities for Industry</strong></h4><p>Beyond the resources the College offers in the academic setting, which also include a spacious lab and equipment for use by students and researchers, faculty members see a future that could also face outward to the greater Atlanta landscape.</p><p>Basole pointed to the growth in associated industry in Atlanta &ndash; like the NCR Corporation, which is building a new headquarters in Technology Square, and audit, tax, and advisory firm KPMG, which is opening an innovation hub in Midtown &ndash; as an opportunity for collaboration.</p><p>And then, of course, there is a rise in visualization in areas like public policy and the news media.</p><p>&ldquo;For them to understand all the things we are doing in here, that would be incredibly beneficial,&rdquo; Basole said. 
&ldquo;If industry understood the capabilities we have in analyzing data and making it more accessible to everyone, that would be a win-win for everyone.&rdquo;</p><p>In the meantime, they will take advantage of the resources they have to lead the way in research that pushes the boundaries of the field.</p><p>&ldquo;We&rsquo;re in an envious position,&rdquo; Stasko said.</p><p>&ldquo;We have opportunities coming from inside and outside, industry and government. The ability for us to digest and be able to deliver on that is the biggest challenge. We would love to continue to attract more bright Ph.D. students to the program. That is essential, and will allow us to explore areas that haven&rsquo;t really been explored before.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1507210696</created>  <gmt_created>2017-10-05 13:38:16</gmt_created>  <changed>1507210696</changed>  <gmt_changed>2017-10-05 13:38:16</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The Georgia Tech Visualization Lab has grown by leaps and bounds over the past decade, becoming a national thought leader in the field.]]></teaser>  <type>news</type>  <sentence><![CDATA[The Georgia Tech Visualization Lab has grown by leaps and bounds over the past decade, becoming a national thought leader in the field.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-10-05T00:00:00-04:00</dateline>  <iso_dateline>2017-10-05T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-10-05 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>596985</item>      </media>  <hg_media>          
<item>          <nid>596985</nid>          <type>image</type>          <title><![CDATA[Vis lab 2]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Vis Lab.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Vis%20Lab.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Vis%20Lab.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Vis%2520Lab.jpg?itok=rDnmSIXt]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech visualization lab]]></image_alt>                    <created>1507210394</created>          <gmt_created>2017-10-05 13:33:14</gmt_created>          <changed>1507210394</changed>          <gmt_changed>2017-10-05 13:33:14</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://vis.gatech.edu/]]></url>        <title><![CDATA[Georgia Tech Visualization Lab]]></title>      </link>          <link>        <url><![CDATA[http://poloclub.gatech.edu/cse6242/2017fall/]]></url>        <title><![CDATA[Data and Visual Analytics]]></title>      </link>          <link>        <url><![CDATA[http://va.gatech.edu/]]></url>        <title><![CDATA[Visual Analytics Lab]]></title>      </link>          <link>        <url><![CDATA[https://www.cc.gatech.edu/gvu/ii/]]></url>        <title><![CDATA[Information Interfaces Group]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  
<keywords>          <keyword tid="175808"><![CDATA[georgia tech visualization lab]]></keyword>          <keyword tid="11632"><![CDATA[john stasko]]></keyword>          <keyword tid="112421"><![CDATA[alex endert]]></keyword>          <keyword tid="53931"><![CDATA[Rahul Basole]]></keyword>          <keyword tid="83261"><![CDATA[Polo Chau]]></keyword>          <keyword tid="78531"><![CDATA[Jim Foley]]></keyword>          <keyword tid="10475"><![CDATA[Haesun Park]]></keyword>          <keyword tid="7257"><![CDATA[visualization]]></keyword>          <keyword tid="172922"><![CDATA[information visualization]]></keyword>          <keyword tid="175777"><![CDATA[ieee vis 2017]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="596952">  <title><![CDATA[IC Researchers Earn Test of Time Award for VAST 2007 Paper]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Professor <a href="https://www.cc.gatech.edu/people/john-stasko"><strong>John Stasko</strong></a> and three co-authors were presented with one of five Test of Time awards Tuesday at the <a href="http://ieeevis.org/">IEEE VIS 2017</a> conference in Phoenix, Ariz., for research presented at the Visual Analytics Science and Technology (VAST) 2007 conference.</p><p>The paper of note, titled <a href="https://www.cc.gatech.edu/~stasko/papers/vast07-jigsaw.pdf"><em>Jigsaw: Supporting Investigative Analysis through Interactive Visualization</em></a>, was co-authored by Stasko, <strong>Carsten G&ouml;rg</strong>, <strong>Zhicheng Liu</strong>, and <strong>Kanupriya Singhal</strong>.</p><p>The team&rsquo;s 2007 research developed a visual analytic system, called Jigsaw, which addresses a challenge investigative analysts face when working with large collections of text 
documents: Connecting embedded threads of evidence to formulate hypotheses. As the number of documents and concepts in such cases grows larger, making sense of the information becomes more difficult.</p><p>Jigsaw represents documents and their contents visually in order to help analysts examine reports more efficiently and develop theories about potential actions more quickly. The system performs rudimentary text analysis including sentiment detection, similarity comparison, and clustering, among other tasks, on the documents and then provides multiple interactive visualizations of the documents&rsquo; text. It provides multiple coordinated views with emphasis on visually illustrating connections between entities across the different documents.</p><p>&ldquo;This was probably the biggest project in my lab, maybe, over my entire career in terms of how many students were on it,&rdquo; Stasko said. &ldquo;It was a big effort for maybe seven or eight years, and this paper was our first introduction of the idea.&rdquo;</p><p>Stasko&rsquo;s lab published a number of subsequent papers relating to the system (a list can be found <a href="https://www.cc.gatech.edu/gvu/ii/jigsaw/">here</a>).</p><p>IEEE VIS is being held Oct. 
1-6 in Phoenix, Ariz., and includes a number of co-located conferences and programs, including IEEE VAST, IEEE Information Visualization, and IEEE Scientific Visualization.</p><p>A full account of Georgia Tech&rsquo;s participation at the conference can be found <a href="https://www.ic.gatech.edu/news/596888/vis-2017-georgia-tech-visualization-research-expands-new-paths-understanding-data">here</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1507139861</created>  <gmt_created>2017-10-04 17:57:41</gmt_created>  <changed>1507139861</changed>  <gmt_changed>2017-10-04 17:57:41</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[School of Interactive Computing Professor John Stasko and three co-authors were presented with one of five Test of Time awards Tuesday at the IEEE VIS 2017 conference in Phoenix, Ariz., for research presented at the VAST 2007 conference.]]></teaser>  <type>news</type>  <sentence><![CDATA[School of Interactive Computing Professor John Stasko and three co-authors were presented with one of five Test of Time awards Tuesday at the IEEE VIS 2017 conference in Phoenix, Ariz., for research presented at the VAST 2007 conference.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-10-04T00:00:00-04:00</dateline>  <iso_dateline>2017-10-04T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-10-04 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>596950</item>      </media>  <hg_media>          <item>          <nid>596950</nid>          <type>image</type>          <title><![CDATA[IEEE VIS 2017 Test of Time Award 3]]></title>     
     <body><![CDATA[]]></body>                      <image_name><![CDATA[test of time award 2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/test%20of%20time%20award%202.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/test%20of%20time%20award%202.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/test%2520of%2520time%2520award%25202.jpg?itok=MdIJMv-U]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Co-authors Zicheng Liu, Carsten Görg, and John Stasko display their Test of Time award at IEEE VIS 2017]]></image_alt>                    <created>1507139575</created>          <gmt_created>2017-10-04 17:52:55</gmt_created>          <changed>1507139575</changed>          <gmt_changed>2017-10-04 17:52:55</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.cc.gatech.edu/~stasko/papers/vast07-jigsaw.pdf]]></url>        <title><![CDATA[Jigsaw: Supporting Investigative Analysis through Interactive Visualization]]></title>      </link>          <link>        <url><![CDATA[http://www.ic.gatech.edu/news/596888/vis-2017-georgia-tech-visualization-research-expands-new-paths-understanding-data]]></url>        <title><![CDATA[Georgia Tech at IEEE VIS 2017]]></title>      </link>          <link>        <url><![CDATA[http://ieeevis.org/year/2017/info/awards/test-of-time-awards]]></url>        <title><![CDATA[IEEE VIS 2017 Test of Time Awards]]></title>      </link>          <link>        <url><![CDATA[https://vis.gatech.edu/]]></url>        <title><![CDATA[Georgia Tech Visualization Lab]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group 
id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="11632"><![CDATA[john stasko]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="172922"><![CDATA[information visualization]]></keyword>          <keyword tid="175784"><![CDATA[vast 2007]]></keyword>          <keyword tid="175777"><![CDATA[ieee vis 2017]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="596888">  <title><![CDATA[Vis 2017: Georgia Tech Visualization Research Expands New Paths to Understanding Data ]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Tech researchers are presenting new techniques and research for information visualization and visual analytics this week, Oct. 1-6, at the IEEE Vis 2017 conference in Phoenix, Ariz.&nbsp;</p><p>Georgia Tech research is led by School of Interactive Computing faculty and students, and also includes School of Computational Science and Engineering researchers.&nbsp;</p><p>Among the 20 Georgia Tech papers and posters in the technical program at InfoVis and VAST (Visual Analytics Science and Technology) are those that include a variety of approaches portending a future where data analysis tools will be as commonplace as word processing software.&nbsp; &nbsp;</p><p>Visualization work, already inherent to many enterprises, is gaining wider adoption and creating a wave of new opportunities and research innovations in the space. 
Visualizations are designed to create interactive representations of data that allow users to explore its many facets and connections in order to gain greater insight into data sets.</p><p>Some emerging themes in this year&rsquo;s Georgia Tech work include machine learning methods, new techniques to explore data patterns (including augmented reality), modeling neural networks, and finding connections within graphs, such as for biological systems, network security, and finance.</p><p>John Stasko, Interactive Computing, and his co-authors were presented with one of five Test of Time awards Tuesday morning at the plenary ceremony for their research from VAST 2007. The paper, <em>Jigsaw: Supporting Investigative Analysis through Interactive Visualization</em>, was co-authored by John Stasko, Carsten G&ouml;rg, Zhicheng Liu, and Kanupriya Singhal.</p><p>Explore GT research from Vis 2017 below and come back through the week for a look at the people in Georgia Tech&rsquo;s VIS Lab as well as coverage of the Test of Time Award.</p><p>&nbsp;</p><h2><strong>GT Involvement at IEEE VIS 2017</strong></h2><p>&nbsp;</p><p><strong>Awards</strong></p><p>VAST Test of Time Award</p><p>&ldquo;Jigsaw: Supporting Investigative Analysis through Interactive Visualization&rdquo;</p><p>John Stasko, Carsten G&ouml;rg, Zhicheng Liu, and Kanupriya Singhal</p><p>For paper at VAST 2007 Conference</p><p>&nbsp;</p><p><strong>InfoVis&nbsp;Papers</strong></p><p>&ldquo;Orko: Facilitating Multimodal Interaction for Visual Network Exploration and Analysis&rdquo;&nbsp;</p><p>Arjun Srinivasan and John Stasko</p><p><a href="http://www.cc.gatech.edu/~stasko/papers/infovis17-orko.pdf">http://www.cc.gatech.edu/~stasko/papers/infovis17-orko.pdf</a></p><p>&nbsp;</p><p><strong>VAST&nbsp;Papers</strong></p><p>&ldquo;Graphiti: Interactive Specification of Attribute-based Edges for Network Modeling and Visualization&rdquo;</p><p>Arjun Srinivasan, Hyunwoo Park, Alex Endert, and Rahul C. 
Basole</p><p><a href="http://arjun010.github.io/static/papers/graphiti-vast-17.pdf">http://arjun010.github.io/static/papers/graphiti-vast-17.pdf</a></p><p>&nbsp;</p><p>&ldquo;ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models&rdquo;</p><p>Minsuk Kahng, Pierre Andrews, Aditya Kalro, and Polo Chau</p><p><a href="https://www.cc.gatech.edu/~dchau/papers/17-vast-activis.pdf">https://www.cc.gatech.edu/~dchau/papers/17-vast-activis.pdf</a></p><p><em>(Collaboration with Facebook; deployed on Facebook&rsquo;s machine learning platform)</em></p><p>&nbsp;</p><p>&ldquo;VIGOR: Interactive Visual Exploration of Graph Query Results&rdquo;&nbsp;</p><p>Robert Pienta, Fred Hohman, Alex Endert, Acar Tamersoy, Kevin Roundy, Chris Gates, Shamkant Navathe, and Polo Chau</p><p><a href="https://www.cc.gatech.edu/~dchau/papers/17-vast-vigor.pdf">https://www.cc.gatech.edu/~dchau/papers/17-vast-vigor.pdf</a></p><p><em>(Collaboration with Symantec)</em></p><p>&nbsp;</p><p>&quot;Warning, Bias May Occur: A Proposed Approach to Detecting Cognitive Bias in Interactive Visual Analytics&quot;</p><p>Emily Wall, Leslie Blaha, Lyndsey Franklin, and Alex Endert</p><p><a href="https://www.cc.gatech.edu/~ewall9/media/papers/BiasVAST17.pdf">https://www.cc.gatech.edu/~ewall9/media/papers/BiasVAST17.pdf</a></p><p>&nbsp;</p><p>&ldquo;Podium: Ranking Data Using Mixed-Initiative Visual Analytics&ldquo;</p><p>Emily Wall, Subhajit Das, Ravish Chawla, Bharath Kalidindi, Eli T. Brown, and Alex Endert</p><p><a href="https://www.cc.gatech.edu/~ewall9/media/papers/PodiumVAST17.pdf">https://www.cc.gatech.edu/~ewall9/media/papers/PodiumVAST17.pdf</a></p><p>&nbsp;</p><p><strong>TVCG (journal paper being presented at VIS)</strong></p><p>&ldquo;vispubdata.org: A Metadata Collection about IEEE Visualization (VIS) Publications&rdquo;</p><p>Petra Isenberg, Florian Heimerl, Steffen Koch, Tobias Isenberg, Panpan Xu, Charles Stolper, Michael Sedlmair, Jian Chen, Torsten M&ouml;ller, and John T. 
Stasko</p><p><a href="http://www.cc.gatech.edu/~stasko/papers/tvcg17-vispubdata.pdf">http://www.cc.gatech.edu/~stasko/papers/tvcg17-vispubdata.pdf</a></p><p>&nbsp;</p><p>&ldquo;Evaluating Interactive Graphical Encodings for Data Visualization&rdquo;</p><p>Bahador Saket, Arjun Srinivasan, Eric Ragan, and Alex Endert</p><p><a href="http://bahadorsaket.com/publication/encodingsPaper.pdf">http://bahadorsaket.com/publication/encodingsPaper.pdf</a></p><p>Blog Post: <a href="https://goo.gl/YwkjqX">https://goo.gl/YwkjqX</a></p><p>&nbsp;</p><p><strong>Posters</strong></p><p>&ldquo;Equity Monitor: Visualizing Attributes of Health Inequity in Atlanta&rdquo;</p><p>Xiaoxue Zhang, Alex Godwin, John Stasko</p><p><a href="http://www.cc.gatech.edu/~stasko/papers/vis17-poster-health.pdf">http://www.cc.gatech.edu/~stasko/papers/vis17-poster-health.pdf</a></p><p>&nbsp;</p><p>&ldquo;CricVis: Interactive Visual Exploration and Analysis of Cricket Matches&rdquo;</p><p>Ayan Das, Arjun Srinivasan, John Stasko</p><p><a href="http://www.cc.gatech.edu/~stasko/papers/vis17-poster-cricket.pdf">http://www.cc.gatech.edu/~stasko/papers/vis17-poster-cricket.pdf</a></p><p>&nbsp;</p><p>&ldquo;Atomic Operations for Specifying Graph Visualization Techniques&rdquo;</p><p>Charles D. 
Stolper, Will Price, Matt Sanford, Duen Horng Chau, John Stasko</p><p><a href="http://www.cc.gatech.edu/~stasko/papers/vis17-poster-glo.pdf">http://www.cc.gatech.edu/~stasko/papers/vis17-poster-glo.pdf</a></p><p>&nbsp;</p><p>&ldquo;3D Exploration of Graph Layers via Vertex Cloning&rdquo;</p><p>James Abello, Fred Hohman, Duen Horng (Polo) Chau</p><p><a href="https://www.cc.gatech.edu/~dchau/papers/17-vis-playground.pdf">https://www.cc.gatech.edu/~dchau/papers/17-vis-playground.pdf</a></p><p>&nbsp;</p><p>&ldquo;High-Recall Document Retrieval from Large-Scale Noisy Documents via Visual Analytics based on Targeted Topic Modeling&rdquo;</p><p>Hannah Kim, Jaegul Choo, Alex Endert, Haesun Park</p><p><a href="https://www.cc.gatech.edu/~aendert3/resources/Kim2017HighRecall.pdf">https://www.cc.gatech.edu/~aendert3/resources/Kim2017HighRecall.pdf</a>&nbsp;</p><p>&nbsp;</p><p>&ldquo;PredVis: Interaction Techniques for Time Series Prediction&rdquo;</p><p>Sakshi Sanjay Pratap and Alex Endert</p><p><a href="https://www.cc.gatech.edu/~aendert3/resources/Pratap2017PredVis.pdf">https://www.cc.gatech.edu/~aendert3/resources/Pratap2017PredVis.pdf</a>&nbsp;</p><p>&nbsp;</p><p><strong>Workshop Papers</strong></p><p>&ldquo;<a href="https://scholar.google.com/citations?view_op=view_citation&amp;hl=en&amp;user=y8DBOyMAAAAJ&amp;citation_for_view=y8DBOyMAAAAJ:uWiczbcajpAC">VisAR: Bringing Interactivity to Static Data Visualizations through Augmented Reality</a>&rdquo;</p><p>Taeheon Kim, Bahador Saket, Alex Endert, Blair MacIntyre&nbsp;</p><p>Workshop on Immersive Analytics</p><p><a href="http://bahadorsaket.com/publication/VisAR.pdf">http://bahadorsaket.com/publication/VisAR.pdf</a></p><p>&nbsp;</p><p>&ldquo;A Viz of Ice and Fire: Exploring Entertainment Video Using Color and Dialogue&rdquo;</p><p>Fred Hohman, Sandeep Soni, Ian Stewart, and John Stasko</p><p>2nd Workshop on Visualization for the Digital Humanities</p><p><a 
href="https://www.cc.gatech.edu/~stasko/papers/vis4dh17-thrones.pdf">https://www.cc.gatech.edu/~stasko/papers/vis4dh17-thrones.pdf</a></p><p>&nbsp;</p><p>&ldquo;Affordances of Input Modalities for Visual Data Exploration in Immersive Environments&rdquo;</p><p>Sriram Karthik Badam, Arjun Srinivasan, Niklas Elmqvist, and John Stasko</p><p>Workshop on Immersive Analytics</p><p><a href="http://www.cc.gatech.edu/~john.stasko/papers/immersive17-input.pdf">http://www.cc.gatech.edu/~john.stasko/papers/immersive17-input.pdf</a></p><p>&nbsp;</p><p>&ldquo;Designing a Visual Analytics System for Industry-Scale Deep Neural Network Models&rdquo;</p><p>Minsuk Kahng, Pierre Andrews, Aditya Kalro, and Polo Chau</p><p>Workshop on Visual Analytics for Deep Learning</p><p>&nbsp;</p><p>&ldquo;Four Perspectives on Human Bias in Visual Analytics&rdquo;</p><p>Emily Wall, Leslie Blaha, Celeste Paul, Kris Cook, and Alex Endert</p><p>DECISIVe: Workshop on Dealing with Cognitive Biases in Visualizations</p><p><a href="https://www.cc.gatech.edu/~ewall9/media/papers/BiasDECISIVe17.pdf">https://www.cc.gatech.edu/~ewall9/media/papers/BiasDECISIVe17.pdf</a></p><p>&nbsp;</p><p>&ldquo;Designing Breadth-Oriented Data Exploration for Mitigating Cognitive Biases&rdquo;</p><p>Po-Ming Law, and Rahul Basole</p><p>DECISIVe: Workshop on Dealing with Cognitive Biases in Visualizations</p><p>&nbsp;</p><p><strong>Doctoral Colloquium</strong></p><p>Alex Godwin</p><p>Bahador Saket</p><p>Emily Wall</p><p>&nbsp;</p><p><strong>Organizing Committee</strong></p><p>Rahul Basole - InfoVis Program Committee</p><p>Rahul Basole - VIS Supporters Co-Chair</p><p>John Stasko - VAST Steering Committee</p><p>John Stasko - InfoVis Program Committee</p><p>Alex Endert - VIS Panels Co-Chair</p><p>Alex Endert - VAST Program Committee</p><p>Alex Endert - Workshop on Immersive Analytics Co-Organizer</p><p>Alex Endert - DECISIVe 2017: 2nd Workshop on Dealing with Cognitive Biases in Visualizations Co-Organizer</p><p>Bahador Saket - 
Workshop on Immersive Analytics Co-Organizer</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1507052695</created>  <gmt_created>2017-10-03 17:44:55</gmt_created>  <changed>1507052939</changed>  <gmt_changed>2017-10-03 17:48:59</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers are presenting new techniques and research for information visualization and visual analytics this week, Oct. 1-6, at the IEEE Vis 2017 conference in Phoenix, Ariz. ]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers are presenting new techniques and research for information visualization and visual analytics this week, Oct. 1-6, at the IEEE Vis 2017 conference in Phoenix, Ariz. ]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers are presenting new techniques and research for information visualization and visual analytics this week, Oct. 1-6, at the IEEE Vis 2017 conference in Phoenix, Ariz. Georgia Tech research is led by School of Interactive Computing faculty and students, and also includes School of Computational Science and Engineering researchers.&nbsp;</p>]]></summary>  <dateline>2017-10-03T00:00:00-04:00</dateline>  <iso_dateline>2017-10-03T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-10-03 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Joshua Preston</p><p>jpreston@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>596889</item>      </media>  <hg_media>          <item>          <nid>596889</nid>          <type>image</type>          <title><![CDATA[Vis 2017 faculty]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[faculty at vis 2017.jpg]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/faculty%20at%20vis%202017.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/faculty%20at%20vis%202017.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/faculty%2520at%2520vis%25202017.jpg?itok=PpmbSyA1]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1507052818</created>          <gmt_created>2017-10-03 17:46:58</gmt_created>          <changed>1507052818</changed>          <gmt_changed>2017-10-03 17:46:58</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="596563">  <title><![CDATA[GT Computing Takes the Spotlight at Tapia 2017]]></title>  <uid>27998</uid>  <body><![CDATA[<p><strong>GT Computing Takes the Spotlight at Tapia 2017</strong></p><p>Atlanta played host to the <a href="http://tapiaconference.org/">2017 Richard Tapia Celebration of Diversity in Computing</a>, held Sept. 
20-23 in the downtown Hyatt Regency, and once again a strong contingent of GT Computing students, faculty, and staff represented the College of Computing and all its efforts to build equity of access to computing education.</p><p>This year, however, those efforts literally took center stage, as the College received the inaugural <a href="https://www.cc.gatech.edu/news/596289/award-highlights-college-computings-efforts-grow-diversity-cs">University Award for Retention of Minorities and Students with Disabilities in Computer Science</a>. The award recognizes U.S. institutions that have demonstrated a commitment to recruiting and retaining students from underrepresented groups in undergraduate computing programs.</p><p>Awarded by the <a href="http://www.cmd-it.org/">Center for Minorities and People with Disabilities in IT</a> (CMD-IT), the honor was accepted on behalf of the College by Executive Associate Dean Charles Isbell, Assistant Dean Cedric Stallworth, and Director of Computing Enrollment Jennifer Whitlow; however, it recognized the work of many more Georgia Tech faculty and staff&mdash;several of whom also had official roles to play at Tapia.</p><p>Professor Mark Guzdial and Research Scientist Barb Ericson, for example, helped build several of the programs that earned Georgia Tech the award, such as the College&rsquo;s undergraduate program in computational media and the Georgia Computes! initiative to bring CS education into more of Georgia&rsquo;s K-12 schools. Both Guzdial and Ericson spoke in separate sessions as part of the official Tapia program (<em>see below</em>).</p><p>Overall, it was quite an introduction to GT Computing&rsquo;s diversity for the undergraduate and graduate students who attended Tapia, which included 17 online M.S. in Computer Science (OMS CS) students traveling from around the country and the world. 
In all, some 57 students comprised Georgia Tech&rsquo;s delegation.</p><p>In fact, two OMS CS students (one former and one current) participated in a panel with Isbell titled &ldquo;<a href="http://tapiaconference.org/schedule/thursday-september-21-2017/130pm-230pm-1/how-can-digital-degrees-make-higher-education-more-accessible/">How Can Digital Degrees Make Higher Education More Accessible?</a>&rdquo; Program alumnus Miguel Morales, a 2017 graduate, joined current student Tia Pope in sharing their experiences as students from underrepresented groups.</p><p>Indeed, the entire Tapia program was dotted with GT Computing speakers, including:</p><ul><li>Guzdial, who took part in the panel, &ldquo;<a href="http://tapiaconference.org/schedule/thursday-september-21-2017/1045am-1215pm/increasing-diversity-in-computing-sharing-of-good-practices/">Increasing Diversity in Computing: Sharing of Good Practices</a>&rdquo;</li><li>Professor Ayanna Howard (School of Electrical &amp; Computer Engineering), in the panels, &ldquo;<a href="http://tapiaconference.org/schedule/thursday-september-21-2017/1045am-1215pm/entrepreneurial-skills-thinking/">Entrepreneurial Skills &amp; Thinking</a>&rdquo; and &ldquo;<a href="http://tapiaconference.org/schedule/friday-september-22-2017/330pm-500pm/fairness-accountability-and-transparency-in-algorithmic-decision-making/">Fairness, Accountability, and Transparency in Algorithmic Decision Making</a>&rdquo;</li><li>Ericson, as moderator of the workshop, &ldquo;<a href="http://tapiaconference.org/schedule/thursday-september-21-2017/1045am-1215pm/how-to-use-and-customize-free-interactive-ebooks/">How to Use and Customize Free Interactive Ebooks</a>&rdquo;</li><li>Research Scientist Lorna Rivera (Center for Education Integrating Science, Mathematics &amp; Computing), on the panel, &ldquo;<a href="http://tapiaconference.org/schedule/thursday-september-21-2017/130pm-230pm-1/using-advanced-computing-to-affect-social-change/">Using Advanced Computing to 
Affect Social Change</a>&rdquo;</li><li>Research Scientist Rosa Arriaga, on the panel, &ldquo;<a href="http://tapiaconference.org/schedule/friday-september-22-2017/130pm-300pm/strategies-for-human-human-interaction/">Strategies for Human-Human Interaction</a>&rdquo;</li><li>Associate Professor Ada Gavrilovska, on the panel, &ldquo;<a href="http://tapiaconference.org/schedule/friday-september-22-2017/130pm-300pm/internet-of-things/">Data Challenges for the Internet of Things</a>&rdquo;</li><li>M.S. student Nicole de Vries, in the workshop, &ldquo;<a href="http://tapiaconference.org/schedule/friday-september-22-2017/330pm-500pm/using-why-to-build-a-better-what-a-human-centered-approach-to-systems-and-data/">Using &lsquo;Why&rsquo; to Build a Better &lsquo;What&rsquo;: A Human-Centered Approach to Systems &amp; Data</a>&rdquo;</li><li>Isbell, on the panel, &ldquo;<a href="http://tapiaconference.org/schedule/friday-september-22-2017/330pm-500pm/national-scale-committee-the-process-and-the-requirements/">National-Scale Committee: The Process &amp; the Requirements</a>&rdquo;</li></ul><p>On Thursday evening, the College hosted a reception on campus for its Tapia attendees and Atlanta-area alumni to celebrate GT Computing&rsquo;s leadership in diversity. Isbell and Stallworth both discussed the past efforts that had won Georgia Tech the inaugural CMD-IT award and gave an attendees-only preview of the exciting work that lay ahead.</p><p>&ldquo;What is the role for Georgia Tech in ensuring that students at the K-12 level have equity of access to a computing education?&rdquo; Stallworth said in his remarks. 
&ldquo;How can we help all kids in the state of Georgia and beyond tap into the incredible opportunities that accompany that kind of an education?&rdquo;</p><p>On Friday, several OMS CS attendees took an afternoon break from Tapia to enjoy their own personalized campus tour&mdash;for most, the first (and possibly only) opportunity they would have to see Georgia Tech firsthand.</p><p>&ldquo;OMS CS has been my first online educational experience, and one worry I had was, what kind of community would I find in the program?&rdquo; said OMS CS student Shipra De, who traveled from Las Vegas to attend Tapia. &ldquo;The collaboration among students has been so encouraging, with many going above and beyond in their efforts to help each other. Then there&rsquo;s the fact that they&rsquo;re doing all this for people they&rsquo;ve never met!&rdquo;</p><p>&ldquo;In Ecuador, there&rsquo;s not as much of a culture of improvement for technical skills or software engineers, so OMS CS was just what I needed,&rdquo; said Romeo Cabrera, who traveled from his hometown of Guayaquil, Ecuador. &ldquo;This program has improved my life in so many ways, not just because of the technical experience&mdash;its flexibility has also allowed me time to share with my family. 
I can&rsquo;t say enough about it.&rdquo;</p>]]></body>  <author>Brittany Aiello</author>  <status>1</status>  <created>1506538037</created>  <gmt_created>2017-09-27 18:47:17</gmt_created>  <changed>1506538272</changed>  <gmt_changed>2017-09-27 18:51:12</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Atlanta was this year's host to the Tapia Celebration of Diversity in Computing and the Georgia Tech College of Computing represented its expansive community in full-force.]]></teaser>  <type>news</type>  <sentence><![CDATA[Atlanta was this year's host to the Tapia Celebration of Diversity in Computing and the Georgia Tech College of Computing represented its expansive community in full-force.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-09-27T00:00:00-04:00</dateline>  <iso_dateline>2017-09-27T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-09-27 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[mterraza@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Mike Terrazas</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>596565</item>          <item>596566</item>      </media>  <hg_media>          <item>          <nid>596565</nid>          <type>image</type>          <title><![CDATA[Tapia 2017 - Undergraduate Scholar Students]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2017-09-27 at 2.55.54 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202017-09-27%20at%202.55.54%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202017-09-27%20at%202.55.54%20PM.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202017-09-27%2520at%25202.55.54%2520PM.png?itok=Or6KwUJE]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Tapia 2017 Undergraduate Scholar Students]]></image_alt>                    <created>1506538191</created>          <gmt_created>2017-09-27 18:49:51</gmt_created>          <changed>1506538191</changed>          <gmt_changed>2017-09-27 18:49:51</gmt_changed>      </item>          <item>          <nid>596566</nid>          <type>image</type>          <title><![CDATA[Tapia 2017 - OMS CS Scholar Students]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2017-09-22 at 2.10.48 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202017-09-22%20at%202.10.48%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202017-09-22%20at%202.10.48%20PM.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202017-09-22%2520at%25202.10.48%2520PM.png?itok=pnUYhOSv]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Tapia 2017 - OMS CS Scholar Students]]></image_alt>                    <created>1506538245</created>          <gmt_created>2017-09-27 18:50:45</gmt_created>          <changed>1506538245</changed>          <gmt_changed>2017-09-27 18:50:45</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1305"><![CDATA[Georgia Tech Academic Advising Network (GTAAN)]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group 
id="431631"><![CDATA[OMS]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="175368"><![CDATA[Tapia Celebration of Diversity in Computing]]></keyword>          <keyword tid="170724"><![CDATA[TAPIA]]></keyword>          <keyword tid="10664"><![CDATA[charles isbell]]></keyword>          <keyword tid="10666"><![CDATA[cedric stallworth]]></keyword>          <keyword tid="66341"><![CDATA[OMS CS]]></keyword>          <keyword tid="489"><![CDATA[atlanta]]></keyword>          <keyword tid="736"><![CDATA[diversity]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="596144">  <title><![CDATA[IC Faculty, Alumni Awarded with 10-Year Impact Award at Ubicomp 2017]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing faculty and alumni were among a group of five recognized for the 10-Year Impact Award at the <a href="http://ubicomp.org/ubicomp2017/">ACM International Joint Conference on Pervasive and Ubiquitous Computing</a> (Ubicomp 2017) for their paper titled <a href="https://homes.cs.washington.edu/~shwetak/papers/ubicomp2007_flick.pdf"><em>At the Flick of a Switch: Detecting and Classifying Unique Electrical Events on the Residential Power Line</em></a>.</p><p>The paper, which presented an approach that uses a single plug-in sensor to detect a variety of electrical events throughout the home, earned Best Paper and Best Presentation honors at Ubicomp 2007. 
This year, the paper was one of three awarded at Ubicomp 2017 for having outstanding influence over the past 10 years.</p><p>Co-authors on the paper included current Georgia Tech Professor <strong>Gregory Abowd</strong>, former Georgia Tech postdoctoral student and Research Scientist <strong>Matt Reynolds</strong>, alumni <strong>Shwetak Patel</strong> and <strong>Julie Kientz</strong>, and <strong>Tom Robertson</strong>, who worked in Abowd&rsquo;s lab for two years around the time of publication.</p><p>&nbsp;</p><h4>► <a href="https://public.tableau.com/views/Ubicomp-ISWC2017/Dashboard1?:embed=y&amp;:display_count=no&amp;publish=yes&amp;:showVizHome=no">Interactive Graphic of Ubicomp/ISWC 2017 Papers Program</a></h4><h4>&nbsp;</h4><p>Reynolds, Patel, and Kientz now each hold faculty positions at the University of Washington.</p><p>To achieve desired results, the researchers applied machine learning techniques to recognize electrically noisy events such as turning on or off a particular light switch, a television set, or an electric stove. They tested their system in one home for several weeks and in five homes for one week each to evaluate the system performance over time in different types of houses. Results indicated that it is possible to learn and classify various electrical events with accuracies ranging from 85-90 percent.</p><p>The method has become known as Infrastructural Mediated Sensing, a concept developed and commercialized in a variety of subsequent ways by Patel and Reynolds.</p><p>Ubicomp 2017 took place earlier this month in conjunction with the <a href="https://iswc2017.semanticweb.org/">ACM International Symposium on Wearable Computing</a> in Maui, Hawaii.</p><p>Georgia Tech received another notable accolade at the co-located ISWC 2017 conference, which shares a technical program with Ubicomp. 
The Jury Prize for Best Paper and Entry in the aesthetics category of the <a href="http://iswc.net/iswc17/program/designexhibition.html">Design Exhibition</a> was awarded to <a href="http://www.clintzeagler.com/2017/03/12/le-monstre-from-characters/">Le Monstr&eacute;</a>, an interactive participatory performance costume developed by HCC Ph.D. student and research scientist <strong>Clint Zeagler</strong>. The team also included IMTC research scientists <strong>Scott Gilliland</strong> and <strong>Laura Levy</strong>.</p><p>This year, Georgia Tech had 11 papers accepted at the conference. Titles, authors, and available links for each can be found below.</p><ul><li><a href="http://ws.iat.sfu.ca/papers/passivehapticslearning.pdf">Passive Haptic Training to Improve Speed and Performance on a Keypad</a> (Caitlyn Seim, Nick Doering, Yang Zhang, Wolfgang Stuerzlinger, Thad Starner)<br />&nbsp;</li><li>FingerSound: Recognizing Unistroke Thumb Gestures Using a Ring (Cheng Zhang, Anandghan Waghmare, Pranav Kundra, Yiming Pu, Scott Gilliland, Thomas Ploetz, Thad Starner, Omer Inan, Gregory Abowd)<br />&nbsp;</li><li><a href="http://delivery.acm.org/10.1145/3140000/3132030/a108-vigil-hayes.pdf?ip=128.61.126.225&amp;id=3132030&amp;acc=OPEN&amp;key=A79D83B43E50B5B8%2E5E2401E94B5C98E0%2E4D4702B0C3E38B35%2E4201BFF1B9FFDE9A&amp;CFID=959967287&amp;CFTOKEN=45486595&amp;__acm__=1505503648_4a93c99d866">FiDO: A Community-based Web Browsing Agent and CDN for Challenged Network Environments</a> (Morgan Vigil-Hayes, Elizabeth Belding, Ellen Zegura)<br />&nbsp;</li><li><a href="http://delivery.acm.org/10.1145/3130000/3123041/p62-zhang.pdf?ip=128.61.126.225&amp;id=3123041&amp;acc=OPEN&amp;key=A79D83B43E50B5B8%2E5E2401E94B5C98E0%2E4D4702B0C3E38B35%2E6D218144511F3437&amp;CFID=959967287&amp;CFTOKEN=45486595&amp;__acm__=1505503735_e58b56acad5ca423e0">FingOrbits: Interaction With Wearables Using Synchronized Thumb Movements</a> (Cheng Zhang, Xiaoxuan Wang, Anandghan Waghmare, Sumeet Jain, Thomas
Ploetz, Omer Inan, Thad Starner, Gregory Abowd)<br />&nbsp;</li><li><a href="http://delivery.acm.org/10.1145/3130000/3123060/p94-lee.pdf?ip=128.61.126.225&amp;id=3123060&amp;acc=OPEN&amp;key=A79D83B43E50B5B8%2E5E2401E94B5C98E0%2E4D4702B0C3E38B35%2E6D218144511F3437&amp;CFID=959967287&amp;CFTOKEN=45486595&amp;__acm__=1505503858_b1dd594bc241169aa203">Itchy Nose: Discreet Gesture Interaction Using EOG Sensors in Smart Eye-Wear</a> (Juyoung Lee, Hui-Shyong Yeo, Murtaza Dhuliawala, Jedidiah Akano, Junichi Shimizu, Thad Starner, Aaron Quigley, Woontack Woo, Kai Kunze)<br />&nbsp;</li><li>Detecting Gaze Towards Eyes in Natural Social Interactions and Its Use in Child Assessment (Eunji Chong, Katha Chanda, Zhefan Ye, Audrey Southerland, Nataniel Ruiz, Rebecca Jones, Agata Rozga, Jim Rehg)<br />&nbsp;</li><li>EarBit: Using Wearable Sensors to Detect Eating Episodes in Unconstrained Environments (Abdelkareem Bedri, Richard Li, Malcolm Haynes, Raj Prateek Kosaraju, Ishaan Grover, Temiloluwa Prioleau, Min Yan Beh, Mayank Goel, Thad Starner, Gregory Abowd)<br />&nbsp;</li><li><a href="http://delivery.acm.org/10.1145/3130000/3123042/p150-zeagler.pdf?ip=128.61.126.225&amp;id=3123042&amp;acc=OPEN&amp;key=A79D83B43E50B5B8%2E5E2401E94B5C98E0%2E4D4702B0C3E38B35%2E6D218144511F3437&amp;CFID=959967287&amp;CFTOKEN=45486595&amp;__acm__=1505504122_71e44b91df88e7f">Where to Wear It: Functional, Technical, and Social Considerations in On-Body Location for Wearable Technology, 20 Years of Designing for Wearability</a> (Clint Zeagler)<br />&nbsp;</li><li><a href="http://www.czhang.org/wp-content/uploads/2017/05/SoundTrak_Journal__Authorversion_.pdf">SoundTrak: Continuous 3D Tracking of a Finger Using Active Acoustics</a> (Cheng Zhang, Qiuyue Xue, Anandghan Waghmare, Sumeet Jain, Yiming Pu, Jordan Conant, Sinan Hersek, Kent Lyons, Kenneth A. Cunefare, Omer T. 
Inan, Gregory Abowd)<br />&nbsp;</li><li><a href="http://www.munmund.net/pubs/IMWUT_SM_EMA.pdf">Inferring Mood Instability on Social Media by Leveraging Ecological Momentary Assessments</a> (Koustuv Saha, Larry Chan, Kaya de Barbaro, Gregory Abowd, Munmun De Choudhury)</li></ul>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1505835680</created>  <gmt_created>2017-09-19 15:41:20</gmt_created>  <changed>1505907365</changed>  <gmt_changed>2017-09-20 11:36:05</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Professor Gregory Abowd, a former research scientist, and two former students were among those awarded for a paper's lasting impact.]]></teaser>  <type>news</type>  <sentence><![CDATA[Professor Gregory Abowd, a former research scientist, and two former students were among those awarded for a paper's lasting impact.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-09-19T00:00:00-04:00</dateline>  <iso_dateline>2017-09-19T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-09-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>596142</item>      </media>  <hg_media>          <item>          <nid>596142</nid>          <type>image</type>          <title><![CDATA[Ubicomp test of time 2017]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Ubicomp-2017-test-of-time-winners.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Ubicomp-2017-test-of-time-winners.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Ubicomp-2017-test-of-time-winners.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Ubicomp-2017-test-of-time-winners.jpg?itok=4mdg-fVU]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Winners of the Ubicomp Test of Time award pose with their certificates.]]></image_alt>                    <created>1505834242</created>          <gmt_created>2017-09-19 15:17:22</gmt_created>          <changed>1505834242</changed>          <gmt_changed>2017-09-19 15:17:22</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="4923"><![CDATA[Ubicomp]]></keyword>          <keyword tid="171123"><![CDATA[shwetak patel]]></keyword>          <keyword tid="175586"><![CDATA[julie kientz]]></keyword>          <keyword tid="175587"><![CDATA[matt reynolds]]></keyword>          <keyword tid="175588"><![CDATA[tom robertson]]></keyword>          <keyword tid="11002"><![CDATA[Gregory Abowd]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="595697">  <title><![CDATA[Michaelanne Dye Awarded ARCS Global Impact Award for 2nd Consecutive 
Year]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Ph.D. student <strong><a href="https://michaelannedye.wordpress.com/">Michaelanne Dye</a></strong> was awarded an <a href="https://www.arcsfoundation.org/">Achievement Rewards for College Scientists</a> (ARCS) Scholar award for the second year in a row, recognizing her research in Cuba and its potential for future global impact.</p><p>Specifically, she was awarded the Global Impact Award, which goes to just one ARCS Scholar each year. The award, which she also won last year, provides $10,000 of unrestricted funding, meaning she is free to choose how to use the money.</p><p>&ldquo;It allows me to spend longer periods of time conducting field work by helping cover the costs of having my son travel with me,&rdquo; Dye said.</p><p>Dye&rsquo;s research in <a href="https://www.ic.gatech.edu/academics/human-centered-computing-phd-program">human-centered computing</a> explores interaction and development issues from a social computing perspective. Drawing on her bachelor&rsquo;s degree in Spanish and master&rsquo;s in cultural anthropology, Dye uses qualitative methods to investigate socio-technical issues surrounding internet and social media use and non-use among low-resource communities during times of political, economic, and social transitions.</p><p>Her current research centers on Cuba, where, until recently, internet access was limited to 5 percent of the population.
Through fieldwork, observation, and interviews with Cubans, Dye is developing a holistic understanding of how new internet infrastructures interact with cultural values and local constraints.</p><p>Using Cuba as a case study, her work explores how future internet access initiatives might successfully map onto local information infrastructures to provide meaningful, sustainable engagements among under-connected communities in resource-constrained parts of the world.</p><p>The ARCS Foundation is a nationally recognized nonprofit organization started and run entirely by women who boost American leadership and aid advancement in science and technology. According to the foundation&rsquo;s website, nine out of 10 ARCS Scholars work in their sponsored fields after they graduate.</p><p>Dye is co-advised by Professor <a href="https://www.cc.gatech.edu/fac/Amy.Bruckman/">Amy Bruckman</a> and Assistant Professor <a href="https://nehakumar.org/">Neha Kumar</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1504820742</created>  <gmt_created>2017-09-07 21:45:42</gmt_created>  <changed>1504820742</changed>  <gmt_changed>2017-09-07 21:45:42</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Dye was awarded an ARCS Scholar Award for her research in Cuba and its potential for future global impact.]]></teaser>  <type>news</type>  <sentence><![CDATA[Dye was awarded an ARCS Scholar Award for her research in Cuba and its potential for future global impact.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-09-07T00:00:00-04:00</dateline>  <iso_dateline>2017-09-07T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-09-07 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a 
href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>586831</item>      </media>  <hg_media>          <item>          <nid>586831</nid>          <type>image</type>          <title><![CDATA[Michaelanne Dye]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Dye_ARCS_082016.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Dye_ARCS_082016.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Dye_ARCS_082016.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Dye_ARCS_082016.jpg?itok=Iz0iQmXt]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1486063573</created>          <gmt_created>2017-02-02 19:26:13</gmt_created>          <changed>1486063573</changed>          <gmt_changed>2017-02-02 19:26:13</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.ic.gatech.edu/academics/human-centered-computing-phd-program]]></url>        <title><![CDATA[Ph.D. 
in Human-Centered Computing]]></title>      </link>          <link>        <url><![CDATA[https://www.arcsfoundation.org/]]></url>        <title><![CDATA[ARCS Foundation]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="175458"><![CDATA[Amy Bruckman; Michaelanne Dye; School of Interactive Computing; Cuba; multi-user domains]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="595650">  <title><![CDATA[Paper Co-Authored by IC Faculty Earns Best Paper at EMNLP 2017]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A paper co-authored by Assistant Professor <strong>Dhruv Batra</strong> and Research Scientist <strong>Stefan Lee</strong> of the School of Interactive Computing will be awarded at this week&rsquo;s <a href="http://emnlp2017.net/">Conference on Empirical Methods in Natural Language Processing</a> (EMNLP 2017), which begins Thursday in Copenhagen, Denmark.</p><p>The paper, titled <em>Natural Language Does Not Emerge &lsquo;Naturally&rsquo; in Multi-Agent Dialog</em>, earned one of four Best Paper awards (out of 1,500 submissions) for its findings.</p><p>It explores the conditions under which human-interpretable languages simply emerge between goal-driven interacting AI agents that invent their own communication protocols. 
In contrast to many recent works that have shown compositional, human-interpretable languages emerging between agents in multi-agent game settings, this work shows that while most agent-invented languages are effective, achieving near-perfect rewards, they are decidedly not interpretable or compositional.</p><p>Batra and Lee, along with collaborators from Carnegie Mellon University, used a Task-and-Tell reference game between two agents as a testbed to come to this conclusion.</p><p>Task and Tell is a simple reference game between a questioner and answerer agent set in a simple world. In the game, the answerer is presented with a simple object &ndash; a colored shape, for example, with a specific style, such as a circle drawn with a red-dashed line. The questioner is tasked with discovering two of the three attributes of the object. The agents communicate in ungrounded vocabulary, using symbols with no pre-specified meanings. Exchanging such single-symbol utterances over two rounds of dialog, the questioner must predict the requested attributes.</p><p>While the language exchanged between the two agents was effective, it was not interpretable.</p><p>&ldquo;As our goal is to explore how natural, human interpretable languages emerge in multi-agent dialogs, we consider these negative results,&rdquo; Lee said.</p><p>In essence, they found that natural language does not, in fact, emerge naturally. Further, they find that restricting the agents&rsquo; vocabularies and limiting how they interact is essential for human-interpretable languages to emerge in such a setting. Using just the right set of controls, the two bots invent their own communication protocol and start using certain symbols to ask or answer about certain visual attributes of a given object.</p><p>EMNLP is one of the top natural language processing conferences. 
This year, six papers co-authored by College of Computing faculty and students were accepted to the conference.</p><p>Read more on each below.</p><ul><li><a href="https://arxiv.org/abs/1706.08502"><strong>Natural Language Does Not Emerge &lsquo;Naturally&rsquo; in Multi-Agent Dialog</strong></a><strong> (Satwik Kottur, Jos&eacute; M.F. Moura, Stefan Lee, Dhruv Batra)</strong></li></ul><p>ABSTRACT: A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, all learned without any human supervision!&nbsp;In this paper, using a Task and Tell reference game between two agents as a testbed, we present a sequence of &#39;negative&#39; results culminating in a &#39;positive&#39; one -- showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional.&nbsp;In essence, we find that natural language does not emerge &#39;naturally&#39;, despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.</p><ul><li><a href="https://arxiv.org/abs/1705.06476"><strong>ParlAI: A Dialog Research Software Platform</strong></a><strong> (Alexander H. 
Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, Jason Weston)</strong></li></ul><p>ABSTRACT: We introduce ParlAI (pronounced &quot;par-lay&quot;), an open-source software platform for dialog research implemented in Python, available at&nbsp;<a href="http://parl.ai/">http://parl.ai</a>. Its goal is to provide a unified framework for sharing, training and testing of dialog models, integration of Amazon Mechanical Turk for data collection, human evaluation, and online/reinforcement learning; and a repository of machine learning models for comparing with others&#39; models, and improving upon existing architectures. Over 20 tasks are supported in the first release, including popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail, CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated, including neural models such as memory networks, seq2seq and attentive LSTMs.</p><ul><li><strong><a href="https://arxiv.org/abs/1706.05125">Deal or No Deal? End-to-End Learning for Negotiation Dialogues (Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra)</a></strong></li></ul><p>ABSTRACT: Much of human dialogue occurs in semi-cooperative settings, where agents with different goals attempt to agree on common decisions. Negotiations require complex communication and reasoning skills, but success is easy to measure, making this an interesting task for AI. We gather a large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other&#39;s reward functions must reach an agreement (or a deal) via natural language dialogue. For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states.
We also introduce dialogue rollouts, in which the model plans ahead by simulating possible complete continuations of the conversation, and find that this technique dramatically improves performance. Our code and dataset are publicly available (<a href="https://github.com/facebookresearch/end-to-end-negotiator">on GitHub</a>).</p><ul><li><a href="https://arxiv.org/abs/1705.00601"><strong>The Promise of Premise: Harnessing Question Premises in Visual Question Answering</strong></a><strong> (Aroma Mahendru, Viraj Prabhu, Akrit Mohapatra, Dhruv Batra, Stefan Lee)</strong></li></ul><p>ABSTRACT: In this paper, we make a simple observation that questions about images often contain premises - objects and relationships implied by the question - and that reasoning about premises can help Visual Question Answering (VQA) models respond more intelligently to irrelevant or previously unseen questions. When presented with a question that is irrelevant to an image, state-of-the-art VQA models will still answer purely based on learned language biases, resulting in non-sensical or even misleading answers. We note that a visual question is irrelevant to an image if at least one of its premises is false (i.e. not depicted in the image). We leverage this observation to construct a dataset for Question Relevance Prediction and Explanation (QRPE) by searching for false premises. We train novel question relevance detection models and show that models that reason about premises consistently outperform models that do not. We also find that forcing standard VQA models to reason about premises during training can lead to improvements on tasks requiring compositional reasoning.</p><ul><li><a href="https://arxiv.org/abs/1703.01720"><strong>Sound-Word2Vec: Learning Word Representations Grounded in Sounds</strong></a><strong> (Ashwin K.
Vijayakumar, Ramakrishna Vedantam, Devi Parikh)</strong></li></ul><p>ABSTRACT: To be able to interact better with humans, it is crucial for machines to understand sound - a primary modality of human perception. Previous works have used sound to learn embeddings for improved generic textual similarity assessment. In this work, we treat sound as a first-class citizen, studying downstream textual tasks which require aural grounding. To this end, we propose sound-word2vec - a new embedding scheme that learns specialized word embeddings grounded in sounds. For example, we learn that two seemingly (semantically) unrelated concepts, like leaves and paper are similar due to the similar rustling sounds they make. Our embeddings prove useful in textual tasks requiring aural reasoning like text-based sound retrieval and discovering foley sound effects (used in movies). Moreover, our embedding space captures interesting dependencies between words and onomatopoeia and outperforms prior work on aurally-relevant word relatedness datasets such as AMEN and ASLex.</p><ul><li><a href="https://arxiv.org/abs/1707.06961"><strong>Mimicking Word Embeddings Using Subword RNNs</strong></a><strong> (Yuval Pinter, Robert Guthrie, Jacob Eisenstein)</strong></li></ul><p>ABSTRACT: Word embeddings improve generalization over lexical features by placing each word in a lower-dimensional space, using distributional information obtained from unlabeled data. However, the effectiveness of word embeddings for downstream NLP tasks is limited by out-of-vocabulary (OOV) words, for which embeddings do not exist. In this paper, we present MIMICK, an approach to generating OOV word embeddings compositionally, by learning a function from spellings to distributional embeddings. Unlike prior work, MIMICK does not require re-training on the original word embedding corpus; instead, learning is performed at the type level. Intrinsic and extrinsic evaluations demonstrate the power of this simple approach. 
On 23 languages, MIMICK improves performance over a word-based baseline for tagging part-of-speech and morphosyntactic attributes. It is competitive with (and complementary to) a supervised character-based model in low-resource settings.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1504793891</created>  <gmt_created>2017-09-07 14:18:11</gmt_created>  <changed>1504803723</changed>  <gmt_changed>2017-09-07 17:02:03</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[IC Assistant Professor Dhruv Batra and Research Scientist Stefan Lee contributed to a paper that was one of four best papers recognized at the conference.]]></teaser>  <type>news</type>  <sentence><![CDATA[IC Assistant Professor Dhruv Batra and Research Scientist Stefan Lee contributed to a paper that was one of four best papers recognized at the conference.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-09-07T00:00:00-04:00</dateline>  <iso_dateline>2017-09-07T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-09-07 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>595647</item>      </media>  <hg_media>          <item>          <nid>595647</nid>          <type>image</type>          <title><![CDATA[EMNLP 2017]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[EMNLP.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/EMNLP.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/EMNLP.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/EMNLP.png?itok=H-uqg3WA]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[EMNLP 2017 logo]]></image_alt>                    <created>1504793682</created>          <gmt_created>2017-09-07 14:14:42</gmt_created>          <changed>1504793682</changed>          <gmt_changed>2017-09-07 14:14:42</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="595357">  <title><![CDATA[Dragon Con 2017: Your Guide to GT Computing Panels This Weekend]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The College of Computing will be represented at <a href="http://www.dragoncon.org/">Dragon Con</a> this week in Atlanta, with faculty members participating in a handful of panels.</p><p>There will be one panel each on Friday, Saturday, and Sunday that features a member of the College. All three are part of the video game track at the Westin.</p><p>The following is a rundown on events that will feature GT Computing panelists.</p><p><strong>Augmented and Virtual Reality, 1 p.m. 
Friday at the Westin Augusta E-G</strong></p><ul><li>This panel will feature <a href="http://ic.gatech.edu">School of Interactive Computing</a> (IC) Professor <a href="http://www.cc.gatech.edu/people/blair-macintyre"><strong>Blair MacIntyre</strong></a> and <a href="http://www.imtc.gatech.edu/people/maribeth-gandy-coleman-phd"><strong>Maribeth Coleman</strong></a>, who is the director of the <a href="http://www.imtc.gatech.edu/">Interactive Media Technology Center</a> (IMTC) and associate director of interactive media for the <a href="http://ipat.gatech.edu/">Institute for People and Technology</a> (IPaT). The panel will look at the history and future of virtual reality in video games, and will also feature <strong>Roger Altizer</strong> (University of Utah) and <strong>Mike Capps</strong> (former president of Epic Games).</li></ul><p><strong>Dystopian Tech and Gaming, 11:30 a.m. Saturday at the Westin Augusta E-G</strong></p><ul><li>This panel will also feature MacIntyre, Coleman, and Altizer, along with Georgia Tech Research Scientist <a href="http://www.imtc.gatech.edu/people/clint-zeagler"><strong>Clint Zeagler</strong></a> (wearable computing, textile interfaces, animal-computer interaction) and Emory University Professor <strong>Susan Tamasi</strong> (linguistics). The panel examines the ramifications of connecting our lives more closely through technology and the ways we tell stories about it. What effect does gamifying our lives, health, experiences, and relationships have on our humanity and the future of how we relate to what surrounds us?</li></ul><p><strong>Toys That Are Changing the Future of Gaming, 5:30 p.m. Sunday at the Westin Augusta E-G</strong></p><ul><li>Coleman and MacIntyre will be joined by IC Professor <a href="http://www.cc.gatech.edu/people/thad-starner"><strong>Thad Starner</strong></a> and IMTC Research Scientist <strong><a href="http://www.imtc.gatech.edu/people/laura-levy">Laura Levy</a></strong>. <strong>Jos&eacute; P. 
Zagal</strong> (University of Utah) will also be on the panel. Panelists will discuss revolutionary technology like neural interfaces, contact lens monitors, and more innovations just over the horizon for consumers. Additionally, they will talk about how we could co-opt that tech for video games.</li></ul><p>Dragon Con is a multigenre convention founded in 1987 that takes place annually over Labor Day weekend in Atlanta. As of 2016, the convention drew more than 77,000 attendees, featured hundreds of guests, and encompassed five hotels in the Peachtree Center neighborhood of downtown Atlanta.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1504190936</created>  <gmt_created>2017-08-31 14:48:56</gmt_created>  <changed>1504190936</changed>  <gmt_changed>2017-08-31 14:48:56</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A number of GT Computing faculty members and researchers, including Professors Thad Starner and Blair MacIntyre, will participate in panels during Dragon Con.]]></teaser>  <type>news</type>  <sentence><![CDATA[A number of GT Computing faculty members and researchers, including Professors Thad Starner and Blair MacIntyre, will participate in panels during Dragon Con.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-08-31T00:00:00-04:00</dateline>  <iso_dateline>2017-08-31T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-08-31 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>595250</item>      </media>  <hg_media>          <item>          <nid>595250</nid>          <type>image</type>          <title><![CDATA[DragonCon 2017]]></title>          <body><![CDATA[]]></body>            
          <image_name><![CDATA[DragonCon logo.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/DragonCon%20logo.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/DragonCon%20logo.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/DragonCon%2520logo.png?itok=KuTMVnm_]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1504034059</created>          <gmt_created>2017-08-29 19:14:19</gmt_created>          <changed>1504034059</changed>          <gmt_changed>2017-08-29 19:14:19</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="576491"><![CDATA[CRNCH]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="140101"><![CDATA[dragon con]]></keyword>          <keyword tid="1944"><![CDATA[Thad Starner]]></keyword>          <keyword tid="11099"><![CDATA[Blair MacIntyre]]></keyword>          <keyword tid="172775"><![CDATA[Maribeth Gandy Coleman]]></keyword>          <keyword tid="173537"><![CDATA[Laura Levy]]></keyword>          <keyword tid="9873"><![CDATA[clint zeagler]]></keyword>          <keyword tid="2356"><![CDATA[gaming]]></keyword>          <keyword tid="1597"><![CDATA[Augmented Reality]]></keyword>          <keyword 
tid="145251"><![CDATA[virtual reality]]></keyword>          <keyword tid="10353"><![CDATA[wearable computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="595052">  <title><![CDATA[Walking the Wire: New IC Students Learn to Overcome Struggles at Leadership Challenge Course]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The expressions on the faces of the 14 new <a href="http://ic.gatech.edu">School of Interactive Computing</a> Ph.D. students were varied.</p><p>On one, a bright grin spread wide across the face. On others, expressions of concentration. A few faces bore eyes glancing timidly toward the ground, as if afraid it would bite should they take a moment to look away.</p><p>There were 14 separate thought processes as the incoming students took part in Georgia Tech&rsquo;s <a href="http://www.crc.gatech.edu/leadership-challenge-course">Leadership Challenge Course</a> on Aug. 15 prior to their official orientation, but one similar goal: Work together to find a way to traverse wire-thin cables, unsteady wood platforms, and other assorted barriers &ndash; not unlike the many challenges they will face in pursuit of their common goal of earning a Ph.D. from the Georgia Institute of Technology.</p><p>It&rsquo;s a program current IC Professor and former Chair <a href="https://www.ic.gatech.edu/people/7080/annie-antons">Annie Ant&oacute;n</a> developed to achieve a handful of goals for her students.</p><p>One, she wanted to challenge them. Like the intense challenges they face over the course of their five or six years in the Ph.D. program, she wanted to force them into an uncomfortable situation that takes patience to overcome.</p><p>Two, she wanted to build a sense of community with other participants. 
There is no such thing as a graduating class when it comes to a graduate degree, so the idea was to create an environment where members of the same cohort could meet each other, develop friendships, and feel a sense of belonging during their time in school.</p><p>And three, the most important of Ant&oacute;n&rsquo;s goals was to give students an opportunity to share their excitement and concern about the challenge they were embarking on.</p><p>&ldquo;After the challenge course, we get together and discuss what they are most excited about working on their Ph.D.,&rdquo; Ant&oacute;n explained. &ldquo;You get all kinds of answers: I&rsquo;m excited to solve this problem; I&rsquo;m excited to work with this advisor; I&rsquo;m excited to become a professor when I finish. Then we ask what they&rsquo;re scared of. That&rsquo;s when you crack the nut open.&rdquo;</p><p>Students express concerns over things like not getting along with advisors or peers, fears of presenting papers at conferences or that their work won&rsquo;t even be accepted in the first place, worries about passing qualifying exams, and more.</p><p>&ldquo;But at the end of the day, when we&rsquo;ve gone through that circle, they realize that everyone else has the same concerns,&rdquo; Ant&oacute;n said. &ldquo;More than that, they have strategies and resources to go to within their new community to help them through.&rdquo;</p><h2>Click <a href="https://www.flickr.com/photos/ccgatech/albums/72157687741036505">HERE</a> for photos of IC&#39;s day at the Leadership Challenge Course.</h2><p><strong>Christopher Banks</strong>, who is pursuing his <a href="https://www.ic.gatech.edu/academics/robotics-phd-program">Ph.D. in robotics</a>, was one incoming student who said there was some trepidation in climbing onto the wires on the course.</p><p>&ldquo;I am afraid of heights, so I was very wary of the parts of the challenge course that required harnesses,&rdquo; he said. 
&ldquo;Luckily, with the support of my teammates, I was able to complete the course, something I would have never done under normal circumstances. I was definitely pushed out of my comfort zone.&rdquo;</p><p>He conceded that he likely wouldn&rsquo;t be tightrope walking anytime soon, but enjoyed the camaraderie that was built during the day.</p><p>There were also those who had experience in challenging climbing courses, like <strong>Nathan Hatch</strong>, who is pursuing his <a href="https://www.ic.gatech.edu/academics/computer-science-phd-program">Ph.D. in computer science</a>. Hatch said he enjoys rock climbing, and so the course was not a real challenge for him. But it was an opportunity to learn how to lead and share knowledge with others.</p><p>&ldquo;Hopefully, I was able to use my experience to encourage some of the others in my group,&rdquo; he said. &ldquo;In any case, these activities certainly built a rapport very quickly. I think they made it much easier to express our worries and hopes during the following group discussion. Everyone felt comfortable being honest in front of each other, which made the discussion very helpful.&rdquo;</p><p>In addition to the 14 incoming Ph.D. students, the <a href="https://www.ic.gatech.edu/academics/master-science-human-computer-interaction">MS HCI</a> program also took a sizeable group &ndash; around 50-60 &ndash; to participate in the course the following Saturday.</p><p>The Leadership Challenge Course is located off of Ferst Drive and is run by the <a href="http://www.crc.gatech.edu/">Campus Recreation Center</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1503679459</created>  <gmt_created>2017-08-25 16:44:19</gmt_created>  <changed>1503679459</changed>  <gmt_changed>2017-08-25 16:44:19</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Fourteen incoming Ph.D. students learn about overcoming challenges as they embark on their life in the IC Ph.D. 
program.]]></teaser>  <type>news</type>  <sentence><![CDATA[Fourteen incoming Ph.D. students learn about overcoming challenges as they embark on their life in the IC Ph.D. program.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-08-25T00:00:00-04:00</dateline>  <iso_dateline>2017-08-25T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-08-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>595051</item>      </media>  <hg_media>          <item>          <nid>595051</nid>          <type>image</type>          <title><![CDATA[IC at Leadership Challenge Course]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[LCC Main.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/LCC%20Main.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/LCC%20Main.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/LCC%2520Main.jpg?itok=bZyBK-pg]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A group of School of Interactive Computing Ph.D. students takes a break on the Leadership Challenge Course.]]></image_alt>                    <created>1503678725</created>          <gmt_created>2017-08-25 16:32:05</gmt_created>          <changed>1503678725</changed>          <gmt_changed>2017-08-25 16:32:05</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/academics/phd-programs]]></url>        <title><![CDATA[School of Interactive Computing Ph.D. 
Programs]]></title>      </link>          <link>        <url><![CDATA[http://www.crc.gatech.edu/leadership-challenge-course]]></url>        <title><![CDATA[Leadership Challenge Course]]></title>      </link>          <link>        <url><![CDATA[http://www.crc.gatech.edu/]]></url>        <title><![CDATA[Campus Recreation Center]]></title>      </link>          <link>        <url><![CDATA[https://www.flickr.com/photos/ccgatech/albums/72157687741036505]]></url>        <title><![CDATA[Photos from IC at the Leadership Challenge Course]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>      </categories>  <news_terms>          <term tid="134"><![CDATA[Student and Faculty]]></term>      </news_terms>  <keywords>          <keyword tid="19441"><![CDATA[Leadership Challenge Course]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="27641"><![CDATA[annie anton]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="594286">  <title><![CDATA[Five IC Ph.D. Students Selected for Premier Workshop at Stanford University]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Five&nbsp;<a href="http://ic.gatech.edu">School of Interactive Computing</a>&nbsp;Ph.D. students were selected to participate in the&nbsp;<a href="https://risingstars2017.stanford.edu/">Rising Stars in EECS 2017</a>&nbsp;workshop at Stanford University on Nov. 
5-7 of this year.</p><p><a href="https://www.tescafitzgerald.com/">Tesca Fitzgerald</a>&nbsp;(Computer Science),&nbsp;<a href="https://www.cc.gatech.edu/~vchu7/">Vivian Chu</a>&nbsp;(Robotics),&nbsp;<a href="https://www.cc.gatech.edu/people/barbara-ericson">Barbara Ericson</a>&nbsp;(Human-Centered Computing),&nbsp;<a href="https://www.cc.gatech.edu/~upavalan/">Umashanthi Pavalanathan</a>&nbsp;(Computer Science), and&nbsp;<a href="http://maiajacobs.com/">Maia Jacobs</a>&nbsp;(Human-Centered Computing) will participate in the workshop, which aims to bring together top senior Ph.D. and postdoctoral candidates preparing for careers in academia. It is organized by leading professors in computer science and electrical engineering and will entail scientific discussions and informal sessions aimed at navigating the early stages of an academic career.</p><p>Along with networking opportunities for participants, the workshop includes research presentations, panel discussions, and sessions on developing interviewing and promotional skills.</p><p>The application process consisted of a research statement, bio, curriculum vitae, and recommendation letters for each student. Around 60 applicants were selected from a competitive field of 323.</p><p>Fitzgerald&rsquo;s research lies at the intersections of human-robot interaction and cognitive systems. She develops algorithms and knowledge representations for robots to learn, adapt, and reuse knowledge through interaction with a human teacher. She is co-advised by IC Professor&nbsp;<a href="https://www.ic.gatech.edu/people/7068/ashok-goels">Ashok Goel</a>&nbsp;and former IC Associate Professor Andrea Thomaz, now at the University of Texas.</p><p>Chu&rsquo;s research interests include socially intelligent robots, interactive multi-sensory perception, natural language processing, and applying machine learning techniques for robotic learning in unstructured environments. 
She is co-advised by Thomaz and Assistant Professor&nbsp;<a href="https://www.ic.gatech.edu/people/11322/sonia-chernovas">Sonia Chernova</a>.</p><p>Ericson is also a senior research scientist in the College of Computing. Her research focuses on computing education, specifically on increasing the quality and quantity of secondary computing teachers and the quantity and diversity of computing students. She is the Director for Computing Outreach for the&nbsp;<a href="http://coweb.cc.gatech.edu/ice-gt/">Institute for Computing Education</a>&nbsp;in the College.</p><p>Pavalanathan&#39;s research deals with the computational analysis of language in online social media. Her thesis work focuses on computational approaches to understanding stylistic variation in online writing. She is a member of the&nbsp;<a href="https://gtnlp.wordpress.com/">Computational Linguistics Laboratory</a>&nbsp;and is advised by Assistant Professor Jacob Eisenstein.</p><p>Jacobs focuses on health informatics, mobile computing, and human-computer interaction. More broadly, she is interested in how mobile interfaces may be designed to address the changing needs and priorities of users. 
She is advised by Professor&nbsp;<a href="https://www.ic.gatech.edu/people/7122/elizabeth-mynatts">Beth Mynatt</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1502386851</created>  <gmt_created>2017-08-10 17:40:51</gmt_created>  <changed>1503604356</changed>  <gmt_changed>2017-08-24 19:52:36</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Maia Jacobs, Tesca Fitzgerald, Barbara Ericson, Umashanthi Pavalanathan, and Vivian Chu will attend the Rising Stars in EECS workshop in November.]]></teaser>  <type>news</type>  <sentence><![CDATA[Maia Jacobs, Tesca Fitzgerald, Barbara Ericson, Umashanthi Pavalanathan, and Vivian Chu will attend the Rising Stars in EECS workshop in November.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-08-10T00:00:00-04:00</dateline>  <iso_dateline>2017-08-10T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-08-10 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>594284</item>      </media>  <hg_media>          <item>          <nid>594284</nid>          <type>image</type>          <title><![CDATA[Maia Jacobs, Vivian Chu, and Tesca Fitzgerald for Rising Stars in EECS]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[TescaVivianMaia.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/TescaVivianMaia.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/TescaVivianMaia.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/TescaVivianMaia.png?itok=WbjTbx-n]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Maia Jacobs, Vivian Chu, and Tesca Fitzgerald were selected to participate at Rising Stars in EECS]]></image_alt>                    <created>1502386192</created>          <gmt_created>2017-08-10 17:29:52</gmt_created>          <changed>1502386192</changed>          <gmt_changed>2017-08-10 17:29:52</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ic.gatech.edu/academics/computer-science-phd-program]]></url>        <title><![CDATA[Computer Science Ph.D. Program]]></title>      </link>          <link>        <url><![CDATA[https://www.ic.gatech.edu/academics/robotics-phd-program]]></url>        <title><![CDATA[Robotics Ph.D. Program]]></title>      </link>          <link>        <url><![CDATA[https://www.ic.gatech.edu/academics/human-centered-computing-phd-program]]></url>        <title><![CDATA[Human-Centered Computing Ph.D. 
Program]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="175142"><![CDATA[rising stars in eecs]]></keyword>          <keyword tid="118671"><![CDATA[Maia Jacobs]]></keyword>          <keyword tid="69711"><![CDATA[Tesca Fitzgerald]]></keyword>          <keyword tid="172726"><![CDATA[Vivian Chu]]></keyword>          <keyword tid="10665"><![CDATA[barbara ericson]]></keyword>          <keyword tid="175317"><![CDATA[umashanthi pavalanathan]]></keyword>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="1051"><![CDATA[Computer Science]]></keyword>          <keyword tid="10621"><![CDATA[hcc]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="594242">  <title><![CDATA[IC Presents Eight Papers at CVPR 2017]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The College of Computing had a substantial presence&nbsp;at the <a href="http://cvpr2017.thecvf.com/">Computer Vision and Pattern Recognition 2017</a> (CVPR 2017) conference&nbsp;in Honolulu, Hawaii.</p><p>A total of eight faculty and students co-authored nine papers that were accepted and presented at the main conference.</p><p>From the <a href="http://ic.gatech.edu">School of Interactive Computing</a> (IC), Associate Professor <strong>James Hays</strong> contributed to two papers, one each with graduate students 
<strong>Patsorn Sangkloy</strong> (<a href="https://www.ic.gatech.edu/academics/computer-science-phd-program">Ph.D. CS</a>) and <strong>Samarth Brahmbhatt</strong> (<a href="https://www.ic.gatech.edu/academics/robotics-phd-program">Ph.D. Robotics</a>), Associate Professor <strong>Dhruv Batra</strong> contributed to four, including one with advisee <strong>Abhishek Das</strong> (Ph.D. CS), and Assistant Professor <strong>Devi Parikh</strong> contributed to five.</p><p>Associate Professor <strong>Le Song</strong> and his advisee <strong>Weiyang Liu</strong> (Ph.D. CS) from the <a href="http://cse.gatech.edu">School of Computational Science and Engineering</a> also presented a paper at the conference.</p><p>At least 13 alumni also attended the conference, two of whom &ndash; <strong>Gabe Brostow</strong> (Ph.D. CS, &rsquo;03) and <strong>Alireza Fathi</strong> (Ph.D. CS, &rsquo;13) &ndash; contributed to accepted papers.</p><p>Overall, the <a href="http://cc.gatech.edu">College of Computing</a> had nine main conference publications, seven invited workshop talks and demonstrations, two workshops organized, and four workshop publications. 
IC Professor <strong>Jim Rehg</strong> was also a program chair for the conference.</p><p>CVPR 2017 was held from July 21-26 at the Hawaii Convention Center and is the premier annual computer vision event, comprising the main conference and several co-located workshops and short courses.</p><p>The nine papers that current Georgia Tech faculty and students contributed to are listed with links and abstracts below:</p><ul><li><em><a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Sangkloy_Scribbler_Controlling_Deep_CVPR_2017_paper.pdf">Scribbler: Controlling Deep Image Synthesis with Sketch and Color</a></em> (<strong>Patsorn Sangkloy</strong>, Jingwan Lu, Chen Fang, Fisher Yu, <strong>James Hays</strong>)</li></ul><p><strong>ABSTRACT:</strong> Several recent works have used deep convolutional networks to generate realistic imagery. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch-based image synthesis system which allows users to scribble over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of the user. The network is feed-forward, which allows users to see the effect of their edits in real time. We compare to recent work on sketch-to-image synthesis and show that our approach generates more realistic, diverse, and controllable outputs. 
The architecture is also effective at user-guided colorization of grayscale images.</p><ul><li><em><a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Brahmbhatt_DeepNav_Learning_to_CVPR_2017_paper.pdf">DeepNav: Learning to Navigate Large Cities</a></em> (<strong>Samarth Brahmbhatt</strong>, <strong>James Hays</strong>)</li></ul><p><strong>ABSTRACT:</strong> We present DeepNav, a Convolutional Neural Network (CNN) based algorithm for navigating large cities using locally visible street-view images. The DeepNav agent learns to reach its destination quickly by making the correct navigation decisions at intersections. We collect a large-scale dataset of street-view images organized in a graph where nodes are connected by roads. This dataset contains 10 city graphs and more than 1 million street-view images. We propose 3 supervised learning approaches for the navigation task and show how A* search in the city graph can be used to generate supervision for the learning. Our annotation process is fully automated using publicly available mapping services and requires no human input. We evaluate the proposed DeepNav models on 4 held-out cities for navigating to 5 different types of destinations. Our algorithms outperform previous work that uses hand-crafted features and Support Vector Regression (SVR).</p><ul><li><em><a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Sun_Bidirectional_Beam_Search_CVPR_2017_paper.pdf">Bidirectional Beam Search: Forward-Backward Inference in Neural Sequence Models for Fill-in-the-Blank Image Captioning</a></em> (Qing Sun, Stefan Lee, <strong>Dhruv Batra</strong>)</li></ul><p><strong>ABSTRACT:</strong> We develop the first approximate inference algorithm for 1-Best (and M-Best) decoding in bidirectional neural sequence models by extending Beam Search (BS) to reason about both forward and backward time dependencies. 
Beam Search (BS) is a widely used approximate inference algorithm for decoding sequences from unidirectional neural sequence models. Interestingly, approximate inference in bidirectional models remains an open problem, despite their significant advantage in modeling information from both the past and future. To enable the use of bidirectional models, we present Bidirectional Beam Search (BiBS), an efficient algorithm for approximate bidirectional inference. To evaluate our method and as an interesting problem in its own right, we introduce a novel Fill-in-the-Blank Image Captioning task which requires reasoning about both past and future sentence structure to reconstruct sensible image descriptions. We use this task as well as the Visual Madlibs dataset to demonstrate the effectiveness of our approach, consistently outperforming all baseline methods.</p><ul><li><em><a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Goyal_Making_the_v_CVPR_2017_paper.pdf">Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering</a></em> (Yash Goyal, Tejas Khot, Douglas Summers-Stay, <strong>Dhruv Batra</strong>, <strong>Devi Parikh</strong>)</li></ul><p><strong>ABSTRACT:</strong> Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! 
Specifically, we balance the popular VQA dataset [3] by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at http://visualqa.org/ as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-the-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counterexample-based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.</p><ul><li><em><a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Chattopadhyay_Counting_Everyday_Objects_CVPR_2017_paper.pdf">Counting Everyday Objects in Everyday Scenes</a></em> (Prithvijit Chattopadhyay, Ramakrishna Vedantam, Ramprasaath R. Selvaraju, <strong>Dhruv Batra, Devi Parikh</strong>)</li></ul><p><strong>ABSTRACT: </strong>We are interested in counting the number of instances of object classes in natural, everyday images. Previous counting approaches tackle the problem in restricted domains such as counting pedestrians in surveillance videos. 
Counts can also be estimated from outputs of other vision tasks like object detection. In this work, we build dedicated models for counting designed to tackle the large variance in counts, appearances, and scales of objects found in natural scenes. Our approach is inspired by the phenomenon of subitizing &ndash; the ability of humans to make quick assessments of counts given a perceptual signal, for small count values. Given a natural scene, we employ a divide and conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Our approach offers consistent improvements over numerous baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets. Subsequently, we study how counting can be used to improve object detection. We then show a proof of concept application of our counting methods to the task of Visual Question Answering, by studying the &lsquo;how many?&rsquo; questions in the VQA and COCO-QA datasets.</p><ul><li><em><a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Lu_Knowing_When_to_CVPR_2017_paper.pdf">Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning</a> </em>(Jiasen Lu, Caiming Xiong, <strong>Devi Parikh</strong>, Richard Socher)</li></ul><p><strong>ABSTRACT:</strong> Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as &ldquo;the&rdquo; and &ldquo;of&rdquo;. Other words that may seem visual can often be predicted reliably just from the language model e.g., &ldquo;sign&rdquo; after &ldquo;behind a red stop&rdquo; or &ldquo;phone&rdquo; following &ldquo;talking on a cell&rdquo;. In this paper, we propose a novel adaptive attention model with a visual sentinel. 
At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.</p><ul><li><em><a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Das_Visual_Dialog_CVPR_2017_paper.pdf">Visual Dialog</a></em> (<strong>Abhishek Das</strong>, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos&eacute; M.F. Moura, <strong>Devi Parikh, Dhruv Batra</strong>)</li></ul><p><strong>ABSTRACT:</strong> We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial contains 1 dialog (10 question-answer pairs) on &sim;140k images from the COCO dataset, with a total of &sim;1.4M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders (Late Fusion, Hierarchical Recurrent Encoder and Memory Network) and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. 
We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify the gap between machine and human performance on the Visual Dialog task via human studies. Our dataset, code, and trained models will be released publicly at visualdialog.org. Putting it all together, we demonstrate the first &lsquo;visual chatbot&rsquo;!</p><ul><li><em><a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Vedantam_Context-Aware_Captions_From_CVPR_2017_paper.pdf">Context-aware Captions from Context-agnostic Supervision</a></em> (Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, <strong>Devi Parikh</strong>, Gal Chechik)</li></ul><p><strong>ABSTRACT:</strong> We introduce an inference technique to produce discriminative context-aware image captions (captions that describe differences between images or visual concepts) using only generic context-agnostic training data (captions that describe a concept or an image in isolation). For example, given images and captions of &ldquo;Siamese cat&rdquo; and &ldquo;tiger cat&rdquo;, we generate language that describes the &ldquo;Siamese cat&rdquo; in a way that distinguishes it from &ldquo;tiger cat&rdquo;. Our key novelty is that we show how to do joint inference over a language model that is context-agnostic and a listener which distinguishes closely-related concepts. We first apply our technique to a justification task, namely to describe why an image contains a particular fine-grained category as opposed to another closely-related category of the CUB-200-2011 dataset. We then study discriminative image captioning to generate language that uniquely refers to one of two semantically-similar images in the COCO dataset. 
Evaluations with discriminative ground truth for justification and human studies for discriminative image captioning reveal that our approach outperforms baseline generative and speaker-listener approaches for discrimination.</p><ul><li><em><a href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Liu_SphereFace_Deep_Hypersphere_CVPR_2017_paper.pdf">SphereFace: Deep Hypersphere Embedding for Face Recognition</a></em> (<strong>Weiyang Liu</strong>, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, <strong>Le Song</strong>)</li></ul><p><strong>ABSTRACT:</strong> This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. 
Extensive analysis and experiments on Labeled Faces in the Wild (LFW), YouTube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1502291774</created>  <gmt_created>2017-08-09 15:16:14</gmt_created>  <changed>1502291835</changed>  <gmt_changed>2017-08-09 15:17:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Assistant Professor Devi Parikh contributed to five of the papers, while Associate Professors Dhruv Batra and James Hays contributed to four and two, respectively.]]></teaser>  <type>news</type>  <sentence><![CDATA[Assistant Professor Devi Parikh contributed to five of the papers, while Associate Professors Dhruv Batra and James Hays contributed to four and two, respectively.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-08-09T00:00:00-04:00</dateline>  <iso_dateline>2017-08-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-08-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>594240</item>      </media>  <hg_media>          <item>          <nid>594240</nid>          <type>image</type>          <title><![CDATA[CVPR Image]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[CVPRLogo3.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/CVPRLogo3.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/CVPRLogo3.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/CVPRLogo3.jpg?itok=7rVK2F2_]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[The Computer Vision and Pattern Recognition conference was held in Honolulu on July 21-26.]]></image_alt>                    <created>1502290962</created>          <gmt_created>2017-08-09 15:02:42</gmt_created>          <changed>1502290962</changed>          <gmt_changed>2017-08-09 15:02:42</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="173615"><![CDATA[dhruv batra]]></keyword>          <keyword tid="173616"><![CDATA[devi parikh]]></keyword>          <keyword tid="169167"><![CDATA[james hays]]></keyword>          <keyword tid="11506"><![CDATA[computer vision]]></keyword>          <keyword tid="8550"><![CDATA[visual pattern recognition]]></keyword>          <keyword tid="175127"><![CDATA[cvpr 2017]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="593471">  <title><![CDATA[What Machine 
Learning Will Change (Hint: Everything)]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Is that an image of a cat? It&rsquo;s a simple question for human beings, but was a tough one for machines&mdash;until recently. Today, if you type &ldquo;Siamese cats&rdquo; into Google&rsquo;s image search engine, voil&agrave;!, you&rsquo;ll be presented with scores of Siamese cats, categorized by breed (&ldquo;lilac point,&rdquo; &ldquo;tortie point,&rdquo; &ldquo;chocolate point&rdquo;), as well as other qualities, such as &ldquo;kitten&rdquo; or &ldquo;furry.&rdquo;&nbsp;</p><p>What&rsquo;s key here is that while some of the images carry identifying, machine-readable text or meta information, many do not. Yet the search still found them. How? The answer is that the pictures&mdash;more accurately, a pattern in the pictures&mdash;was recognized as &ldquo;Siamese cat&rdquo; by a machine, without requiring a human to classify each instance.</p><p>This is machine learning. At its core, machine learning upends the programming model, forgoing the hard-coded &ldquo;if this, then that&rdquo; instructions and explicit rules. Instead, it uses an artificial neural network (ANN)&mdash;a statistical model directly inspired by biological neural networks&mdash;that is &ldquo;trained&rdquo; on some data set (the bigger, the better) to accomplish some new task that uses similar but yet unknown data.&nbsp;</p><p>The data comes first in machine learning. The system finds its own way, adjusting and refining its model, iteratively.&nbsp;</p><p>But back to Siamese cats. Computer vision researchers worked on image recognition for decades, but Google effectively perfected it in months once the company developed a machine-learning algorithm. 
Today, machine-learning facial recognition systems for mug shots and passport photos outperform human operators.&nbsp;<br /><br /><strong>Not New But Definitely Now</strong><br />In fact, machine learning, neural networks and pattern recognition aren&rsquo;t new. In 1950, a computer program was written that improved its checkers performance the more it played (by studying winning strategies and incorporating these into its own program). In 1957, the first neural network for computers (the Perceptron) was designed. In 1967, the &ldquo;nearest neighbor&rdquo; algorithm, which allowed a computer to do very basic pattern recognition, was created.&nbsp;</p><p>Indeed, some would say that Alan Turing&rsquo;s famous machine that ultimately broke the German &ldquo;Enigma&rdquo; code during World War II was an instance of machine learning&mdash;in that it observed incoming data, analyzed it and extracted information.&nbsp;</p><p>So why has machine learning exploded on the scene now, pervading fields as diverse as marketing, health care, manufacturing, information security and transportation?&nbsp;</p><p>Researchers at Georgia Tech say the explanation is the confluence of three things:&nbsp;<br /><br />1. Faster, more powerful computer hardware (parallel processors, GPUs, etc.)<br />2. Software algorithms to take advantage of these computational architectures<br />3. Loads and loads of data for training (digitized documents, internet social media posts, YouTube videos, GPS coordinates, electronic health records, and, the fastest-growing category, all those networked sensors and processors behind the much-heralded Internet of Things).<br /><br />This digitalization began in earnest in the 1990s. According to IDC Research, digital data will grow at a compound annual growth rate of 42 percent through 2020. 
In the 2010-20 decade, the world&rsquo;s data will grow by 50 times, from about one Zettabyte (1ZB) in 2010 to about 50ZB in 2020.&nbsp;</p><p>These oceans of data and data sources not only enable machine learning, but also, in a sense, they create an urgent need for it, offering a solution to the human programmer bottleneck. &ldquo;The usual way of programming computers these days is, you write a program,&rdquo; says Irfan Essa, director of Tech&rsquo;s new Center for Machine Learning. &ldquo;Now we&rsquo;re saying, that cannot scale.&rdquo;&nbsp;</p><p>There are simply too many data sources, arriving too fast.</p><p>The ability of these systems to quickly and reliably make inferences from data has galvanized the attention of the world&rsquo;s biggest technology players and businesses, who&rsquo;ve seen the commercial benefits and opportunities.&nbsp;</p><p>&ldquo;It created a disruption,&rdquo; says Essa, who also serves as associate dean of the College of Computing, a professor in the School of Interactive Computing and an adjunct professor in the School of Electrical and Computer Engineering.&nbsp;</p><p>As Jeff Bezos, CEO of Amazon, put it in his widely circulated April 2017 letter to company shareholders, Amazon&rsquo;s use of machine learning in its autonomous delivery drones and speech-controlled assistant Alexa is only part of the story.</p><p>&ldquo;Machine learning drives our algorithms for demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations and much more,&rdquo; Bezos wrote. 
&ldquo;Though less visible, much of the impact of machine learning will be of this type&mdash;quietly but meaningfully improving core operations.&rdquo;</p><p>Two other drivers for the rapid growth of machine learning have been the widely available, open-source toolkits (such as Google&rsquo;s TensorFlow) that can rapidly prototype a machine learning system, and cloud-based storage and computation services to host it.&nbsp;<br />This April, for instance, Amazon Web Services announced that Amazon Lex, the artificial intelligence (AI) service used to create applications that can interact with users via voice and text&mdash;and the technology behind Amazon Alexa&mdash;would be available to Amazon Web Services customers.</p><p>&ldquo;You can build a startup very, very fast,&rdquo; says Sebastian Pokutta, Georgia Tech&rsquo;s David M. McKenney Family Associate Professor in the H. Milton Stewart School for Industrial and Systems Engineering, and associate director of the Center for Machine Learning (ML@GT). &ldquo;Before, machine learning was very academic and somewhat esoteric. Now we have a toolbox that I can give a student, and within a week they can create something that&rsquo;s usable.&rdquo;</p><p><strong>Natural Language: Going Deeper</strong><br />Like image recognition, speech recognition has seen great strides thanks to machine learning. Consider Amazon&rsquo;s Alexa or Google Home, two darlings in the speech-controlled appliance space.&nbsp;</p><p>Georgia Tech researchers aren&rsquo;t competing with these new commercial efforts. &ldquo;We&rsquo;re working on things that we hope will be important components of systems in the much longer term,&rdquo; says Jacob Eisenstein, assistant professor in the School of Interactive Computing, where he leads the Computational Linguistics Laboratory. 
&ldquo;As a field right now, we&rsquo;re the intersection of machine learning and linguistics.&rdquo;&nbsp;</p><p>That said, Eisenstein points out that Google quietly incorporates increasingly sophisticated natural language processing into its search system every few months.&nbsp;</p><p>&ldquo;What I think they&rsquo;re doing is drawing ideas from the research literature, from the stuff that&rsquo;s produced at universities like Georgia Tech,&rdquo; he says.&nbsp;</p><p>Highlighting the market excitement over speech control, Eisenstein notes that five former Tech students are working at Amazon on Alexa development, as are a number of his undergrads and master&rsquo;s students.&nbsp;</p><p>So, what sorts of problems are Eisenstein and his colleagues working to solve?&nbsp;</p><p>&ldquo;Imagine you are interested in some new area of research, and could have a system that summarizes the 15 most important papers in that field into a four-page document,&rdquo; Eisenstein says.</p><p>But creating such a system goes far beyond word or phrase recognition. &ldquo;We know that to understand language, you have to have some understanding of linguistic structure&mdash;how sentences are put together,&rdquo; he explains. Language understanding is hard, from a machine standpoint, because it has very deep, nested structures.&nbsp;</p><p>Tackling subjects like language or other complex, non-linear relationships has given rise to a subset of machine learning known as deep learning. A deep neural network is an artificial neural network with multiple hidden layers between the input and output layers.&nbsp;<br /><br /><strong>Black Box Problems</strong><br />However, those hidden layers give rise to a black box problem. That is, if the artificial neural network contains hidden layers, its processes aren&rsquo;t transparent. 
To take a real-world example: how do we audit the autonomous car&rsquo;s decision to swerve right, not left?&nbsp;</p><p>That&rsquo;s an area of study for Dhruv Batra, an assistant professor in the School of Interactive Computing. His research aims to develop theory, algorithms and implementations for transparent deep neural networks that are able to provide explanations for their predictions, and to study the effect of these transparent networks and their explanations on user trust and perceived trustworthiness.</p><p>According to Batra: &ldquo;We have to be a little careful though, because if we tack on the explanatory piece&mdash;&lsquo;That&rsquo;s why I&rsquo;m calling this a cat&rsquo;&mdash;the system may learn to produce an explanation, a post hoc justification that may not have anything to do with its choice.&rdquo;&nbsp;</p><p>Other problems range from the practical, &ldquo;How can we remove human bias when setting up the algorithm?&rdquo; to the unexpectedly philosophical, &ldquo;How can we be sure these systems are, in fact, learning the right things?&rdquo;</p><p>Tech researchers are hard at work on these fascinating questions.&nbsp;</p><p>Essa admits there&rsquo;s a lot of hype around machine learning right now. But he notes that people are very good at overestimating the impact of technology in the short term, yet underestimating it in the long run.&nbsp;</p><p>If optical character recognition and, increasingly, speech recognition are taken for granted because they &ldquo;just work,&rdquo; there are other technologies that are far from perfect.&nbsp;</p><p>&ldquo;And we&rsquo;d like them to be perfect, which is why research and development needs to continue,&rdquo; Essa says.&nbsp;</p><p>Machine learning may even play a role in improving how Georgia Tech students are taught in the future.&nbsp;</p><p>&ldquo;At Tech we have a lot of educational data,&rdquo; he says. 
&ldquo;How do we now use that data to learn more about and support our student body&mdash;learn more about their learning, and provide the right kinds of guidance and support?&rdquo;</p><p>&nbsp;</p><h2><strong>INSIDE MACHINE LEARNING @ GEORGIA TECH</strong></h2><p>&ldquo;At Georgia Tech, we recognize machine learning to be a game-changer not just in computer science, but in a broad range of scientific, engineering, and business disciplines and practices,&rdquo; writes Irfan Essa, the inaugural director of the Center for Machine Learning at Georgia Tech (ML@GT), in his welcome note on the Center&rsquo;s web page.</p><p>Launched in June 2016, ML@GT is an interdisciplinary research center that combines assets from the College of Computing, the H. Milton Stewart School of Industrial and Systems Engineering and the School of Electrical and Computer Engineering. Its faculty, students and industry partners are working on research and real-world applications of machine learning in a variety of areas, including machine vision, information security, healthcare, logistics and supply chain, finance and education, among others.&nbsp;</p><p>The center truly is a collaborative effort across campus, with 125 to 150 Tech faculty involved, and more than 400 students, says Sebastian Pokutta, David M. McKenney Family Associate Professor in the School of Industrial and Systems Engineering, and an associate director of ML@GT. &ldquo;Tech has always had a lot of researchers working on machine learning, but they&rsquo;d been spread out, working in different departments independently,&rdquo; Pokutta says. 
&ldquo;There wasn&rsquo;t a real community on campus.&rdquo;</p><p>Echoing Essa&rsquo;s message, Pokutta says the goal of the Center is straightforward and daring: &ldquo;We want to become the leader in bringing together computing, learning, data and engineering.&rdquo;&nbsp;</p><p>True, there are other machine learning centers in higher ed&mdash;MIT, Columbia, Carnegie Mellon&mdash;but most focus on combining computing and statistics.&nbsp;</p><p>&ldquo;One of the unique things about Georgia Tech, since we&rsquo;re a big engineering school, is our machine learning effort is really closely embedded with our engineering units,&rdquo; Essa says. &ldquo;We&rsquo;re close to the sensor, close to the processor, close to the actuator.&rdquo;&nbsp;<br />This matters because of what is known as &ldquo;edge computing&rdquo;: the concept of moving applications, data and services to the logical extremes of a network, so that knowledge generation can occur at the point of action.</p><p>The objective is to use Tech&rsquo;s engineering prowess&mdash;and data-driven techniques&mdash;to help design the next generation of technologies and methodologies.</p><p>&nbsp;</p><h2><strong>MACHINE LEARNING&#39;S IMPACT ON PRECISION MEDICINE</strong></h2><p>Healthcare offers a rich source of data to machine learning researchers. There are scanned and electronic health records, claims data, procedure results, lab tests, genetics studies, and even telemetry from devices like heart monitors and wearables like Fitbits and smart watches.&nbsp;</p><p>A number of Georgia Tech&rsquo;s researchers are mining this data to better understand health outcomes at scale and to ultimately figure out the right treatment for each individual patient. 
This is known as individualized or precision medicine.&nbsp;<br />Jacob Eisenstein, an assistant professor in the School of Interactive Computing, and Jimeng Sun, an associate professor in the School of Computational Science and Engineering, are mining the text in electronic health records to better understand health outcomes at scale.&nbsp;</p><p>Today, patients and doctors try rounds of treatments for ailments, looking for the best fit. &ldquo;There&rsquo;s a lot of trial and error,&rdquo; Eisenstein explains. The project hopes to reduce that by systematizing treatment based on a deeper understanding of patients, treatments and outcomes.&nbsp;</p><p>Last year, Sun was part of a group of researchers who developed a new, accurate-but-interpretable approach for machine learning in medicine.&nbsp;</p><p>Their Reverse Time Attention model (RETAIN) achieves high accuracy while remaining clinically interpretable. It is based on a two-level neural attention model that detects influential past visits and significant clinical variables within those visits (e.g., key diagnoses). RETAIN was tested on a large health system dataset with 14 million visits completed by 263,000 patients over an eight-year period and demonstrated predictive accuracy and computational scalability comparable to state-of-the-art methods such as recurrent neural networks, and ease of interpretability comparable to traditional models (logistic regression).</p><p>In other work, Tech professors and students are analyzing data from Geisinger, a hospital network in Pennsylvania, to help predict the risk for sepsis and septic shock in patients before they are admitted to the hospital. 
Other researchers within the School of Industrial and Systems Engineering&rsquo;s Health Analytics group are collecting health care utilization data involving millions of individuals for events such as hospitalizations that can be used in estimating the cost savings of preventive care.</p><h2><strong>PHOTO FINISH:&nbsp;<br />Why Facebook and Amazon Want to &ldquo;See&rdquo; Your Images Better</strong></h2><p>Facebook&rsquo;s interest in having machines better assess the billions of images uploaded to its platform&mdash;in order to describe, rank or even delete objectionable images&mdash;is obvious.</p><p>Georgia Tech faculty Dhruv Batra and Devi Parikh&mdash;married partners both in life and at work&mdash;are assistant professors in the College of Computing&rsquo;s School of Interactive Computing who are currently serving as visiting researchers at Facebook Artificial Intelligence Research (FAIR).&nbsp;</p><p>At Facebook, the duo is working on ways to improve the interaction between human beings, a machine platform and images posted on the social network platform. In April 2016, Facebook began automatically describing the content of photos to blind and visually impaired users. Called &ldquo;automatic alternative text,&rdquo; the feature was created by Facebook&rsquo;s accessibility team. The technology also works for Facebook versions in countries with limited internet speeds or that don&rsquo;t allow visual content.</p><p>And last December, Batra and Parikh also received Amazon Academic Research Awards for a pair of projects they are leading in computer vision and machine learning. 
They received $100,000 each from Amazon&mdash;$80,000 in gift money and $20,000 in Amazon Web Services credit&mdash;for projects that aim to produce the next generation of artificial intelligence agents.<br />Batra and Parikh are using giant image data sets with human annotations that have been built up at Mechanical Turk, Amazon&rsquo;s crowdsourcing internet marketplace.&nbsp;</p><p>One project, Visual Dialog, led by Batra, aims at creating an AI agent able to hold a meaningful dialogue with humans in natural, conversational language about visual content. Facebook can already generate automatic alternative text for an image, explains Batra. So a user can be told, &ldquo;This picture may contain a mug, a person, a cat.&rdquo; The goal, he said, is to go much further&mdash;to offer not only more information about the image but also engage the user in a dialog.&nbsp;</p><p>Training the machine learning algorithm for the task requires a huge data set&mdash;as many as 200,000 conversations on the same set of images, each conversation including 10 rounds of questions and answers (or roughly 2 million question-and-answer pairs).</p><p>Another project, titled &ldquo;Counting Everyday Objects in Everyday Scenes,&rdquo; is led by Parikh, and aims to enable an AI to count the number of objects belonging to the same category. One particularly interesting approach will try to estimate the counts of objects in one try by just glancing at the image as a whole. 
This is inspired by &ldquo;subitizing&rdquo;&mdash;an ability humans inherently possess to see a small number of objects and know how many there are without having to explicitly count.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1499872963</created>  <gmt_created>2017-07-12 15:22:43</gmt_created>  <changed>1500472409</changed>  <gmt_changed>2017-07-19 13:53:29</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Today, computer algorithms poring over vast datasets can derive predictions or models from that data—all on their own.  The “programming” paradigm has been upended.  Welcome to the Machine Learning Revolution.]]></teaser>  <type>news</type>  <sentence><![CDATA[Today, computer algorithms poring over vast datasets can derive predictions or models from that data—all on their own.  The “programming” paradigm has been upended.  Welcome to the Machine Learning Revolution.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-07-12T00:00:00-04:00</dateline>  <iso_dateline>2017-07-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-07-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Roger Slavens</p><p>Georgia Tech Alumni Magazine</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>593470</item>      </media>  <hg_media>          <item>          <nid>593470</nid>          <type>image</type>          <title><![CDATA[Machine Learning Robot Portrait]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ML Robot.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ML%20Robot.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ML%20Robot.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ML%2520Robot.jpg?itok=G0Yq2fKj]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A robot reads a book while sitting on a stack of other books.]]></image_alt>                    <created>1499872729</created>          <gmt_created>2017-07-12 15:18:49</gmt_created>          <changed>1499872729</changed>          <gmt_changed>2017-07-12 15:18:49</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="576481"><![CDATA[ML@GT]]></group>      </groups>  <categories>          <category tid="135"><![CDATA[Research]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="135"><![CDATA[Research]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="174914"><![CDATA[Machine Learning; College of Computing; Robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="593322">  <title><![CDATA[Georgia Tech Hosts International Conference on 
Computational Creativity]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The Georgia Institute of Technology hosted the 2017 <a href="http://computationalcreativity.net/iccc2017/">International Conference on Computational Creativity</a> on June 19-23.</p><p>The five-day event was attended by computing faculty and students from around the world. School of Interactive Computing Professor&nbsp;<strong>Ashok Goel </strong>served as the general chair for the event.</p><p>The local chair was graduate research assistant&nbsp;<strong>Mikhail Jacob</strong>, and the local committee comprised&nbsp;<strong>Jeff Collins</strong>,&nbsp;<strong>Heather&nbsp;Liger</strong>,&nbsp;<strong>Gerard Roma</strong>, and&nbsp;<strong>Anna Xamb&oacute;</strong>. IC Ph.D. student&nbsp;<strong>Matthew Guzdial</strong>&nbsp;served as the media chair, and the media committee was led by&nbsp;<strong>Anna Weisling</strong>.</p><p><strong>Bobbie Eicher</strong>,&nbsp;<strong>Tesca Fitzgerald</strong>, and&nbsp;<strong>Duri Long</strong>&nbsp;were student volunteers;&nbsp;<strong>Nicholas Davis</strong>,&nbsp;<strong>Katherine Fu</strong>,&nbsp;<strong>Julie Linsey</strong>, and&nbsp;<strong>Brian Magerko&nbsp;</strong>served on the program committee; and&nbsp;<strong>Gil Weinberg&nbsp;</strong>from the School of Music Technology gave one of the keynote talks.</p><p>Fitzgerald presented a paper,&nbsp;<a href="http://gatech.us3.list-manage.com/track/click?u=10091ee4bef3165440405cf07&amp;id=90c7834c06&amp;e=09dc537555" target="_blank"><em>Human-Robot Co-Creativity: Task Transfer on a Spectrum of Similarity</em></a>, which she co-authored with Goel and&nbsp;<strong>Andrea Thomaz</strong>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1499453976</created>  <gmt_created>2017-07-07 18:59:36</gmt_created>  <changed>1499453976</changed>  <gmt_changed>2017-07-07 18:59:36</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The five-day event was 
attended by computing faculty and students from around the world.]]></teaser>  <type>news</type>  <sentence><![CDATA[The five-day event was attended by computing faculty and students from around the world.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-07-07T00:00:00-04:00</dateline>  <iso_dateline>2017-07-07T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-07-07 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>593321</item>      </media>  <hg_media>          <item>          <nid>593321</nid>          <type>image</type>          <title><![CDATA[Gil Weinberg ICCC]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Gil Weinberg ICCC.JPG]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Gil%20Weinberg%20ICCC.JPG]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Gil%20Weinberg%20ICCC.JPG]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Gil%2520Weinberg%2520ICCC.JPG?itok=g_3XQjF9]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Gil Weinberg gives the closing keynote during ICCC 2017 at Georgia Tech.]]></image_alt>                    <created>1499453790</created>          <gmt_created>2017-07-07 18:56:30</gmt_created>          <changed>1499453790</changed>          <gmt_changed>2017-07-07 18:56:30</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group 
id="1299"><![CDATA[GVU Center]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="173295"><![CDATA[ICCC]]></keyword>          <keyword tid="246"><![CDATA[Georgia Institute of Technology]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="112431"><![CDATA[ashok goel]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="592858">  <title><![CDATA[Selfies: We Love How We Look and We’re Here to Show You]]></title>  <uid>27592</uid>  <body><![CDATA[<p>When it comes to selfies, appearance is (almost) everything. &nbsp;</p><p>To better understand the photographic phenomenon and how people form their identities online, Georgia Institute of Technology researchers combed through 2.5 million selfie posts on Instagram to determine what kinds of identity statements people make by taking and sharing selfies.</p><p>Nearly 52 percent of all selfies fell into the appearance category: pictures of people showing off their make-up, clothes, lips, etc. Pics about looks were two times more popular than the other 14 categories combined. After appearances, social selfies with friends, loved ones and pets were the most common (14 percent). 
Then came ethnicity pics (13 percent), travel (7 percent), and health and fitness (5 percent).</p><h4>►&nbsp;<a href="https://public.tableau.com/views/SelfieResearch/Dashboard1?:embed=y&amp;:display_count=no&amp;publish=yes" target="_blank">Explore the Top Selfie Trends</a></h4><p>The researchers noted that the prevalence of ethnicity selfies (selfies about a person&rsquo;s ethnicity, nationality or country of origin) is an indication that people are proud of their backgrounds. They also found that most selfies are solo pictures, rather than taken with a group.</p><p>The data was gathered in the summer of 2015. The Georgia Tech team believes the study is the first large-scale empirical research on selfies.</p><p>Overall, an overwhelming 57 percent of selfies on Instagram were posted by the 18-35-year-old crowd, something the researchers say isn&rsquo;t too surprising considering the demographics of the social media platform. The under-18 age group posted about 30 percent of selfies. The older crowd (35+) shared them far less frequently (13 percent). Appearance was most popular among all age groups.</p><p>Lead author Julia Deeb-Swihart says selfies are an identity performance &ndash; meaning that users carefully craft the way they appear online and that selfies are an extension of that. This is similar to William Shakespeare&rsquo;s famous line: &ldquo;All the world&rsquo;s a stage, and all the men and women merely players.&rdquo;</p><p>&ldquo;Just like on other social media channels, people project an identity that promotes their wealth, health and physical attractiveness,&rdquo; Deeb-Swihart said. 
&ldquo;With selfies, we decide how to present ourselves to the audience, and the audience decides how it perceives you.&rdquo;</p><p>This work is grounded in the theory presented by Erving Goffman in <em>The Presentation of Self in Everyday Life.</em> The clothes we choose to wear and the social roles we play are all designed to control the version of ourselves we want our peers to see.</p><p>&ldquo;Selfies, in a sense, are the blending of our online and offline selves,&rdquo; Deeb-Swihart said. &ldquo;It&rsquo;s a way to prove what is true in your life, or at least what you want people to believe is true.&rdquo;</p><p>The researchers gathered the data by searching for &ldquo;#selfie,&rdquo; then used computer vision to confirm that the pictures actually included faces. Nearly half of them didn&rsquo;t. They found plenty of spam with blank images or text. The accounts were using the hashtag to show up in more searches to gain more followers.</p><p>The study, &ldquo;Selfie-Presentation in Everyday Life: A Large-scale Characterization of Selfie Contexts on Instagram,&rdquo; was presented in May at the <a href="http://www.icwsm.org/2017/index.php">International AAAI Conference on Web and Social Media</a> in Montreal.</p><p>&nbsp;</p><p><em>Funding and sponsorship was provided by the U.S. Army Research Office (ARO) and Defense Advanced Research Projects Agency (DARPA) under Contract No. W911NF- 12-1-0043. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors. </em></p><p>&nbsp;</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1498054290</created>  <gmt_created>2017-06-21 14:11:30</gmt_created>  <changed>1498224702</changed>  <gmt_changed>2017-06-23 13:31:42</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[When it comes to selfies, appearance is (almost) everything.  
]]></teaser>  <type>news</type>  <sentence><![CDATA[When it comes to selfies, appearance is (almost) everything.  ]]></sentence>  <summary><![CDATA[<p>To better understand the photographic phenomenon and how people form their identities online, Georgia Institute of Technology researchers combed through 2.5 million selfie posts on Instagram to determine what kinds of identity statements people make by taking and sharing selfies.</p>]]></summary>  <dateline>2017-06-21T00:00:00-04:00</dateline>  <iso_dateline>2017-06-21T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-06-21 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[Study identifies most popular selfies for men and women, age ]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[maderer@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Jason Maderer<br />National Media Relations<br /><a href="mailto:maderer@gatech.edu">maderer@gatech.edu</a><br />404-660-2926</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>592860</item>          <item>592861</item>      </media>  <hg_media>          <item>          <nid>592860</nid>          <type>image</type>          <title><![CDATA[Most Popular Selfie Type]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Most Popular Selfie Type_GT.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Most%20Popular%20Selfie%20Type_GT.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Most%20Popular%20Selfie%20Type_GT.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Most%2520Popular%2520Selfie%2520Type_GT.jpg?itok=1OVLiYAQ]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    
<created>1498056681</created>          <gmt_created>2017-06-21 14:51:21</gmt_created>          <changed>1498056681</changed>          <gmt_changed>2017-06-21 14:51:21</gmt_changed>      </item>          <item>          <nid>592861</nid>          <type>image</type>          <title><![CDATA[Who Shares Selfies the Most]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Who Shares Selfies the Most_GT.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Who%20Shares%20Selfies%20the%20Most_GT.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Who%20Shares%20Selfies%20the%20Most_GT.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Who%2520Shares%2520Selfies%2520the%2520Most_GT.jpg?itok=5BZ1pBpK]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1498056727</created>          <gmt_created>2017-06-21 14:52:07</gmt_created>          <changed>1498056727</changed>          <gmt_changed>2017-06-21 14:52:07</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://public.tableau.com/views/SelfieResearch/Dashboard1?:embed=y&amp;:display_count=no&amp;publish=yes&amp;:showVizHome=no]]></url>        <title><![CDATA[Interactive Selfie Visualization]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer 
Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="174742"><![CDATA[selfies]]></keyword>          <keyword tid="174743"><![CDATA[Julie Deeb-Swihart]]></keyword>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="592891">  <title><![CDATA[IC Professor and Student Merge Passions for Golf and CS With Interactive Visualization]]></title>  <uid>33939</uid>  <body><![CDATA[<p>One Georgia Institute of Technology professor and his recently graduated student are merging their passions for golf and computer science to help create an interactive visualization for avid golfers everywhere.</p><p>Utilizing available &ldquo;Top 100&rdquo; lists from <em>Golf</em> and <em>Golf Digest</em> magazines, former student <strong>Josh Kulas</strong>, who graduated in May with a Bachelor of Science in Industrial Engineering, Professor <strong>John Stasko</strong>, director of the <a href="http://www.cc.gatech.edu/gvu/ii/">Information Interfaces Lab</a> in Georgia Tech&rsquo;s <a href="http://ic.gatech.edu">School of Interactive Computing</a>, and current computer science Ph.D. student <strong>John Thompson</strong> created a <a href="http://www.cc.gatech.edu/gvu/ii/sportvis/golfcourses/">visual tool</a> to help golfers quantify their experiences playing some of the nation&rsquo;s greatest courses.</p><p>Count Kulas and Stasko among that enthusiastic community.</p><p>Kulas played golf regularly in high school with a self-reported handicap of around 6 or 7. He placed second in the county in his senior year in Champaign, Ill., with a personal best score of 73. 
Stasko, whose dad was a club professional, grew up around the sport. He played in high school and college (Bucknell University), was a member for 17 years at Atlanta&rsquo;s highly-rated East Lake Golf Club, and even won the match play club championship, called the Bobby Jones Memorial tournament, there in 1996.</p><p>&ldquo;I haven&rsquo;t played a lot of the overall top 100 courses, as they are mostly private, but I&rsquo;ve played a good number of the top public courses,&rdquo; Stasko said of his experience with courses on the list. &ldquo;I&rsquo;m always looking to play more of these. Augusta National is the true bucket list No. 1 that I dream about playing one day.&rdquo;</p><p>Kulas came across Stasko in a class during the Spring 2015 semester. Searching for an extracurricular project, Kulas discovered data visualization through a conversation with the professor. Having played golf rather seriously in high school, Kulas quickly noticed the golf course background on Stasko&rsquo;s computer during class one day.</p><p>&ldquo;I figured that would be a fun place to start,&rdquo; Kulas said.</p><p>The visualization itself illustrates a composite ranking, pinpoints locations, specifies whether a course is public or private, and lists number of courses by architect, among other features.</p><p>&ldquo;The visualization consolidates a lot of information and gives historical and comparative angles on the data that are difficult to get otherwise,&rdquo; Stasko said. &ldquo;I&rsquo;d actually say that the strength of the visualization is not necessarily in illuminating unexpected information or insights. It&rsquo;s more of a browsing and exploratory aid.&rdquo;</p><p>&ldquo;I wanted to make a tool that could answer a variety of questions,&rdquo; Kulas said. &ldquo;What is the highest-ranked course I have played? Where do my favorites stack up? Are there any highly ranked courses near me I can play? How has this course changed in ranking over time? 
For me, I find it fascinating how things change, or don&rsquo;t change, if you filter for courses built further in the past.&rdquo;</p><p>All of these questions can be answered by the easy-to-use visualization (Kulas has played 20 of the 385 courses listed in the visualization; Stasko has played 40, though a potential trip with some friends to Myrtle Beach, S.C., could add four&nbsp;more).</p><p>So can other questions golf fans may be eager to ask:</p><p>The graphic shows, for example, that only one course in the top 10 of the composite rankings was built after 1935. Adjusting the time frame for the display indicates an influx of public courses, as compared to private, in the late 1990s and early 2000s. A list of top architects shows that Tom Fazio, and not Jack Nicklaus, has designed the most ranked courses (46).</p><p>The current top-ranked course is Pine Valley, which has been ranked among the top two every year this century. That private course was designed by George Crump in 1918, the only Crump course among the 380 listed.</p><p>The recently completed 117<sup>th</sup> United States Open tournament was played at Erin Hills in Hartford, Wisc. The relatively new course, which only opened in 2006, is rated the No. 
9 public course in <em>Golf Digest</em>&rsquo;s 2017 ranking, information easily gleaned from the visualization.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1498153728</created>  <gmt_created>2017-06-22 17:48:48</gmt_created>  <changed>1498153728</changed>  <gmt_changed>2017-06-22 17:48:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Professor John Stasko and former student Josh Kulas created an interactive visualization filled with information about top courses to satisfy thirst for golf knowledge.]]></teaser>  <type>news</type>  <sentence><![CDATA[Professor John Stasko and former student Josh Kulas created an interactive visualization filled with information about top courses to satisfy thirst for golf knowledge.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-06-22T00:00:00-04:00</dateline>  <iso_dateline>2017-06-22T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-06-22 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>592889</item>      </media>  <hg_media>          <item>          <nid>592889</nid>          <type>image</type>          <title><![CDATA[Golf viz 1]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Screen Shot 2017-06-22 at 1.35.45 PM.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Screen%20Shot%202017-06-22%20at%201.35.45%20PM.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202017-06-22%20at%201.35.45%20PM.png]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Screen%2520Shot%25202017-06-22%2520at%25201.35.45%2520PM.png?itok=j9JXs0_I]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[An interactive golf visualization developed by members of the Information Interfaces lab.]]></image_alt>                    <created>1498153032</created>          <gmt_created>2017-06-22 17:37:12</gmt_created>          <changed>1498153032</changed>          <gmt_changed>2017-06-22 17:37:12</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.cc.gatech.edu/gvu/ii/sportvis/golfcourses/]]></url>        <title><![CDATA[Top 100 Golf Courses in the U.S.]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="8862"><![CDATA[Student Research]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="8862"><![CDATA[Student Research]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="172919"><![CDATA[Information Interfaces Group]]></keyword>          <keyword tid="11632"><![CDATA[john stasko]]></keyword>          <keyword tid="174751"><![CDATA[john thompson]]></keyword>          <keyword tid="174752"><![CDATA[josh kulas]]></keyword>          <keyword tid="125811"><![CDATA[golf courses]]></keyword>          <keyword 
tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="172922"><![CDATA[information visualization]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="592693">  <title><![CDATA[Assistant Professor Devi Parikh Earns IJCAI Computers and Thought Award]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Assistant Professor <strong>Devi Parikh</strong> was named recipient of the 2017 <a href="https://www.ijcai.org/awards">International Joint Conferences on Artificial Intelligence Computers and Thought Award</a>, which is considered to be the premier award for artificial intelligence researchers under the age of 35.</p><p>She was selected by the IJCAI-17 Awards Selection Committee for her contributions at the intersection of words, pictures, and common sense. This includes semantic image understanding, the use of visual attributes for human-machine collaboration and visual abstractions for learning common sense, and enabling humans to interact with visual content via natural language.</p><p>Parikh joins a particularly exclusive list of 27 AI visionaries who have received the award since 1971, including Terry Winograd, David Marr, and Tom Mitchell in the early days and Stuart Russell, Daphne Koller, Carlos Guestrin, and Andrew Ng more recently.</p><p>Parikh said that she is excited about the recognition that her lab&rsquo;s work in visual question answering (VQA) is getting.</p><p>&ldquo;Through making our large datasets and systems publicly available, we have enabled research groups around the world to make significant progress on building machines that can automatically answer questions about visual content,&rdquo; Parikh said. 
&ldquo;This has applications in any scenario where it is difficult, if not impossible, for someone to sift through visual data to elicit the information they need, be it aiding visually-impaired users, users on low-bandwidth networks that cannot support visual data, or assisting analysts in making decisions based on large quantities of visual feeds.</p><p>&ldquo;It has been rewarding to play a role in the creation of an entirely new sub-field of scientific endeavor in artificial intelligence and witness the research community rally around VQA.&rdquo;</p><p>This is one of a number of awards that Parikh has earned in recent months. She earned a <a href="http://www.cc.gatech.edu/news/588083/pair-ic-assistant-professors-earn-awards-research-explainable-intelligent-systems-and">Google Research Faculty Award</a>, an <a href="http://www.cc.gatech.edu/news/586463/amazon-research-awards-fund-computer-vision-and-machine-learning-projects">Amazon Academic Research Award</a>, and was featured last week in <em>Forbes</em> magazine as one of a handful of <a href="https://www.forbes.com/sites/mariyayao/2017/05/18/meet-20-incredible-women-advancing-a-i-research/2/#cee2a6e4edee">women advancing artificial intelligence research</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1497448656</created>  <gmt_created>2017-06-14 13:57:36</gmt_created>  <changed>1497448656</changed>  <gmt_changed>2017-06-14 13:57:36</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The Computers and Thought Award is considered to be the premier award for AI researchers under the age of 35.]]></teaser>  <type>news</type>  <sentence><![CDATA[The Computers and Thought Award is considered to be the premier award for AI researchers under the age of 35.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-06-14T00:00:00-04:00</dateline>  <iso_dateline>2017-06-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-06-14 00:00:00</gmt_dateline>  
<subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>586462</item>      </media>  <hg_media>          <item>          <nid>586462</nid>          <type>image</type>          <title><![CDATA[Devi Parikh]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Devi Parikh.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Devi%20Parikh.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Devi%20Parikh.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Devi%2520Parikh.jpg?itok=MhB23tJ7]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1485377735</created>          <gmt_created>2017-01-25 20:55:35</gmt_created>          <changed>1485377735</changed>          <gmt_changed>2017-01-25 20:55:35</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.ijcai.org/awards]]></url>        <title><![CDATA[IJCAI Computers and Thought Award]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and 
Security]]></term>      </news_terms>  <keywords>          <keyword tid="174685"><![CDATA[computers and thought award]]></keyword>          <keyword tid="173616"><![CDATA[devi parikh]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="592692">  <title><![CDATA[IC's Lauren Wilcox and Neha Kumar Selected For ACM Future of Computing Academy]]></title>  <uid>33939</uid>  <body><![CDATA[<p><a href="http://ic.gatech.edu/">School of Interactive Computing</a> Assistant Professors <strong>Lauren Wilcox</strong> and <strong>Neha Kumar</strong> were both selected to the inaugural class of the <a href="https://www.acm.org/fca">Association for Computing Machinery Future of Computing Academy</a> (ACM-FCA).</p><p>The ACM-FCA is a new initiative created by ACM to support and foster the next generation of computing professionals. It enables young researchers, practitioners, educators, and entrepreneurs to develop a coherent and influential voice that addresses challenging issues facing the field and society in general.</p><p>The ACM website characterizes selection to the ACM-FCA as a commitment, not an award. It notes that &ldquo;members of the Academy are expected to engage in activity for the benefit of the next generation of computing professionals.&rdquo;</p><p>ACM-FCA members are invited to attend ACM&rsquo;s celebration of 50 years of the ACM Turing Award on June 23-24 at the Westin St. Francis in San Francisco. 
The inaugural meeting of the ACM-FCA will be on June 25, also in San Francisco.</p><p>Wilcox, whose research focuses on enabling people to cultivate a more informed relationship with their health through human-centered technology, said she is thrilled to join the Academy.</p><p>&ldquo;It provides an opportunity to work with other researchers, scholars, and computing professionals in many different areas of computing,&rdquo; she said. &ldquo;We have different ideas about what the future of computing looks like, but a common goal of creating a shared vision for that future.&rdquo;</p><p>Kumar, who shares a joint appointment with the <a href="https://inta.gatech.edu/">Sam Nunn School of International Affairs</a> and conducts research at the intersection of human-computer interaction and global development, said she is excited to be a part of this group.</p><p>&ldquo;I&rsquo;m most excited about joining the Academy to learn the incredible things that all the other inaugural members are working towards,&rdquo; she said. 
&ldquo;Even from the one conference call we&rsquo;ve had, it&rsquo;s clear that everyone is driven and ready to change the world in their own unique ways.&rdquo;</p><p>Members of the inaugural class represent 19 different countries, including Morocco, Pakistan, India, the United States, the Netherlands, Egypt, Germany, Colombia, the United Kingdom, Italy, Canada, China, Denmark, Bangladesh, Turkey, Republic of Korea, Vietnam, Israel, and Ukraine.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1497447181</created>  <gmt_created>2017-06-14 13:33:01</gmt_created>  <changed>1497447181</changed>  <gmt_changed>2017-06-14 13:33:01</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The IC assistant professors were named to the inaugural class of the ACM-FCA, which aims to foster the next generation of computing professionals.]]></teaser>  <type>news</type>  <sentence><![CDATA[The IC assistant professors were named to the inaugural class of the ACM-FCA, which aims to foster the next generation of computing professionals.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-06-14T00:00:00-04:00</dateline>  <iso_dateline>2017-06-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-06-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>507851</item>          <item>356651</item>      </media>  <hg_media>          <item>          <nid>507851</nid>          <type>image</type>          <title><![CDATA[Neha Kumar]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[neha.jpeg]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/neha_0.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/neha_0.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/neha_0.jpeg?itok=7IV4SSE7]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Neha Kumar]]></image_alt>                    <created>1457114400</created>          <gmt_created>2016-03-04 18:00:00</gmt_created>          <changed>1475895270</changed>          <gmt_changed>2016-10-08 02:54:30</gmt_changed>      </item>          <item>          <nid>356651</nid>          <type>image</type>          <title><![CDATA[Lauren Wilcox compressed]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[lauren-wilcox.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/lauren-wilcox_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/lauren-wilcox_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/lauren-wilcox_0.jpg?itok=WeuRwv1F]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Lauren Wilcox compressed]]></image_alt>                    <created>1449245762</created>          <gmt_created>2015-12-04 16:16:02</gmt_created>          <changed>1475895089</changed>          <gmt_changed>2016-10-08 02:51:29</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.acm.org/fca]]></url>        <title><![CDATA[ACM Future of Computing Academy]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU 
Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="138871"><![CDATA[Neha Kumar]]></keyword>          <keyword tid="109121"><![CDATA[Lauren Wilcox]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="3047"><![CDATA[ACM]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="592661">  <title><![CDATA[Autonomous Driving Research Collaboration gets a Boost from Qualcomm]]></title>  <uid>33939</uid>  <body><![CDATA[<p>A team of Georgia Tech researchers headed up by School of Aerospace Engineering professor&nbsp;<strong>Evangelos Theodorou</strong>&nbsp;and School of Interactive Computing professor&nbsp;<strong>James Rehg</strong>&nbsp;has been awarded a $100,000 fellowship by&nbsp;<strong><a href="https://www.qualcomm.com/invention/research/university-relations/innovation-fellowship/2017-us">Qualcomm</a></strong>&nbsp;for its proposal,&nbsp;<em><strong>&ldquo;Autonomous Racing Using Deep Learning and Game Theoretic Optimization.&rdquo;</strong></em></p><p>The GT proposal is one of eight nationwide that were chosen for the 2017 fellowship, which also includes a one-year mentorship by Qualcomm engineers.</p><p>Theodorou says the innovation fellowship will help him, Rehg, and graduate students&nbsp;<strong>Grady Williams&nbsp;</strong>(College of Computing)<strong>&nbsp;</strong>and&nbsp;<strong>Paul 
Drews</strong>&nbsp;(School of Electrical and Computer Engineering) to bring their research to a place where it will have a transformative impact on the transportation industry.</p><p>&ldquo;Autonomous driving is one of the most important sub-fields in robotics,&rdquo; said Theodorou. &ldquo;However, autonomous vehicles driving hundreds of millions of miles are likely to get into situations where it is necessary for them to perform aggressive maneuvers to avoid collision. Our work can have an impact on that.&rdquo;</p><p>The team&rsquo;s work focuses on the problems faced by two or more autonomous racing vehicles in an environment that has not been previously mapped out. Potholes, bumps, and other irregularities are expected, but cannot be precisely predicted at the outset. Any system seeking to travel over such terrain must be able to make navigation decisions on the fly. Each racing vehicle is necessarily pushed to its handling/acceleration limits, a condition that requires even more simultaneous sensing of the environment and other intelligent agents.<br /><br />&ldquo;There is only a small margin of error on both the control and perception side when racing against a capable adversary,&rdquo; said Theodorou. &ldquo;This research will address fundamental questions in autonomy by&nbsp;bringing together concepts on&nbsp;stochastic optimal control, game theory and deep learning.
&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1497375863</created>  <gmt_created>2017-06-13 17:44:23</gmt_created>  <changed>1497375863</changed>  <gmt_changed>2017-06-13 17:44:23</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A team of researchers from the Schools of Aerospace Engineering and Interactive Computing has received a $100K grant to further its work on autonomous driving.]]></teaser>  <type>news</type>  <sentence><![CDATA[A team of researchers from the Schools of Aerospace Engineering and Interactive Computing has received a $100K grant to further its work on autonomous driving.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-06-13T00:00:00-04:00</dateline>  <iso_dateline>2017-06-13T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-06-13 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>592616</item>          <item>349611</item>      </media>  <hg_media>          <item>          <nid>592616</nid>          <type>image</type>          <title><![CDATA[Prof. Evangelos Theodorou]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Theodoru-300.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Theodoru-300_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Theodoru-300_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Theodoru-300_0.jpg?itok=Gu6wEn5S]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Prof. 
Evangelos Theodorou]]></image_alt>                    <created>1497282894</created>          <gmt_created>2017-06-12 15:54:54</gmt_created>          <changed>1497282894</changed>          <gmt_changed>2017-06-12 15:54:54</gmt_changed>      </item>          <item>          <nid>349611</nid>          <type>image</type>          <title><![CDATA[James Rehg compressed]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[james-rehg.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/james-rehg_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/james-rehg_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/james-rehg_0.jpg?itok=LS7kR394]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[James Rehg compressed]]></image_alt>                    <created>1449245696</created>          <gmt_created>2015-12-04 16:14:56</gmt_created>          <changed>1475895073</changed>          <gmt_changed>2016-10-08 02:51:13</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.qualcomm.com/invention/research/university-relations/innovation-fellowship/2017-us]]></url>        <title><![CDATA[Qualcomm]]></title>      </link>          <link>        <url><![CDATA[http://acds-lab.gatech.edu/]]></url>        <title><![CDATA[Autonomous Control & Decisions Systems Lab]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term 
tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="174666"><![CDATA[autonomous driving]]></keyword>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="2082"><![CDATA[aerospace engineering]]></keyword>          <keyword tid="133251"><![CDATA[Evangelos Theodorou]]></keyword>          <keyword tid="14419"><![CDATA[jim rehg]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="591571">  <title><![CDATA[New Georgia Tech Research May Help Combat Abusive Online Comments]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Researchers at the Georgia Institute of Technology&rsquo;s <a href="http://ic.gatech.edu/">School of Interactive Computing</a> have come up with a novel computational approach that could provide a more cost- and resource-effective way for internet communities to moderate abusive content.</p><p>They call it the <em>Bag of Communities </em>(BoC), a technique that leverages large-scale, preexisting data from other internet communities to train an algorithm to identify abusive behavior within a separate target community.</p><p>Specifically, they identified nine different communities. 
Five, such as the free-for-all internet community 4chan, are rife with abusive behavior from commenters; four, like the heavily moderated MetaFilter, are helpful, positive, and supportive.</p><p>Using linguistic characteristics from these two types of communities, researchers built an algorithm that learns from the comments and, when a new post is generated within a target community, predicts whether or not it is abusive.</p><p>&ldquo;MetaFilter is known around the internet as a good, helpful, supportive community,&rdquo; said <a href="http://www.cc.gatech.edu/people/eric-gilbert">Eric Gilbert</a>, an associate professor in the School of Interactive Computing and a member of the team of researchers on the project. &ldquo;That&rsquo;s an example of how, if your post is closer to that, it&rsquo;s more likely that it should stay on the site. Conversely, if your post is closer to 4chan, then maybe it should come off.&rdquo;</p><p>The researchers provide two algorithms. One is a static model that works off the shelf, with no training examples from the target community, and achieves roughly 75 percent accuracy. 
In other words, with access only to posts from the other nine communities, the algorithm can accurately predict abusive posts in the target community roughly three quarters of the time.</p><p>&ldquo;A new community that does not have enough resources to actually build automated algorithms to detect abusive content could use the static model,&rdquo; said Georgia Tech doctoral student <a href="http://www.cc.gatech.edu/~eshwar3/">Eshwar Chandrasekharan</a>, who led the team.</p><p>A dynamic model, one that mimics scenarios in which newly moderated data arrives in batches, learns over time and can achieve 91.18 percent accuracy after seeing 100,000 human-moderated posts.</p><p>&ldquo;Over time, as new moderator labels come in, when it has seen examples of things that have been moderated from the site, it can learn more site-specific information,&rdquo; Chandrasekharan said. &ldquo;It can learn the type of comments that get moderated, and if there is a level of tolerance that is different from what you see in the static model, it could learn that over time.&rdquo;</p><p>Both the static and dynamic models outperformed a solely in-domain model from a major internet community.</p><p>Anyone who has managed an online community has encountered problems with abusive content from users. From social media to message boards to comments sections in online news publications, regulating what is and isn&rsquo;t allowed has become overly costly and taxing on existing human moderators.</p><p>Founders at social media startup Yik Yak spent months of their early time removing hate speech, and Twitter has stated publicly that dealing with abusive behavior remains its most pressing challenge. 
A number of major news agencies are buried under the demands of strict moderation, and many have shut down comments sections altogether.</p><p>Prior research into abuse detection and online content moderation has focused on in-domain methods &ndash; using data collected from within your own community &ndash; but those face challenges in obtaining enough data to build and evaluate algorithms. In a BoC-based method, algorithms would leverage out-of-domain data from other existing online communities.</p><p>Gilbert said that the applications from such a model could be widespread.</p><p>&ldquo;This is a core internet problem,&rdquo; he said. &ldquo;So many places struggle with this, and many are shutting comments off because they just don&rsquo;t want to deal with the trouble they cause.&rdquo;</p><p>This research is presented in a paper (<em>The Bag of Communities: Identifying Abusive Behavior Online with Preexisting Internet Data</em>) at the <a href="https://chi2017.acm.org/papers.html">Association for Computing Machinery CHI Conference on Human Factors in Computing Systems 2017</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1494350766</created>  <gmt_created>2017-05-09 17:26:06</gmt_created>  <changed>1494363783</changed>  <gmt_changed>2017-05-09 21:03:03</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Researchers at Georgia Tech have found a more cost-effective way for internet communities to moderate abusive content.]]></teaser>  <type>news</type>  <sentence><![CDATA[Researchers at Georgia Tech have found a more cost-effective way for internet communities to moderate abusive content.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-05-09T00:00:00-04:00</dateline>  <iso_dateline>2017-05-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-05-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  
<email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>591570</item>      </media>  <hg_media>          <item>          <nid>591570</nid>          <type>image</type>          <title><![CDATA[Cyberabuse hand]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[CHI.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/CHI.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/CHI.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/CHI.png?itok=vIP7kxd3]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[New Georgia Tech Research May Help Combat Abusive Online Comments]]></image_alt>                    <created>1494350522</created>          <gmt_created>2017-05-09 17:22:02</gmt_created>          <changed>1494350522</changed>          <gmt_changed>2017-05-09 17:22:02</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.chi.gatech.edu/2017/]]></url>        <title><![CDATA[Georgia Tech @ CHI 2017]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="8862"><![CDATA[Student Research]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          
<term tid="8862"><![CDATA[Student Research]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="174386"><![CDATA[cyberabuse]]></keyword>          <keyword tid="174387"><![CDATA[online abuse]]></keyword>          <keyword tid="174388"><![CDATA[cyberbullying]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="174389"><![CDATA[conference on human factors in computing systems]]></keyword>          <keyword tid="1027"><![CDATA[chi]]></keyword>          <keyword tid="13342"><![CDATA[Eric Gilbert]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>          <topic tid="71901"><![CDATA[Society and Culture]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="591306">  <title><![CDATA[IPaT In-Depth Spotlight: Gheric Speiginer]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Gheric Speiginer is a Ph.D. student in Human-Centered Computing at Georgia Tech, advised by Blair MacIntyre, professor in the School of Interactive Computing. The southern California native received his undergraduate degree in Computer Science at Hampton University. Speiginer is interested in exploring novel user interfaces and interaction techniques, particularly those that exploit the unique capabilities of augmented reality.<br /><br /><strong>What are you currently researching?</strong><br /><br />My focus is in augmented reality (AR). 
One aspect of my research is developing the software tools and semantics necessary to express the rich AR content that is envisioned by AR content designers. The other aspect of it is developing software abstractions and architectures that enable the use of multiple AR apps at the same time in the same space.<br /><br /><strong>How did you become interested in augmented reality?</strong><br /><br />I sort of stumbled into it. In undergrad, I did an internship at Brown University with a professor in the robotics department. I had noticed these strange black and white images that were placed around the room, which I&rsquo;d never seen before. So I asked about them, and I found out they were &quot;markers&quot; which were used as part of a computer vision tracking system, and then I started researching more about computer vision on my own. I found out that these kinds of &quot;markers&quot; were also used in certain augmented reality toolkits, and that led me to start researching more into augmented reality. Eventually I decided to start experimenting with AR in my dorm room, just for fun. I had an idea to combine several projects I learned about, and I didn&rsquo;t have all of the same equipment, but I basically found a different way to do it using some open source computer vision software. Through that, I ended up learning more about augmented reality, and every time I would research stuff online I kept seeing Georgia Tech over and over again, especially papers by Blair MacIntyre. It was at that point that I realized Georgia Tech would be a great fit for grad school.</p><p><strong>How has your experience been at Georgia Tech?</strong><br /><br />It&rsquo;s been really cool being exposed to all sorts of interesting projects here at Georgia Tech. Everybody&rsquo;s brilliant and I&rsquo;ve been able to have all sorts of opportunities with some of the leading researchers in the field. 
It&rsquo;s just been really amazing.<br /><br /><strong>What are your plans after graduation?</strong><br /><br />I&rsquo;ve considered academia, but I&rsquo;m definitely leaning more towards industry. On the one hand I do enjoy teaching and tutoring and I&rsquo;ve done a lot of that in the past, so I could see myself doing some part time teaching. But I&rsquo;ll probably go into industry first and perhaps eventually become a consultant and do something more entrepreneurial. I&#39;ve also become increasingly interested in alternate (post-scarcity) economic systems in the last several years, so another thing I will definitely want to explore after I graduate is how we can use technology to introduce and facilitate new ways of living and working together as a society.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1493843162</created>  <gmt_created>2017-05-03 20:26:02</gmt_created>  <changed>1493843162</changed>  <gmt_changed>2017-05-03 20:26:02</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Gheric Speiginer is interested in exploring novel user interfaces and interaction techniques, particularly those that exploit the unique capabilities of augmented reality.]]></teaser>  <type>news</type>  <sentence><![CDATA[Gheric Speiginer is interested in exploring novel user interfaces and interaction techniques, particularly those that exploit the unique capabilities of augmented reality.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-05-03T00:00:00-04:00</dateline>  <iso_dateline>2017-05-03T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-05-03 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Alyson Powell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>591305</item>      </media>  
<hg_media>          <item>          <nid>591305</nid>          <type>image</type>          <title><![CDATA[Gheric Speiginer]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[gheric_0.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/gheric_0.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/gheric_0.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/gheric_0.png?itok=hbnMIk-t]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Gheric Speiginer displays his augmented reality interface that shows how robots are functioning.]]></image_alt>                    <created>1493842827</created>          <gmt_created>2017-05-03 20:20:27</gmt_created>          <changed>1493842827</changed>          <gmt_changed>2017-05-03 20:20:27</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="174330"><![CDATA[Gheric Speiginer]]></keyword>          <keyword tid="1597"><![CDATA[Augmented Reality]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="1600"><![CDATA[Blair MacIntrye]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  
<userdata><![CDATA[]]></userdata></node><node id="591108">  <title><![CDATA[How Do You Perform CPR? This Device Will Teach You]]></title>  <uid>28466</uid>  <body><![CDATA[]]></body>  <author>Meghana Melkote</author>  <status>1</status>  <created>1493399942</created>  <gmt_created>2017-04-28 17:19:02</gmt_created>  <changed>1493399942</changed>  <gmt_changed>2017-04-28 17:19:02</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech students built CPR+, a CPR mask with LED lights that offers user feedback throughout the resuscitation process, in the GVU Prototyping Lab. The device is one of six inventions competing for Georgia Tech’s 2017 InVenture Prize.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech students built CPR+, a CPR mask with LED lights that offers user feedback throughout the resuscitation process, in the GVU Prototyping Lab. The device is one of six inventions competing for Georgia Tech’s 2017 InVenture Prize.]]></sentence>  <summary><![CDATA[<p>Read more here:&nbsp;<a href="http://www.news.gatech.edu/2017/03/14/how-do-you-perform-cpr-device-will-teach-you">http://www.news.gatech.edu/2017/03/14/how-do-you-perform-cpr-device-will-teach-you</a></p>]]></summary>  <dateline>2017-03-14T00:00:00-04:00</dateline>  <iso_dateline>2017-03-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-03-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>      </media>  <hg_media>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      
</news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="591088">  <title><![CDATA[Controlling a Robot is Now as Simple as Point and Click]]></title>  <uid>28466</uid>  <body><![CDATA[<p>The traditional interface for remotely operating robots works just fine for roboticists. They use a computer screen and mouse to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task.</p><p>But for someone who isn&rsquo;t an expert, the ring-and-arrow system is cumbersome and error-prone. It&rsquo;s not ideal, for example, for older people trying to control assistive robots at home.</p><p>A new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient and doesn&rsquo;t require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work.</p><p>&ldquo;Instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we&rsquo;ve shortened the process to just two clicks,&rdquo; said Sonia Chernova, the Georgia Tech assistant professor in robotics who advised the research effort.</p><p>Her team tested college students on both systems, and found that the point-and-click method resulted in significantly fewer errors, allowing participants to perform tasks more quickly and reliably than using the traditional method.</p><p>&ldquo;Roboticists design machines for specific tasks, then often turn them over to people who know less about how to control them,&rdquo; said David Kent, the Georgia Tech Ph.D. robotics student who led the project. &ldquo;Most people would have a hard time turning virtual dials if they needed a robot to grab their medicine. But pointing and clicking on the bottle? 
That&rsquo;s much easier.&rdquo;</p><p>The traditional ring-and-arrow system is a split-screen method. The first screen shows the robot and the scene; the second is a 3-D, interactive view where the user adjusts the virtual gripper and tells the robot exactly where to go and grab. This technique makes no use of scene information, giving operators a maximum level of control and flexibility. But this freedom and the size of the workspace can become a burden and increase the number of errors.</p><p>The point-and-click format doesn&rsquo;t include 3-D mapping. It only provides the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an item, the robot&rsquo;s perception algorithm analyzes the object&rsquo;s 3-D surface geometry to determine where the gripper should be placed. It&rsquo;s similar to what we do when we put our fingers in the correct locations to grab something. The computer then suggests a few grasps. The user decides, putting the robot to work.</p><p>&ldquo;The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can&rsquo;t see, such as the back of a bottle,&rdquo; said Chernova. &ldquo;Our brains do this on their own &mdash; we correctly predict that the back of a bottle cap is as round as what we can see in the front. In this work, we are leveraging the robot&rsquo;s ability to do the same thing to make it possible to simply tell the robot which object you want to be picked up.&rdquo;</p><p>Because the algorithm analyzes the data and recommends where to place the gripper, the burden shifts from the user to the software, which reduces mistakes. During a study, college students performed a task about two minutes faster using the new method vs. the traditional interface. 
The point-and-click method also resulted in approximately one mistake per task, compared to nearly four for the ring-and-arrow technique.</p><p>In addition to assistive robots in homes, the researchers see applications in search-and-rescue operations and space exploration. The&nbsp;<a href="https://github.com/gt-rail/remote_manipulation_markers">interface has been released</a>&nbsp;as&nbsp;<a href="https://github.com/gt-rail/rail_agile_grasp">open-source software</a>&nbsp;and was presented in Vienna, Austria, March 6-9 at the&nbsp;<a href="http://humanrobotinteraction.org/2017/">2017 Conference on Human-Robot Interaction</a> (HRI2017).</p><p><em>The study is partially supported by a National Science Foundation Fellowship (IIS 13-17775) and the Office of Naval Research (N000141410795). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.</em></p>]]></body>  <author>Meghana Melkote</author>  <status>1</status>  <created>1493398313</created>  <gmt_created>2017-04-28 16:51:53</gmt_created>  <changed>1493398348</changed>  <gmt_changed>2017-04-28 16:52:28</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient and doesn’t require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient and doesn’t require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work.]]></sentence>  <summary><![CDATA[<p>The traditional interface for remotely operating robots works just fine for roboticists. 
They use a computer screen and mouse to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task. But for someone who isn&rsquo;t an expert, the ring-and-arrow system is cumbersome and error-prone. It&rsquo;s not ideal, for example, for older people trying to control assistive robots at home.&nbsp;</p><p>A new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient and doesn&rsquo;t require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work.</p><p>&nbsp;</p><p>Read more here:&nbsp;<a href="http://www.cc.gatech.edu/news/590819/controlling-robot-now-simple-point-and-click">http://www.cc.gatech.edu/news/590819/controlling-robot-now-simple-point-and-click</a></p>]]></summary>  <dateline>2017-04-24T00:00:00-04:00</dateline>  <iso_dateline>2017-04-24T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-04-24 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>      </media>  <hg_media>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="590637">  <title><![CDATA[Interactive Visualization Illustrates Uncertainty of NFL Draft]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Next week, 253 players will hear their names called over the course of three days in the 2017 
NFL Draft. For many, it will be the beginning of a long and lucrative career in professional football. For most, it will be the highlight of a brief career in an increasingly competitive business.</p><p>An <a href="http://www.cc.gatech.edu/gvu/ii/sportvis/nfldraft/run/">interactive visualization</a> created by a team of researchers in the Georgia Institute of Technology&rsquo;s School of Interactive Computing illustrates just how fleeting the career of a professional football player can be and how difficult it can be for teams to differentiate between the superstars and the busts.</p><p>The visualization, which catalogues each of the 32 teams&rsquo; draft picks from 2007-16, indicates with a green icon a player who is currently active on the team that drafted him. A blue icon indicates a player still in the league, but playing on a different team, and a red icon indicates a player who is no longer active in the NFL.</p><p>A quick glance at all 32 teams&rsquo; charts presents a healthy dose of red in comparison to the green and blue, illustrating the brevity of the average NFL career. An analysis has shown that the average length of a career decreased by about two years, from 4.99 years to 2.66, from 2008-14.</p><p>Only one team, the Carolina Panthers, has more than one player still active on their roster from their 2007 draft.</p><p>From a team perspective, the ebb and flow of a given franchise&rsquo;s success can be traced within the colors of the visualization.</p><p>The Atlanta Falcons, owners of an 11-5 record and a near Super Bowl championship this past season, have experienced their fair share of that ebb and flow. After a 13-3 season in 2012, their third straight season of double-digit wins, they surprised many by slipping to four, six, and eight victories over the next three years, missing the playoffs in each.</p><p>The visualization, however, shows why it probably shouldn&rsquo;t have come as such a surprise. 
The Falcons drafted just three players from 2007-12 who are currently on their roster. Since 2013, however, the mass of red icons has turned to green, as the team has hit on 21 of 30 picks.</p><h2>Check out&nbsp;highlights from a <a href="http://www.cc.gatech.edu/content/highlights-information-interfaces-nfl-draft-visualization">handful of other teams</a> in the NFL.</h2><p>Also evident in the visualization is which teams have seen relative success in the draft in comparison to others, as well as how that draft success has correlated to improved returns on the field.</p><p>On one hand, there are the Houston Texans, who have seen their average wins per season increase from 5.33 over the course of their first seven years of existence to 8.22 in the nine years since. That increase in wins coincides with a string of nine straight hits in the first round of the draft, shown in the visualization by a green icon in the first-round column for each year from 2008-16.</p><p>Comparatively, the consistently unsuccessful Cleveland Browns display just five first-round green icons since 2007, three of which have come in the past two years. They have just three in the second round, none coming before 2014.</p><p>In addition to the player&rsquo;s league status, active or inactive, the visualization allows the user to toggle to two other categories: games started and approximate value.</p><p>The &ldquo;games started&rdquo; option indicates much of what you would expect &ndash; that players taken earlier in the draft see the field more often &ndash; but also indicates which teams have had the most success in finding the proverbial diamonds in the rough.</p><p>The Seattle Seahawks, for example, found much of the talent that led them to back-to-back Super Bowl appearances in 2013-14 in the later rounds of the 2010-11 drafts. Kam Chancellor, K.J. 
Wright, and Richard Sherman, who help form the nucleus of Seattle&rsquo;s stingy defense, were taken in the fourth and fifth rounds but are colored orange to indicate 65-99 NFL starts.</p><p>The team of researchers includes undergraduate student Se Yeon Kim, graduate student Sakshi Pratap, and School of Interactive Computing Professor John Stasko. Stasko is the director of the <a href="http://www.cc.gatech.edu/gvu/ii/">Information Interfaces Research Group</a>, whose mission is to help people take advantage of information to enrich their lives by creating information visualizations and visual analytics tools to help analyze and understand large data sets.</p><p>Information for the visualization was compiled from NFL.com&#39;s <a href="http://www.nfl.com/draft/history/fulldraft?type=team">draft history</a> and <a href="http://www.nfl.com/players">player</a> pages. Games started and approximate value were taken from <a href="http://www.pro-football-reference.com/">Pro Football Reference</a>.</p><p>Follow the link for a look at the <a href="http://www.cc.gatech.edu/gvu/ii/sportvis/nfldraft/">project page</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1492620985</created>  <gmt_created>2017-04-19 16:56:25</gmt_created>  <changed>1493215427</changed>  <gmt_changed>2017-04-26 14:03:47</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[An interactive visualization created by the Information Interfaces Research Group shows just how few draftees make it long-term in the NFL.]]></teaser>  <type>news</type>  <sentence><![CDATA[An interactive visualization created by the Information Interfaces Research Group shows just how few draftees make it long-term in the NFL.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-04-19T00:00:00-04:00</dateline>  <iso_dateline>2017-04-19T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-04-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  
<sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>590625</item>      </media>  <hg_media>          <item>          <nid>590625</nid>          <type>image</type>          <title><![CDATA[Atlanta Falcons Interactive Draft Vis]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Atlanta Falcons.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Atlanta%20Falcons.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Atlanta%20Falcons.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Atlanta%2520Falcons.png?itok=TQPt2qAW]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Atlanta Falcons Interactive Draft Visualization]]></image_alt>                    <created>1492616251</created>          <gmt_created>2017-04-19 15:37:31</gmt_created>          <changed>1492616251</changed>          <gmt_changed>2017-04-19 15:37:31</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="8862"><![CDATA[Student Research]]></category>      </categories>  <news_terms>          <term tid="8862"><![CDATA[Student Research]]></term>      </news_terms>  <keywords>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="11632"><![CDATA[john 
stasko]]></keyword>          <keyword tid="172919"><![CDATA[Information Interfaces Group]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="174093"><![CDATA[NFL Draft]]></keyword>          <keyword tid="12397"><![CDATA[Atlanta Falcons]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71901"><![CDATA[Society and Culture]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="590849">  <title><![CDATA[GT Computing Moving the Needle Forward in Autism Research]]></title>  <uid>33939</uid>  <body><![CDATA[<p>When neuroimaging took gigantic leaps forward in the 1970s and 80s with the introduction of magnetic resonance imaging (MRI) and computed tomography (CT), it was a sign of just how closely advances in medicine or diagnostics correlate to the technological advances within the field.</p><p>Suddenly, researchers were able to more safely observe and document the brain in live subjects, opening them to a world of study that was previously unattainable. There was a massive increase in understanding about things like medical conditions or effects of alcohol and drugs on the brain.</p><p>That kind of technological advancement, one that drastically moves the needle of study in the field forward, hasn&rsquo;t been as prominent in the field of behavioral psychology. It&rsquo;s a challenge that many researchers in the Georgia Institute of Technology&rsquo;s School of Interactive Computing (IC) are trying to overcome.</p><p>&ldquo;The tools that exist today in neuroimaging as compared to 50 or 60 years ago, there&rsquo;s just no comparison,&rdquo; IC Professor <strong>Jim Rehg</strong> said. &ldquo;People are looking at resolutions or structures in the brain that weren&rsquo;t even on the map 50 years ago. 
We&rsquo;re just trying to bring the behavioral measurement forward in the same way that is already happening for imaging and genetics.&rdquo;</p><p>Rehg is one of a number of faculty focusing their efforts on developing new computational analysis tools to measure behavior. A key goal of the work is to improve understanding of Autism Spectrum Disorder, a complex group of disorders of brain development characterized by repetitive behaviors and difficulties in social interaction and communication.</p><p>He began collaborating with Professor <strong>Gregory Abowd</strong> and senior research scientist <strong>Agata Rozga</strong>, among others, during a five-year National Science Foundation Expeditions in Computing grant and has subsequently continued his work with Rozga under a grant from the Simons Foundation. The former grant was instrumental in setting up the <a href="http://www.childstudylab.gatech.edu/">Child Study Lab</a> at Georgia Tech, which studies early social, communication, and play behavior in children, including those with autism.</p><h2>Tracking Problem Behaviors With Technology</h2><p>More recently, Rozga, the director of the lab, received &ndash; along with Associate Professor <strong>Thomas Ploetz</strong> and Dr. <strong>Nathan Call</strong> of the <a href="http://www.marcus.org/">Marcus Autism Center</a> &ndash; an NIH R21 grant for a project titled <em>Objective Measurement of Challenging Behaviors in Individuals with Autism Spectrum Disorder</em>.</p><p>The latter research deals with problem behaviors exhibited by individuals with autism. 
Call, who is the director of <a href="http://www.marcus.org/About-Us/For-Professionals/~/media/Marcus/Documents/About-Marcus/FactSheet-BehaviorTreatment.pdf">Behavior Treatment Clinics</a> at the Marcus Autism Center, described the challenge the research is aiming to address.</p><p>&ldquo;Individuals with autism and other developmental disorders are more likely to exhibit problem behaviors like self-injury, pica, or property destruction,&rdquo; he said. &ldquo;Behavioral interventions exist, and can be very effective, but there are a few barriers. Data collection on the behavior is a key ingredient, but is most often done by a human observer, which is expensive, has the potential for reactivity, doesn&rsquo;t work for covert behaviors, cannot always provide a good estimate of severity, and may not always be accurate.&rdquo;</p><p>The project involves the use of accelerometers and machine learning to develop a measurement system that will detect and differentiate between different types of problem behavior in a way that addresses each of those challenges.</p><p>In the best of circumstances, such as a research or clinical setting, videos can be recorded and research assistants can go through the videos to find moments where the child engages in some type of behavior. Currently, that is the standard. 
As Rozga said, though, that is not something that scales to large samples or allows you to study behaviors outside the strictures of a research setting.</p><p>The approach, then, is to combine currently available wearable technology with computational analysis to see whether that might be used to advance the state of the art.</p><p>Using sensors attached to the wrists and ankles, the team records movement data from the individual.</p><p>&ldquo;From a technical point of view, we want to know whether we can see when an activity starts, when it ends, and of what nature that activity actually was,&rdquo; said Ploetz, who has worked with Rozga in the past and joined the IC faculty in February of this year. &ldquo;An automated recognition of problem behaviors is a substantial challenge that involves capturing through sensors and analyzing through machine learning-based assessment techniques.&rdquo;</p><p>The hope is that they can build statistical models that can analyze data streams and automatically pick out which kinds of activities or problem behaviors an individual engages in at a given time, as well as their frequency and intensity.</p><p>&ldquo;One of the things that Dr. Call said was a clinically-relevant measure they have not been able to gather is severity of the problem behavior,&rdquo; Rozga said. &ldquo;It&rsquo;s hard to get two people to agree on any rating scale. We had this moment where we said, &lsquo;You know, that information is already in the signal.&rsquo; If you look at the amplitude at the moment of impact, we have potentially a signal there that can speak to the intensity, or severity, of the behavior. 
What other things can you measure if you had access to this new measure?&rdquo;</p><p>Further, and most importantly in the early stages, can these models measure with comparable accuracy to &ldquo;ground truth&rdquo; &ndash; labor-intensive, frame-by-frame coding &ndash; in the strict clinical setting?</p><p>If so, the long-term goal is to then deploy these behavior monitors into the home, a much less structured environment.</p><p>&ldquo;Can we use this for treatment follow-up, or to understand how these behaviors manifest in the home or school?&rdquo; Rozga said. &ldquo;Does this work beyond just the clinical setting?&rdquo;</p><h2>Measuring Social-Communication Behaviors</h2><p>Rozga&rsquo;s work with Rehg is similar in that it attempts to take advantage of the vast availability of sensor technologies to improve measurement of social-communication behaviors in young children, such as eye contact, shifts of attention between objects and faces, and gestures.</p><p>&ldquo;Historically, the problem was that our tools for getting information were very limited,&rdquo; Rehg said. &ldquo;What&rsquo;s really changed is our ability to collect large-scale data. Generally speaking, this is the best moment in time as far as sensor tools go. Cameras, microphones, accelerometers, inertial measurement units &ndash; these are the sensors we&rsquo;re most interested in.&rdquo;</p><p>With them, they can continuously track individuals&rsquo; eyes, heads, limbs, posture, and many other movements associated with the production of relevant social behaviors, increasing the overall pool of available data.</p><p>&ldquo;Large amounts of data from kids is what it takes to characterize behavior and how it changes over time,&rdquo; Rehg said. &ldquo;This is something that scales. You can replicate it in other settings, other labs. You can demonstrate that this approach works well across different data sets. 
We want to show that this is something that can be generalized.&rdquo;</p><p>If they can, a more accurate picture of childhood development, as well as the response to treatment in behavioral problems, could emerge.</p><p>There are other important contributors to the research, Rozga said. <strong>Audrey Southerland</strong> is the lab coordinator at the Child Study Lab, where she has helped with research for over six years. She began as an undergraduate research assistant under the Expeditions in Computing grant in 2011 and joined the staff full time as the lab coordinator after graduating with a Bachelor&rsquo;s degree in psychology in 2012. In this role, she oversees the lab, including data collection and current undergraduate research assistants, on a daily basis.</p><p><strong>Dr. Mindy Scheithauer</strong> has also been a key collaborator at the Marcus Autism Center, where she works in the Severe Behavior Program.</p><h2>Learn the Signs, Act Early</h2><p>Others throughout the College of Computing have pursued other extensive research surrounding autism. 
Senior research scientist and developmental psychologist <strong>Rosa Arriaga</strong> is leading a team that has developed <a href="http://www.cc.gatech.edu/news/584605/actearly-app-helps-parents-track-childhood-developmental-milestones">ActEarly</a>, a mobile Android app that gives parents and caregivers a comprehensive and convenient way to track developmental milestones for children.</p><p>The app is designed to support kids &ndash; newborns to age five &ndash; by providing information on social, language, cognitive, and physical milestones children should achieve at each age.</p><p>&ldquo;Parents may be unaware that a child is failing to meet important developmental milestones and this might put the child at risk,&rdquo; Arriaga said.</p><p>Working with <strong>Laurel Warrell</strong>, a Master of Science candidate in <a href="http://www.cc.gatech.edu/academics/degree-programs/masters/ms-hci">Human-Computer Interaction</a>, they are deploying and conducting usability studies with the app, which leverages expertise from the Centers for Disease Control and Prevention (CDC) and is part of a broader &ldquo;<a href="https://www.cdc.gov/ncbddd/actearly/">Learn the Signs, Act Early</a>&rdquo; campaign. This initiative seeks to identify developmental disabilities in young children and provide families with needed services.</p><p>Arriaga and her team are seeking parents to participate in their studies. They are asking parents of children between one month and five years old who have an Android phone to download the ActEarly mobile app and provide feedback. Interested individuals can follow the <a href="http://ipat.gatech.edu/study-recruitment">link</a> for more information.</p><p>Additionally, Arriaga&rsquo;s team is currently developing an interactive e-book with the CDC that will allow parents to track their 3-year-old child&rsquo;s milestones while they read. 
She is also working with undergraduates to develop toddler games to help inform parents about what their child can do. A demo of the latter project can be viewed in a video <a href="https://www.youtube.com/watch?v=7nfrFV5M2z4&amp;feature=youtu.be">here</a>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1493062488</created>  <gmt_created>2017-04-24 19:34:48</gmt_created>  <changed>1493062488</changed>  <gmt_changed>2017-04-24 19:34:48</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Multiple grants have helped develop fields of research into how technology can assist detection and, perhaps, treatment of problem behaviors associated with autism.]]></teaser>  <type>news</type>  <sentence><![CDATA[Multiple grants have helped develop fields of research into how technology can assist detection and, perhaps, treatment of problem behaviors associated with autism.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-04-24T00:00:00-04:00</dateline>  <iso_dateline>2017-04-24T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-04-24 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>590844</item>      </media>  <hg_media>          <item>          <nid>590844</nid>          <type>image</type>          <title><![CDATA[Child Study Lab Autism Research]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Autism5.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Autism5.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Autism5.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Autism5.jpg?itok=JYiOtuat]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Lab coordinator Audrey Southerland, along with undergraduate assistants, leads data collection at the Child Study Lab.]]></image_alt>                    <created>1493061979</created>          <gmt_created>2017-04-24 19:26:19</gmt_created>          <changed>1493061979</changed>          <gmt_changed>2017-04-24 19:26:19</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.childstudylab.gatech.edu/]]></url>        <title><![CDATA[Child Study Lab]]></title>      </link>          <link>        <url><![CDATA[http://www.marcus.org/]]></url>        <title><![CDATA[Marcus Autism Center]]></title>      </link>          <link>        <url><![CDATA[http://ipat.gatech.edu/study-recruitment]]></url>        <title><![CDATA[ActEarly]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="6053"><![CDATA[Autism]]></keyword>          <keyword tid="108751"><![CDATA[Autism Spectrum Disorder]]></keyword>          <keyword tid="11172"><![CDATA[Agata Rozga]]></keyword>          <keyword tid="14419"><![CDATA[jim rehg]]></keyword>  
        <keyword tid="11178"><![CDATA[Rosa Arriaga]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="590531">  <title><![CDATA[Professor Amy Bruckman to Serve as School of Interactive Computing Interim Chair]]></title>  <uid>33939</uid>  <body><![CDATA[<p>Georgia Institute of Technology Professor <strong>Amy Bruckman</strong> will serve as interim chair of the <a href="http://www.ic.gatech.edu/">School of Interactive Computing</a> beginning on July 1, after current Chair <strong>Annie Ant&oacute;n&rsquo;s</strong> term comes to an end. Bruckman, who currently serves as associate chair of the school, will serve until the school hires its new chair at the completion of an international search process.</p><p>&ldquo;It&rsquo;s been a pleasure working with Annie these past three years, and I&rsquo;m excited about the candidates in our chair search,&rdquo; Bruckman said. &ldquo;I don&rsquo;t aspire to a bigger administrative role myself, but I&rsquo;m happy to fill in during this time of transition.&rdquo;</p><p><a href="http://www.cc.gatech.edu/fac/Amy.Bruckman/">Bruckman</a> has been a faculty member in Georgia Tech&rsquo;s College of Computing since 1997, when she was brought on as an assistant professor. She became an associate professor in 2003, a professor in 2012, and began serving as associate chair of the School of Interactive Computing in 2014.</p><p>As a researcher, she and her students focus on social computing and online collaboration. Current projects include studying the introduction of the internet to Cuba, and trying to understand online harassment. 
She also studies how social media can support social movements, and is currently doing action research with the&nbsp;organization Science for the People.</p><p>Bruckman received her Ph.D. from the Massachusetts Institute of Technology (MIT) Media Lab&rsquo;s Epistemology and Learning group in 1997, her Master&rsquo;s from the MIT Media Lab&rsquo;s Interactive Cinema Group in 1991, and her Bachelor&rsquo;s in physics from Harvard University in 1987.</p><p>Professor <a href="http://www.cc.gatech.edu/people/annie-anton">Annie Ant&oacute;n</a> began her five years of service as chair of the School of Interactive Computing in 2012, joining Georgia Tech&rsquo;s faculty ranks after 14 years at North Carolina State University&rsquo;s College of Engineering.</p><p>Ant&oacute;n earned her Bachelor&rsquo;s, Master&rsquo;s, and Ph.D. degrees from Georgia Tech in 1990, 1992, and 1997, respectively.</p><p>The search for a new chair is being conducted by a committee of 11 faculty, staff, and students. It is chaired by School of Computer Science Professor Ellen Zegura. Other members include School of Interactive Computing Professors Ian Bogost, Ashok Goel, and John Stasko, Associate Professors Mark Riedl and James Hays, Assistant Professors Betsy DiSalvo and Jacob Eisenstein, research scientist Agata Rozga, financial administrator Connie Irish, and Ph.D. 
student Maia Jacobs.</p><p>Click the link for a full description of the <a href="http://www.ic.gatech.edu/chair-school-interactive-computing">open chair position</a> and background on the School of Interactive Computing.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1492458356</created>  <gmt_created>2017-04-17 19:45:56</gmt_created>  <changed>1492458356</changed>  <gmt_changed>2017-04-17 19:45:56</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[School of Interactive Computing Professor and Associate Chair Amy Bruckman will serve as interim chair upon the completion of Annie Antón's five years of service.]]></teaser>  <type>news</type>  <sentence><![CDATA[School of Interactive Computing Professor and Associate Chair Amy Bruckman will serve as interim chair upon the completion of Annie Antón's five years of service.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-04-17T00:00:00-04:00</dateline>  <iso_dateline>2017-04-17T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-04-17 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>590524</item>      </media>  <hg_media>          <item>          <nid>590524</nid>          <type>image</type>          <title><![CDATA[Amy Bruckman]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[asb_full.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/asb_full.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/asb_full.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/asb_full.jpg?itok=DmmaSSsY]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Professor Amy Bruckman to serve as School of Interactive Computing Interim Chair]]></image_alt>                    <created>1492457925</created>          <gmt_created>2017-04-17 19:38:45</gmt_created>          <changed>1492457925</changed>          <gmt_changed>2017-04-17 19:38:45</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.ic.gatech.edu/chair-school-interactive-computing]]></url>        <title><![CDATA[Chair, School of Interactive Computing]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="8472"><![CDATA[amy bruckman]]></keyword>          <keyword tid="27641"><![CDATA[annie anton]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="590161">  <title><![CDATA[RoboJackets Providing Opportunity for Both Competition and Outreach]]></title>  <uid>33939</uid>  <body><![CDATA[<p>The origins of Georgia Tech&rsquo;s <a href="https://robojackets.org/"><em>RoboJackets</em></a> organization can be traced back to 1999, when a BattleBots team was founded for the first time within the School of Mechanical Engineering.</p><p>Back then, there 
were just a few members working on projects in their spare time. The school&rsquo;s focus on co-curricular involvement was not as widespread as it has become today, so members had to be more resourceful in their pursuit of knowledge and competition.</p><p>It&rsquo;s a far cry from what the popular student group has become.</p><p>Today, there are over 200 members representing at least nine different degrees, from mechanical engineering, electrical engineering and computer science, to computational engineering and aerospace engineering, among others.</p><p>&ldquo;It&rsquo;s become such an active organization,&rdquo; said <em>RoboJackets</em> president Ryan Strat, a fourth-year computer science major nearing the end of his one-year term. &ldquo;And our members are dedicated to improving on every facet.&rdquo;</p><p>There are currently five teams within the organization, sub-groups that work and compete in varying capacities. The original team, BattleBots, has maintained a continued presence since the group&rsquo;s inception nearly two decades ago. There is also RoboCup, a robotic soccer league, RoboRacing, the youngest of the five groups, the Intelligent Ground Vehicle Competition (IGVC), and Outreach. It is this latter group, Strat said, that sets the <em>RoboJackets</em> apart from many other organizations across Georgia Tech&rsquo;s campus.</p><p><strong>Outreach</strong></p><p>The Outreach team was created in 2001 to fulfill a need the organization felt was being overlooked at the time. Building robots was great, they said, but members felt that they had a valuable skill that should be shared.</p><p>Partnering with FIRST Robotics, a partnership that is still growing today, the <em>RoboJackets</em> began a mentorship program for high school teams in the Atlanta area. 
Teams are invited in once a week for presentations by the <em>RoboJackets</em> on what they need to be a successful team &ndash; how to manage resources and recruit team members, for example, along with sessions on vital subjects like computer vision.</p><p>The <em>RoboJackets</em> are currently affiliated with Toaster Tech, a team of high school students in the Atlanta area. Past affiliations include Westlake Roarbotics, Reboot, Tech High School, Georgia Robotics Alliance SOUP, Wheeler High CircuitRunners, and Roswell High Chimera.</p><p>&ldquo;Service is a core component of being an organization, and I think that&rsquo;s what sets us apart from others on campus,&rdquo; Strat said. &ldquo;The fact that it&rsquo;s a combination of hands-on engineering practicum as well as a public service is very unique. I think that&rsquo;s what helps us produce such well-rounded students.&rdquo;</p><p>The group maintains a YouTube channel with an archive of learning resources for teens. Recently, for the in-person presentations, they invited some of the high school students to submit their own presentations, assisted them in crafting them, and let the students deliver the presentations themselves.</p><p>Volunteering at high school competitions and assorted events has been a growing component, as well. The <em>RoboJackets</em> provide highly skilled volunteers who can handle tasks like officiating and audio/video assistance, among others.</p><p>&ldquo;We have a dedicated base in the state, and <em>RoboJackets</em> is helping to grow that footprint,&rdquo; Strat said.</p><p>The <em>RoboJackets </em>help put on events like the FIRST Robotics Competition Kickoff each January, which reveals the games and begins the league&rsquo;s season. The event is held each year at the Ferst Theater and welcomes around 1,400 people to campus. 
Also, the Robotics Symposium was a new event for Fall 2016 that brought in speakers from various parts of Georgia FIRST and industry partners to give over 30 talks to Georgia middle and high school students.</p><h2>VIDEO: To see more from the RoboJackets, including both instruction and competition, visit their YouTube channel <a href="https://www.youtube.com/user/RoboJackets">here</a>.</h2><p><strong>BattleBots</strong></p><p>The BattleBots have long been a pop-culture phenomenon, earning spots on popular television networks as they fight to the death.</p><p>The <em>RoboJackets</em> version has been around since 1999 and comprises a number of different facets. There are the small editions, the 3-lb. robots that are relatively inexpensive and can be designed and manufactured within a couple of months.</p><p>Newer members of the <em>RoboJackets</em> start here in groups of 4-6 and, working with more experienced mentors, create the BattleBot from scratch.</p><p>&ldquo;It&rsquo;s an art in many ways,&rdquo; Strat said. &ldquo;You have to learn what can actually be manufactured and what can&rsquo;t. You can make something in any shape on a computer, but that doesn&rsquo;t mean you can actually make it.&rdquo;</p><p>After the 3-lb. program, members step up in size for other larger competitions. Strat said the team has created robots in the 60- and 120-lb. weight classes.</p><p><strong>RoboCup</strong></p><p>Originally a project within the Institute for Robotics and Intelligent Machines (IRIM), the RoboCup team is in a small-size league, part of the RoboCup Federation, for robotic soccer competition. 
The federation is a research group dedicated to building humanoid robotic soccer players capable of beating the World Cup champions by the year 2050.</p><p>The league the <em>RoboJackets</em> participate in is 6-on-6, utilizing small wheeled robots about the size of a coffee can.</p><p>Currently, the team is focusing on soccer strategy.</p><p>&ldquo;We&rsquo;re trying to solve the multi-agent problem,&rdquo; Strat explained. &ldquo;You have <em>n </em>players on the field &ndash; how do you decide who does what? How do you plan things like aggression?&rdquo;</p><p>The <em>RoboJackets</em> team participates in international events, which often take place in the same location as the World Cup. Strat said the team will send 10-11 students in July to Japan to compete. Last year, they competed in Germany.</p><p>&ldquo;It&rsquo;s one of the more research-focused competitions,&rdquo; Strat said. &ldquo;BattleBots is more fun and concerned with winning or losing. This one, everyone competing is writing a research paper, and your prize for winning is another research paper.&rdquo;</p><p><strong>IGVC</strong></p><p>The Intelligent Ground Vehicle Competition tasks teams with constructing an autonomous robot capable of navigating an off-road obstacle course. Essentially, Strat said, it is an autonomous all-terrain vehicle.</p><p>The IGVC is held by the Association for Unmanned Vehicle Systems International. Each year, the <em>RoboJackets </em>send a team to Michigan to compete in mapping and navigation challenges. Given certain GPS waypoints, the vehicles must travel to each location on the course while hauling a payload.</p><p>&ldquo;The robot itself is very similar structurally to an ATV,&rdquo; Strat said. &ldquo;It is loaded with a few cameras, and this year we&rsquo;ll be loading Intel RealSense cameras, which are more or less a Kinect. 
It&rsquo;s a depth camera to give more information about where things are.&rdquo;</p><p>Teams are scored on performance in the autonomous challenge, presentation, and the design of the robot.</p><p><strong>RoboRacing</strong></p><p>RoboRacing is the youngest of the five teams, having been established just four years ago. Despite its youth, it is already one of the most successful of all the <em>RoboJackets&rsquo;</em> groups.</p><p>It has won gold in two of the three years it has competed, sweeping the competition at least once with design awards, circuit racing, and drag racing at the International Autonomous Robot Racing Challenge in Waterloo, Canada.</p><p>Last year, they added the Sparkfun Autonomous Vehicle Challenge, which involves the same car but more challenging vision targets. Instead of cones, which are easier to identify, the Sparkfun course is marked with things like chain-link fences or bales of pine straw.</p><p>&ldquo;It&rsquo;s much more difficult from a computer vision standpoint,&rdquo; Strat said.</p><p>Beyond those competitions, there is also an autonomous Power Wheels racing series.</p><p>Yes, Power Wheels &ndash; the same small car you drove around in as a toddler is used in a hobbyist community for racing.</p><p>&ldquo;You sit in them as an adult, and it is comical,&rdquo; Strat said. &ldquo;That competition is in October. 
At this point, we&rsquo;re very much in the design phase.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1491853196</created>  <gmt_created>2017-04-10 19:39:56</gmt_created>  <changed>1491853196</changed>  <gmt_changed>2017-04-10 19:39:56</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The RoboJackets are a five-team robotics organization with membership over 200 at Georgia Tech.]]></teaser>  <type>news</type>  <sentence><![CDATA[The RoboJackets are a five-team robotics organization with membership over 200 at Georgia Tech.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-04-10T00:00:00-04:00</dateline>  <iso_dateline>2017-04-10T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-04-10 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>590156</item>      </media>  <hg_media>          <item>          <nid>590156</nid>          <type>image</type>          <title><![CDATA[RoboJackets 3]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[3_DSC_0105.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/3_DSC_0105_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/3_DSC_0105_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/3_DSC_0105_0.jpg?itok=aBU0I93X]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1491852627</created>          <gmt_created>2017-04-10 19:30:27</gmt_created>          
<changed>1491852627</changed>          <gmt_changed>2017-04-10 19:30:27</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="134"><![CDATA[Student and Faculty]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="134"><![CDATA[Student and Faculty]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="11489"><![CDATA[RoboJackets]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="594"><![CDATA[college of engineering]]></keyword>          <keyword tid="79181"><![CDATA[national robotics week]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="590057">  <title><![CDATA[Guthman Musical Instrument Competition 2017 Winners]]></title>  <uid>28466</uid>  <body><![CDATA[<p>Read more about the 2017 winners of the Guthman Musical Instrument Competition here:&nbsp;<a href="https://guthman.gatech.edu/2017-winners">https://guthman.gatech.edu/2017-winners</a></p>]]></body>  <author>Meghana Melkote</author>  <status>1</status>  <created>1491596646</created>  <gmt_created>2017-04-07 20:24:06</gmt_created>  <changed>1491596956</changed>  <gmt_changed>2017-04-07 20:29:16</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The 2017 Winners of the 
Guthman Musical Instrument Competition]]></teaser>  <type>news</type>  <sentence><![CDATA[The 2017 Winners of the Guthman Musical Instrument Competition]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-03-15T00:00:00-04:00</dateline>  <iso_dateline>2017-03-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-03-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>      </media>  <hg_media>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="590058">  <title><![CDATA[How Do You Perform CPR? This Device Will Teach You]]></title>  <uid>28466</uid>  <body><![CDATA[<p>CPR+ is a CPR mask with LED lights that offers user feedback throughout the resuscitation process. 
The device is one of six inventions competing for Georgia Tech&rsquo;s 2017 InVenture Prize.</p><p>The other inventors are: Dave Ehrlich, a computer engineering major; Samuel Clarke, a mechanical engineering and computer science major; and Ryan Williams, a computer engineering major.</p><p>Read more here:&nbsp;<a href="http://www.news.gatech.edu/2017/03/14/how-do-you-perform-cpr-device-will-teach-you" id="LPlnk260928" target="_blank">http://www.news.gatech.edu/2017/03/14/how-do-you-perform-cpr-device-will-teach-you</a></p>]]></body>  <author>Meghana Melkote</author>  <status>1</status>  <created>1491596823</created>  <gmt_created>2017-04-07 20:27:03</gmt_created>  <changed>1491596823</changed>  <gmt_changed>2017-04-07 20:27:03</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[CPR+ is one of six finalists for the 2017 InVenture Prize]]></teaser>  <type>news</type>  <sentence><![CDATA[CPR+ is one of six finalists for the 2017 InVenture Prize]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-03-14T00:00:00-04:00</dateline>  <iso_dateline>2017-03-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-03-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>      </media>  <hg_media>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="589909">  <title><![CDATA[Vivian Chu Working to Provide Robots Basic Building Blocks for Cognition]]></title>  
<uid>33939</uid>  <body><![CDATA[<p>Georgia Institute of Technology robotics student <strong>Vivian Chu</strong> shares a familiar path to computer science with plenty of other students:</p><p>As a child, she loved engineering and computer science, taking things apart and putting them back together. Both of her parents were software engineers, so her road to STEM was paved long before she had the means to travel it.</p><p>She attended the University of California, Berkeley, for her undergraduate degree, where she earned a Bachelor&rsquo;s degree in Electrical Engineering and Computer Science. She focused on Embedded Software, and, by and large, she enjoyed the experience.</p><p>But something was always missing when she took her computer science or electrical engineering classes. In them, she might design an algorithm or program a circuit, but she wasn&rsquo;t seeing visual representation of her work in the way she wanted.</p><p>&ldquo;You could put these things together, but there wasn&rsquo;t a lot that you could actually see happen,&rdquo; she said. &ldquo;Then I took this one class where we got to program a Roomba to climb ramps or do other actions with an accelerometer. That was the first time things kind of clicked.&rdquo;</p><p>She began to see and appreciate how a robot could understand how to interact with the world and also how it processed the information it gathered.</p><p>She got another taste a year later as a senior while working on an autonomous helicopter project. The helicopter didn&rsquo;t do much &ndash; just hovering a few feet off the ground &ndash; but she realized during her work that she could sit in the lab for 12 hours without realizing it and come back excited to work the next day.</p><p>&ldquo;That drove it home,&rdquo; she said.</p><p>Now a Ph.D. student in robotics at Georgia Tech, Chu is interested in how to advance robotics to a point where robots could be deployed in care facilities or the home. 
Specifically, she is taking an approach of teaching robots the basic building blocks of cognition.</p><p>There are certain things humans learn as children that help them develop an understanding of the material world around them. A cup is a cup because it is fully containable, able to hold something like water inside; a spoon is a spoon because it can scoop other materials and hold them within its concave structure.</p><p>&ldquo;If you could teach these robots these basic components, these basic building blocks, then when they go into your home, they could better reason how to perform other tasks,&rdquo; she said.</p><p>Like making pasta. If a robot knows it needs something containable to hold something, heat to cook, and a spoon to stir, it could carry out that and other similar jobs.</p><p>Her inspiration came when she was working on her Master&rsquo;s degree in robotics at the University of Pennsylvania. She attended a guest lecture by Georgia Tech alum <strong>Alex Stoytchev</strong>, who is now an assistant professor at Iowa State University. In the talk, Stoytchev discussed developmental psychology in children, how they explore basic actions and movements.</p><p>&ldquo;A lot of my research is similar in that I want to teach these building blocks by having robots play with objects the way children play,&rdquo; Chu said. &ldquo;Adults give a child a nudge in the right direction here or there. Rather than having a robot do it blindly, we can have someone in the room and give it a bump here or there.</p><p>&ldquo;It presents something that is much faster than a robot doing it on its own.&rdquo;</p><p>The ideal goal is for a robot to truly understand its different sensory inputs. People use touch, sight, and sound, for example, to accomplish a task like turning on a lamp. 
Currently, robots are either very visual, which is the majority of the research, or incorporate touch.</p><p>&ldquo;There&rsquo;s very little being done to sort of merge these senses,&rdquo; she said. &ldquo;Audio is almost unheard of.&rdquo;</p><p>Chu would like to achieve a scenario where the robot could understand that to turn on a lamp there is a touch component (learning the correct force with which to pull the rope), a visual component (to see where to pull, as well as whether the light turns on or not), and an auditory component (to hear the click as it pulls the rope).</p><p>&ldquo;Those are all things I&rsquo;m trying to research for my thesis,&rdquo; she said.</p><p>The applications for this research are wide-ranging, but the enormous potential for the aging population is one of the aspects that interests Chu the most.</p><p>&ldquo;As people get older, how do I make sure they could retire and have a dignified lifestyle toward the end of their life?&rdquo; she asked.</p><p>Although she is still pursuing answers to these questions and is yet to defend her thesis, she was already recognized in the robotics community by <em>Robohub</em>&rsquo;s 2016 list <em><a href="http://robohub.org/25-women-in-robotics-you-need-to-know-about-2016/">25 Women in Robotics You Need to Know About</a></em>.</p><p>The inclusion on the list took Chu by surprise, but she said it was rewarding because it acknowledges the importance of the work she is doing.</p><p>&ldquo;As a Ph.D. student, the biggest fear is that you are going to write your thesis and no one is going to know about it,&rdquo; she said. &ldquo;That it&rsquo;s just a document that gets tossed aside and doesn&rsquo;t have an impact. It&rsquo;s nice to know that, on a high level, there&rsquo;s acknowledgement of what I&rsquo;m working on.&rdquo;</p><p>She plans to complete her degree within the next year, and still has plenty of goals she&rsquo;d like to achieve going forward. 
While she is undecided whether she&rsquo;ll pursue a career in academia or one working with a startup &ndash; a lifelong goal of hers &ndash; she knows she ultimately wants to impact society as a whole in any way she can.</p><p>&ldquo;I often joke with my wife about the ways in which we can try to save the world,&rdquo; she said. &ldquo;But all jokes aside, for me, technology for just technology&rsquo;s sake isn&rsquo;t enough. The goal really is: How can the things I&rsquo;m working on help improve the lives of those around us?&rdquo;</p><p><strong>National Robotics Week is April 8-16. Follow the <a href="http://www.cc.gatech.edu/">College of Computing</a> and <a href="http://www.gatech.edu/">Georgia Tech</a> pages for additional content throughout the week.</strong></p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1491490052</created>  <gmt_created>2017-04-06 14:47:32</gmt_created>  <changed>1491490052</changed>  <gmt_changed>2017-04-06 14:47:32</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[As National Robotics Week is set to begin, one of Georgia Tech's Ph.D. students is helping teach robots vital reasoning skills.]]></teaser>  <type>news</type>  <sentence><![CDATA[As National Robotics Week is set to begin, one of Georgia Tech's Ph.D. 
students is helping teach robots vital reasoning skills.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-04-06T00:00:00-04:00</dateline>  <iso_dateline>2017-04-06T00:00:00-04:00</iso_dateline>  <gmt_dateline>2017-04-06 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>589904</item>      </media>  <hg_media>          <item>          <nid>589904</nid>          <type>image</type>          <title><![CDATA[Vivian Chu 1]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Vivian Chu Main.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Vivian%20Chu%20Main.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Vivian%20Chu%20Main.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Vivian%2520Chu%2520Main.jpg?itok=O-SMN8NV]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Vivian Chu poses with the robot Curi, which she works with in her lab.]]></image_alt>                    <created>1491489769</created>          <gmt_created>2017-04-06 14:42:49</gmt_created>          <changed>1491489769</changed>          <gmt_changed>2017-04-06 14:42:49</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50877"><![CDATA[School of 
Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="8862"><![CDATA[Student Research]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="8862"><![CDATA[Student Research]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="172726"><![CDATA[Vivian Chu]]></keyword>          <keyword tid="106591"><![CDATA[25 Women in Robotics You Need to Know About]]></keyword>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="79181"><![CDATA[national robotics week]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="588532">  <title><![CDATA[CS Minor Providing Versatility for GT Alum Jon Eisen]]></title>  <uid>33939</uid>  <body><![CDATA[<p>For <strong>Jon Eisen</strong>, everything has always been about numbers.</p><p>The path that led him to speak on behalf of the prominent video game hub Activision, publishers of the popular <em>Call of Duty</em> franchise, at last week&rsquo;s <a href="http://gvu.gatech.edu/">GVU</a> Brown Bag event has been paved with them.</p><p>He majored in Applied Mathematics at the <a href="http://gatech.edu">Georgia Institute of Technology</a>, graduating with his degree in 2009 and carrying along a Computer 
Science minor for good measure. He spent time designing RADAR algorithms for Northrop Grumman Corporation in Baltimore, Md., and then worked as an application developer for a short period at Under Armour.</p><p>Even hobbies in his free time are unique because of specific numbers associated with them. Take the number 50, for example: The number of miles he plans to run in his first ultra-marathon, the Quad Rock 50, in May.</p><p>&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/ZPISUOrgzYI&quot; frameborder=&quot;0&quot; allowfullscreen&gt;&lt;/iframe&gt;</p><p>While his focus has always been on numbers and equations, though, Eisen said it has been his versatility &ndash; merging his background in math and computer science &ndash; that has helped him establish a career he&rsquo;s excited to pursue on a daily basis.</p><p>He&rsquo;s worked at Activision for just over a year, where he combines his fascination with raw numbers with a background in video games.</p><p>As a data analyst, he works to answer questions. For example, does the game play fast?</p><p>&ldquo;Well, that&rsquo;s a broad question,&rdquo; he explained. &ldquo;Answering that might involve asking more questions. It&rsquo;s very research-oriented. You might look at map size or how players play the game or the way different elements are designed.&rdquo;</p><p>It&rsquo;s a familiar process for Eisen, who has been a sports fan for years. Growing up a fan of the Atlanta Braves and eventually delving deeper into the world of fantasy sports, Eisen learned unique ways to look at the long list of available statistics.</p><p>&ldquo;I started getting into sabermetrics, advanced analytics in baseball,&rdquo; he said. &ldquo;I began to understand that there&rsquo;s a better way to look at stats than just at the typical ones. They help provide answers to questions like whether you should always intentionally walk Barry Bonds. That&rsquo;s an interesting question. 
The numbers help answer it. I got really into those question-answer analytics, and at Activision I had the opportunity to go deeper into this stuff.&rdquo;</p><p>He looks at win probability, value metrics, and any number of additional stats that help answer the question: Are you good?</p><p>Eisen doesn&rsquo;t work exclusively in programming, but his understanding of the development side has been a boon to his career, as well.</p><p><iframe width="560" height="315" src="https://www.youtube.com/embed/mU1BcvoFjgw" frameborder="0" allowfullscreen></iframe></p><p>He earned a minor in computer science at Georgia Tech after realizing he was on track to graduate with his degree in Applied Mathematics too early. In his major, he needed only 120 credit hours, and he carried a fair portion with him from high school.</p><p>He had already pursued a working knowledge of computer science beginning in his freshman year of high school, working with Flash and building websites, including a rush site for his fraternity, Alpha Epsilon Pi, in college.</p><p>He didn&rsquo;t pursue a major in the field because, he said, he wanted to learn it all on his own.</p><p>&ldquo;I was a kid,&rdquo; he said, laughing, by way of explanation.</p><p>With his extra time, though, he focused on computer science courses that filled gaps in his knowledge. He was glad that he did.</p><p>&ldquo;Some of those classes helped me get my first job,&rdquo; he said. &ldquo;When I was working on the RADAR stuff, I had this unique ability to merge two key disciplines. They had a lot of math people, and they had a lot of CS people. They had to take these algorithms done by the math people and put them into systems. At some point, I found that I was good at that. 
That helped me take interesting math algorithms and put them into scalable code.&rdquo;</p><p>It&rsquo;s something he said he has gotten back to doing at Activision.</p><p>&ldquo;Computing is taking over the world,&rdquo; he said. &ldquo;If you like your discipline, whatever that is, learning a bit about how to program with it is going to be very beneficial in creating your career.&rdquo;</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1489088325</created>  <gmt_created>2017-03-09 19:38:45</gmt_created>  <changed>1489088325</changed>  <gmt_changed>2017-03-09 19:38:45</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Jon Eisen graduated with a degree in Applied Mathematics, but a minor in Computer Science has helped improve his versatility.]]></teaser>  <type>news</type>  <sentence><![CDATA[Jon Eisen graduated with a degree in Applied Mathematics, but a minor in Computer Science has helped improve his versatility.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-03-09T00:00:00-05:00</dateline>  <iso_dateline>2017-03-09T00:00:00-05:00</iso_dateline>  <gmt_dateline>2017-03-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p><a href="mailto:david.mitchell@cc.gatech.edu">david.mitchell@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>588525</item>      </media>  <hg_media>          <item>          <nid>588525</nid>          <type>image</type>          <title><![CDATA[Jon Eisen]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Eisen1.JPG]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Eisen1.JPG]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Eisen1.JPG]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Eisen1.JPG?itok=hZ8NBM6i]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Jon Eisen speaks to a gathered audience at a GVU Brown Bag session.]]></image_alt>                    <created>1489086375</created>          <gmt_created>2017-03-09 19:06:15</gmt_created>          <changed>1489086375</changed>          <gmt_changed>2017-03-09 19:06:15</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.cc.gatech.edu/academics/degree-programs/minors]]></url>        <title><![CDATA[Minors - College of Computing]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="130"><![CDATA[Alumni]]></category>      </categories>  <news_terms>          <term tid="130"><![CDATA[Alumni]]></term>      </news_terms>  <keywords>          <keyword tid="1051"><![CDATA[Computer Science]]></keyword>          <keyword tid="2449"><![CDATA[video games]]></keyword>          <keyword tid="8586"><![CDATA[applied mathematics]]></keyword>          <keyword tid="171795"><![CDATA[data engineering]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="588223">  <title><![CDATA[IC Associate 
Professor Karen Liu Earns Google Research Faculty Award]]></title>  <uid>33939</uid>  <body><![CDATA[<p>School of Interactive Computing Associate Professor <strong>Karen Liu</strong> earned a <a href="https://research.googleblog.com/2017/02/google-research-awards-2016.html?m=1">Google Research Faculty Award</a> for her research titled <em>Closing the &ldquo;Reality Gap&rdquo;: A Machine Learning Approach to Contact Modeling</em>.</p><p>The research addresses the problem that motor skills learned in simulation often transfer poorly to physical hardware due to inaccurate parameters, idealized dynamics and contact models, or other unmodeled factors. Her research proposes to accurately compute contact states &ndash; like sticking, sliding, or breaking &ndash; and contact forces so that simulated results match real-world phenomena.</p><p>&ldquo;Our approach constructs a data-driven model that utilizes real-world observations to improve the accuracy of simulation,&rdquo; Liu wrote in the abstract of her research proposal. &ldquo;The key insight is that the contact problem can be broken down to two steps: predicting the next state of each contact point and calculating contact forces based on the prediction and current dynamic state.&rdquo;</p><p>As a proof-of-concept demonstration, Liu plans to show that a humanoid can perform tasks involving whole-body dynamic balance in the real world using the control policy trained by the improved simulator.</p><p>The award will fund one graduate student for one year. 
Liu is one of two recipients of the Google Research Faculty Award at the Georgia Institute of Technology, the other being fellow IC faculty member <strong><a href="http://www.ic.gatech.edu/news/588083/pair-ic-assistant-professors-earn-awards-research-visual-question-answering">Devi Parikh</a></strong>.</p>]]></body>  <author>David Mitchell</author>  <status>1</status>  <created>1488558304</created>  <gmt_created>2017-03-03 16:25:04</gmt_created>  <changed>1488558304</changed>  <gmt_changed>2017-03-03 16:25:04</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[School of Interactive Computing Associate Professor Karen Liu is the second faculty member to earn a Google Research Faculty Award.]]></teaser>  <type>news</type>  <sentence><![CDATA[School of Interactive Computing Associate Professor Karen Liu is the second faculty member to earn a Google Research Faculty Award.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2017-03-03T00:00:00-05:00</dateline>  <iso_dateline>2017-03-03T00:00:00-05:00</iso_dateline>  <gmt_dateline>2017-03-03 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[david.mitchell@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>David Mitchell</p><p>Communications Officer</p><p>david.mitchell@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>588222</item>      </media>  <hg_media>          <item>          <nid>588222</nid>          <type>image</type>          <title><![CDATA[Karen Liu new]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[karen-liu.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/karen-liu.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/karen-liu.jpg]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/karen-liu.jpg?itok=r1ByPG4E]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1488558126</created>          <gmt_created>2017-03-03 16:22:06</gmt_created>          <changed>1488558126</changed>          <gmt_changed>2017-03-03 16:22:06</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="2296"><![CDATA[Karen Liu]]></keyword>          <keyword tid="166848"><![CDATA[School of Interactive Computing]]></keyword>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="587980">  <title><![CDATA[Georgia Tech Shapes Research in Computer-Supported Cooperative Work as ACM Conference Turns 20]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Tech computing faculty,&nbsp;students and alumni&nbsp;will play a central part in the Association for Computing Machinery&rsquo;s&nbsp;Conference on Computer-Supported Cooperative Work and Social Computing in Portland, Ore., where the main program 
runs Feb. 27 &ndash; March 1.</p><p><a href="https://public.tableau.com/views/CSCW2017_GeorgiaTech/DashboardAll?:embed=y&amp;:display_count=no&amp;:showVizHome=no" target="_blank">Six faculty from the School of Interactive Computing</a> have a combined eight papers accepted at CSCW 2017, including two of six best papers at the conference. These Atlanta-based researchers&rsquo; work covers a <a href="http://www.cscw.gatech.edu/2017/" target="_blank">range of challenge areas</a>, including privacy for social media, fake news, online movements, health tracking and digital self-harm.</p><p><a href="https://public.tableau.com/views/CSCW2017_GeorgiaTechAlumni/DashboardAll?:embed=y&amp;:display_count=no&amp;:showVizHome=no" target="_blank">Georgia Tech alumni</a> are also making considerable contributions to the field, with 17 papers, including three honorable mention papers, by 13 authors.</p><p>CSCW convenes its 20th&nbsp;conference this year &ndash; it took&nbsp;place biennially from 1986 to 2010 and annually since &ndash; having become the premier venue for research in the design and use of technologies that affect groups, organizations, communities, and networks. 
The conference explores the technical, social, material, and theoretical challenges of designing technology to support collaborative work and life activities.</p><h2>Research Highlights</h2><p><strong>Likelihood of Dieting Success Lies Within Your Tweets</strong></p><p>There is a direct link between a person&rsquo;s attitude on social media and the likelihood that their dieting efforts will succeed.</p><p>In fact, Georgia Institute of Technology researchers have determined that dieting success &ndash; or failure &ndash; can be predicted with an accuracy rate of 77 percent based on the sentiment of the words and phrases one uses on Twitter.</p><p>&ldquo;We see that those who are more successful at sticking to their daily dieting goals express more positive sentiments and have a greater sense of achievement in their social interactions,&rdquo; said Assistant Professor <strong>Munmun De Choudhury</strong>, who is lead researcher on the project. &ldquo;They are focused on the future, generally more social and have larger social networks.&rdquo;</p><p><a href="http://www.news.gatech.edu/2017/02/21/likelihood-dieting-success-lies-within-your-tweets" target="_blank">Read More</a></p><p><strong>Finding Credibility Clues on Twitter</strong></p><p>By scanning 66 million tweets linked to nearly 1,400 real-world events, Georgia Institute of Technology researchers have built a language model that identifies words and phrases that lead to strong or weak perceived levels of credibility on Twitter. Their findings suggest that the words of millions of people on social media have considerable information about an event&rsquo;s credibility &ndash; even when an event is still ongoing.</p><p>&ldquo;There have been many studies about social media credibility in recent years, but very little is known about what types of words or phrases create credibility perceptions during rapidly unfolding events,&rdquo; said Tanushree Mitra, the Georgia Tech 
Ph.D. candidate who led the research.</p><p>The team looked at tweets surrounding events in 2014 and 2015, including the emergence of Ebola in West Africa, the Charlie Hebdo attack in Paris and the death of Eric Garner in New York City. They asked people to judge the posts on their credibility (from &ldquo;certainly accurate&rdquo; to &ldquo;certainly inaccurate&rdquo;). Then the team fed the words into a model that split them into 15 different linguistic categories. The classifications included positive and negative emotions, hedges and boosters, and anxiety.</p><p><a href="http://www.news.gatech.edu/2017/01/26/finding-credibility-clues-twitter" target="_blank">Read More</a></p><p><strong>Most of Facebook is &lsquo;Friends Only,&rsquo; But Public and Private Posts are Likely Similar</strong></p><p>Social media content, while driving a sizable portion of today&rsquo;s web traffic, is not all public, and according to a new study, about 75 percent of Facebook posts, or three in four, are shared only with friends or subsets of friends. This translates into billions of daily online conversations that are seen by only a few.</p><p><a href="http://www.munmund.net/pubs/CSCW17_PubPvt.pdf" target="_blank">Researchers from the Georgia Institute of Technology</a> enlisted almost 2,000 Facebook users &ndash; who shared their most recent posts &ndash; and used machine learning methods as well as qualitative hand coding to determine content types and topics for roughly 11,000 public and private posts. They analyzed patterns of choices for privacy settings and found, contrary to expectations, that content type is not a significant predictor of privacy settings. 
They did find, however, that some demographics such as gender and age are predictive, suggesting that privacy choices may be driven more by the attributes of the person than by the content of the posts.</p><p>A full look at Georgia Tech&#39;s work at CSCW 2017 can be found at <a href="http://cscw.gatech.edu" target="_blank">http://cscw.gatech.edu</a>.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1488210512</created>  <gmt_created>2017-02-27 15:48:32</gmt_created>  <changed>1488290412</changed>  <gmt_changed>2017-02-28 14:00:12</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech computing faculty, students and alumni will play a central part in the Association for Computing Machinery’s Conference on Computer-Supported Cooperative Work and Social Computing in Portland, Ore., Feb. 27 – March 1.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech computing faculty, students and alumni will play a central part in the Association for Computing Machinery’s Conference on Computer-Supported Cooperative Work and Social Computing in Portland, Ore., Feb. 27 – March 1.]]></sentence>  <summary><![CDATA[<p>Georgia Tech computing faculty,&nbsp;students and alumni&nbsp;will play a central part in the Association for Computing Machinery&rsquo;s&nbsp;Conference on Computer-Supported Cooperative Work and Social Computing in Portland, Ore., where the main program runs Feb. 
27 &ndash; March 1.</p>]]></summary>  <dateline>2017-02-27T00:00:00-05:00</dateline>  <iso_dateline>2017-02-27T00:00:00-05:00</iso_dateline>  <gmt_dateline>2017-02-27 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>587989</item>      </media>  <hg_media>          <item>          <nid>587989</nid>          <type>image</type>          <title><![CDATA[CSCW 2017 faculty authors]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Faculty authors.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Faculty%20authors.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Faculty%20authors.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Faculty%2520authors.png?itok=PvfmeK2v]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1488217831</created>          <gmt_created>2017-02-27 17:50:31</gmt_created>          <changed>1488217831</changed>          <gmt_changed>2017-02-27 17:50:31</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  
<userdata><![CDATA[]]></userdata></node><node id="585617">  <title><![CDATA[Jill Watson, Round Three]]></title>  <uid>27560</uid>  <body><![CDATA[<p>Georgia Tech is beginning its third semester using virtual teaching assistants (TAs) in an online course about artificial intelligence (AI). The new term comes one year after Jill Watson was introduced during Knowledge Based Artificial Intelligence (KBAI), a core course of the College of Computing&rsquo;s Master of Science in Computer Science degree program.</p><p>Jill, which is implemented on IBM&rsquo;s Watson platform, was first used during the spring 2016 semester to successfully answer particular types of frequently asked questions without the help of humans. The students weren&rsquo;t told her identity until the final day of the class.</p><p>Professor Ashok Goel then introduced two &ldquo;Jill Watsons&rdquo; this past fall to work alongside 13 human TAs. With Jill no longer a secret, Goel gave 14 of 15 TAs pseudonyms (only the head assistant kept his real identity). Jill Watson became Stacy Sisko and Ian Braun. Stacy interacted with the 400 enrolled students during class introductions and posted weekly updates; Ian answered common questions.</p><p>&ldquo;I told the students at the beginning of the semester that some of their TAs may or may not be computers,&rdquo; said Goel, a professor of computer science. &ldquo;Then I watched the chat rooms for months as they tried to differentiate between human and artificial intelligence.&rdquo;</p><p>Stacy dove into the discussion forum first. All members of the class were encouraged to introduce themselves. She responded to about half of them, chiming in with short paragraphs and relevant details. She received no human assistance.</p><p>&ldquo;If a student mentioned that they lived in, say, Chicago and worked at a specific company, for example, Stacy might comment on the city or the workplace,&rdquo; Goel said. 
&ldquo;If a student mentioned they were taking another Georgia Tech course, she would sometimes make a comment about the instructor.&rdquo;</p><p>Goel said there were a few mistakes, but nothing alarming. Stacy also wrote her own weekly previews of the content, then summarized on Fridays. Sometimes her wrap-ups referenced conversations among students. For instance, if she noticed a helpful, engaging online discussion from a few days prior, she would highlight it during her summary and encourage students to check it out for added insight.<br /><br />The other non-human TA, Ian, wasn&rsquo;t much different from the original Jill Watson. He answered routine questions typically asked each semester, such as the allowed length and format of written assignments.</p><p>&ldquo;Ian wasn&rsquo;t as efficient in fall as Jill was in spring. He didn&rsquo;t answer as many questions as we had expected,&rdquo; Goel admitted. Ian only posted responses if he was 97 percent confident. &ldquo;We&rsquo;re still sorting through the data, but it looks like some students may have deliberately tried to outsmart the computer by asking questions in new ways.&rdquo;</p><p>And because Ian could only pull answers from his episodic memory of previous offerings of the class, Goel thinks the variety of the student questions may have been a bit overwhelming. So his research team has developed a new version of Jill based on semantic analysis that he will introduce to the incoming class this semester.</p><p>At the end of the term, the students were polled about who was human and what was AI. Slightly more than 50 percent of the students correctly guessed that Stacy was a computer. Sixteen percent figured out that Ian wasn&rsquo;t human. On the other hand, more than 10 percent mistakenly thought two of the human TAs weren&rsquo;t real.</p><p>&ldquo;We&rsquo;re seeing more engagement in the course. For instance, in fall of 2015 before Jill Watson, each student averaged 32 comments during the semester. 
This fall it was close to 38 comments per student, on average,&rdquo; Goel said. &ldquo;I attribute this increased involvement partly to our AI TAs. They&rsquo;re able to respond to inquiries more quickly than us.&rdquo;</p><p>This isn&rsquo;t something Goel expected when he began the Jill Watson project. He just wanted to free up more time for his staff so they could concentrate on tasks computers can&rsquo;t do.</p><p>Also in the fall, approximately 40 students built chatbots (their own avatars of Jill Watson) that could converse about the course. This allowed the students to operationalize some of the techniques they were learning in the class.</p><p>&ldquo;When we started, I had no idea that this would blossom into a project with so many dimensions. It&rsquo;s been a bonanza of low-hanging fruit we&rsquo;re just starting to pluck.&rdquo;</p><p>Virtual teaching assistants as illustrated by Jill were recently recognized as <a href="http://www.chronicle.com/interactives/50-years-of-technology">one of the most transformative technologies to impact college</a> within the past 50 years by the Chronicle of Higher Education. &nbsp;&nbsp;&nbsp;</p>]]></body>  <author>Jason Maderer</author>  <status>1</status>  <created>1483973525</created>  <gmt_created>2017-01-09 14:52:05</gmt_created>  <changed>1483973525</changed>  <gmt_changed>2017-01-09 14:52:05</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A class on artificial intelligence will again include non-human teaching assistants.]]></teaser>  <type>news</type>  <sentence><![CDATA[A class on artificial intelligence will again include non-human teaching assistants.]]></sentence>  <summary><![CDATA[<p>Georgia Tech is beginning its third semester using virtual teaching assistants (TAs) in an online course about artificial intelligence (AI). 
The new term comes one year after Jill Watson was introduced during Knowledge Based Artificial Intelligence (KBAI), a core course of the College of Computing&rsquo;s Master of Science in Computer Science degree program.</p>]]></summary>  <dateline>2017-01-09T00:00:00-05:00</dateline>  <iso_dateline>2017-01-09T00:00:00-05:00</iso_dateline>  <gmt_dateline>2017-01-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[Georgia Tech course prepares for third semester with virtual teaching assistants]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[maderer@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Jason Maderer<br />National Media Relations<br />maderer@gatech.edu<br />404-660-2926</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>558051</item>          <item>487761</item>      </media>  <hg_media>          <item>          <nid>558051</nid>          <type>image</type>          <title><![CDATA[Jill Watson]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[original_0.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/original_0_0.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/original_0_0.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/original_0_0.jpeg?itok=GNBQ2nga]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Jill Watson]]></image_alt>                    <created>1470163198</created>          <gmt_created>2016-08-02 18:39:58</gmt_created>          <changed>1475895361</changed>          <gmt_changed>2016-10-08 02:56:01</gmt_changed>      </item>          <item>          <nid>487761</nid>          <type>image</type>          <title><![CDATA[Ashok Goel in the Classroom]]></title>          
<body><![CDATA[]]></body>                      <image_name><![CDATA[16c10303-p20-005.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/16c10303-p20-005_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/16c10303-p20-005_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/16c10303-p20-005_0.jpg?itok=cGprVbU5]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1453233601</created>          <gmt_created>2016-01-19 20:00:01</gmt_created>          <changed>1475895242</changed>          <gmt_changed>2016-10-08 02:54:02</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.omscs.gatech.edu/]]></url>        <title><![CDATA[Online Master of Science in Computer Science Program]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1214"><![CDATA[News Room]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>          <category tid="135"><![CDATA[Research]]></category>      </categories>  <news_terms>          <term tid="135"><![CDATA[Research]]></term>      </news_terms>  <keywords>          <keyword tid="169183"><![CDATA[Jill Watson]]></keyword>          <keyword tid="112431"><![CDATA[ashok goel]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71871"><![CDATA[Campus and Community]]></topic>          <topic 
tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="584775">  <title><![CDATA[Social Media Could Take Only a Fraction of Users’ Time With New Georgia Tech Method]]></title>  <uid>27592</uid>  <body><![CDATA[<p>A new visualization technique from the Georgia Institute of Technology could help users end the time-consuming habit of continually checking social media streams and endless updates. Where users might now commit minutes or hours on a single topic spanning thousands of posts, the Georgia Tech technique produces a <a href="https://mengdieh.github.io/SentenTreeDemo/app/demo.html" target="_blank">single compiled social post</a> that reads almost like a headline. Users are able to immediately understand the conversation and interact with the words and ideas that are being talked about the most, whether they are from an election, major sporting event, or latest product release.</p><p>&ldquo;The technique seeks a balance between showing the most frequent words and preserving sentence structure,&rdquo; says lead researcher Mengdie Hu, a Ph.D. student in Human-Centered Computing. &ldquo;It gives people a high-level overview of the most common expressions in a document collection and how they are connected to each other.&rdquo;</p><p>Implemented in a web browser, the visualization tool, called SentenTree (short for Sentence Tree), has been used to take almost a quarter of a million tweets shared in a 15-minute window of time during the 2014 World Cup and filter the conversation. The resulting single 100-word social post revealed that Brazil scored a goal in its own net, putting them down 0-1 in their match against Croatia. In the example post, &ldquo;World Cup&rdquo; and &ldquo;own goal&rdquo; are larger than other words, signaling that they appear more frequently. 
In the middle of and connecting these two phrases are &ldquo;2014,&rdquo; &ldquo;bad,&rdquo; and &ldquo;Brazil,&rdquo; which together give an idea of the larger social conversation. If users want more context, SentenTree allows them to hover over any word and drill down to see more details, including the number of times the phrases appear along with the original tweets.</p><p>&ldquo;Even if you don&rsquo;t know anything about soccer, there are visual cues to help users connect the concepts and play with the data,&rdquo; Hu says. &ldquo;The central idea behind SentenTree is to take a large social media dataset, find the most frequent sequences of words, and build a visualization out of them that mirrors the real-time conversation.&rdquo;</p><p>The researchers say that while there are numerous analytical tools for social media data that highlight concept relationships, topical changes, or physical locations, less common are tools that visualize the actual text content itself. SentenTree is designed to remedy this by consolidating, finding patterns in, and delivering useful content from many sources into one simple interactive view.</p><p>The algorithms developed for SentenTree analyze the unstructured text data &mdash; developing a baseline sequential pattern of similar ideas and sentiments, all the while keeping a sentence-like structure &mdash; then incrementally add new words that build on the pattern as the algorithms search the text and kick out duplicate language. This allows the visualization to be a concise, readable representation of multiple thousands of threads. The visualization is even modified in length, based on the size of the screen, and is usually between 100-200 words.&nbsp;</p><p>&ldquo;There is an unwieldy volume of unstructured text on the web that continues to grow&nbsp;explosively,&rdquo; says John Stasko, professor of Interactive Computing at Georgia Tech and part of the research team. 
&ldquo;Social media text includes rich information on the public&rsquo;s interests and opinions, and we hope this technique can start to uncover important patterns and ideas that exist in this data.&rdquo;</p><p>The Georgia Tech researchers are developing their tool to allow for a broader cross section of ideas to surface on the social web &ndash; anywhere from YouTube to Facebook to Reddit &ndash; instead of simply relying on what social media influencers, such as celebrities or prominent public figures, post on their channels.</p><p>SentenTree eventually will be available online for users to upload their own datasets to visualize. The work, presented in October at the IEEE Vis 2016 conference in Baltimore, Maryland, is published in the paper &ldquo;Visualizing Social Media Content with SentenTree.&rdquo;</p><p>###</p><p><em>This research is supported in part by the DARPA XDATA program and the National Science Foundation, Award IIS-1320537. The views and opinions expressed are those of the authors and do not necessarily represent the funding partners.</em></p><p>&nbsp;</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1481132272</created>  <gmt_created>2016-12-07 17:37:52</gmt_created>  <changed>1481221262</changed>  <gmt_changed>2016-12-08 18:21:02</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new visualization technique from the Georgia Institute of Technology could help users end the time-consuming habit of continually checking social media streams and endless updates.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new visualization technique from the Georgia Institute of Technology could help users end the time-consuming habit of continually checking social media streams and endless updates.]]></sentence>  <summary><![CDATA[<p>A new visualization technique from the Georgia Institute of Technology could help users end the time-consuming habit of continually checking social media streams and endless 
updates. Where users might now commit minutes or hours on a single topic spanning thousands of posts, the Georgia Tech technique produces a single compiled social post that reads almost like a headline.</p>]]></summary>  <dateline>2016-12-07T00:00:00-05:00</dateline>  <iso_dateline>2016-12-07T00:00:00-05:00</iso_dateline>  <gmt_dateline>2016-12-07 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />678.231.0787<br />Communications Officer<br />GVU Center and College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>584774</item>          <item>394731</item>          <item>584780</item>      </media>  <hg_media>          <item>          <nid>584774</nid>          <type>image</type>          <title><![CDATA[Sententree Visualization - Information Interfaces Group]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Sententree.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Sententree.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Sententree.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Sententree.jpg?itok=JA2KUCjS]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1481131947</created>          <gmt_created>2016-12-07 17:32:27</gmt_created>          <changed>1481131947</changed>          <gmt_changed>2016-12-07 17:32:27</gmt_changed>      </item>          <item>          <nid>394731</nid>          <type>image</type>          <title><![CDATA[John Stasko]]></title>          
<body><![CDATA[]]></body>                      <image_name><![CDATA[stasko14.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/stasko14.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/stasko14.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/stasko14.jpg?itok=v8FF8dAB]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[John Stasko]]></image_alt>                    <created>1449246346</created>          <gmt_created>2015-12-04 16:25:46</gmt_created>          <changed>1475895089</changed>          <gmt_changed>2016-10-08 02:51:29</gmt_changed>      </item>          <item>          <nid>584780</nid>          <type>image</type>          <title><![CDATA[Mengdie Hu]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Hu, Mengdie.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/Hu%2C%20Mengdie.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Hu%2C%20Mengdie.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/Hu%252C%2520Mengdie.jpg?itok=frv9A5Uq]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1481136357</created>          <gmt_created>2016-12-07 18:45:57</gmt_created>          <changed>1481136357</changed>          <gmt_changed>2016-12-07 18:45:57</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://www.cc.gatech.edu/gvu/ii/]]></url>        <title><![CDATA[Information Interfaces Group]]></title>      </link>          <link>        
<url><![CDATA[http://www.news.gatech.edu/2012/04/26/how-twitter-broke-its-biggest-story-wegotbinladen]]></url>        <title><![CDATA[How Twitter Broke Its Biggest Story, #WeGotBinLaden]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="11632"><![CDATA[john stasko]]></keyword>          <keyword tid="172916"><![CDATA[Mengdie Hu]]></keyword>          <keyword tid="7257"><![CDATA[visualization]]></keyword>          <keyword tid="172922"><![CDATA[information visualization]]></keyword>          <keyword tid="314"><![CDATA[twitter]]></keyword>          <keyword tid="172918"><![CDATA[world cup 2014]]></keyword>          <keyword tid="172921"><![CDATA[infoviz]]></keyword>          <keyword tid="4887"><![CDATA[GVU Center]]></keyword>          <keyword tid="172917"><![CDATA[sententree]]></keyword>          <keyword tid="172919"><![CDATA[Information Interfaces Group]]></keyword>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71901"><![CDATA[Society and Culture]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="584765">  <title><![CDATA[Analysis of 2016 AP Computer Science Testing Reveals Ongoing Need for Qualified High School Teachers]]></title>  <uid>32045</uid>  <body><![CDATA[<p>According to recently released analysis from the Georgia Institute of Technology, 54,379 students&nbsp;took the&nbsp;Advanced Placement (AP) Computer Science (CS) A exam in the United 
States in&nbsp;2016, setting a new record.</p><p>This&nbsp;is a 17.3 percent increase over the previous year and great news, said&nbsp;<a href="http://www.cc.gatech.edu/people/barbara-ericson"><strong>Barbara Ericson</strong></a>, director of computing outreach for the&nbsp;<a href="http://coweb.cc.gatech.edu/ice-gt/">Institute for Computing Education</a>&nbsp;(ICE) at Georgia Tech.</p><p>&ldquo;In 2012 fewer than 25,000 students took the exam. So, more than doubling in five years is pretty good growth,&rdquo; said Ericson.&nbsp;</p><p>Despite positive overall growth, however, a closer look at the data reveals mixed results for 2016.</p><p>While the number of female high school students taking the AP CS A exam last year increased by 25 percent over 2015, females still account for only 23 percent of exam takers. In eight states fewer than 10 females took the exam. Mississippi and Montana had no females take the exam.</p><h5><strong>Take an interactive look at <a href="https://public.tableau.com/views/APCSexamfemaletesttakers2016/Dashboard1?:embed=y&amp;:display_count=yes&amp;:showVizHome=no#9" target="_blank">female test takers by state</a></strong></h5><p>The number of African American students taking the AP CS A exam also increased by 14 percent this year, but the overall pass rate for these students decreased from 38 percent in the previous year to 33 percent in 2016. The top five states for the percentage of African Americans taking the exam in 2016 were: the District of Columbia, Maryland, Georgia, Oklahoma, and Louisiana. Nearly half of all states had fewer than 10 black students take the AP CS A exam.</p><p>Hispanic participation in the exam grew by 46 percent in 2016 with 6,256 students taking the test. The pass rate for this group increased just one percentage point to 42 percent during the same period. The top five states for the percentage of Hispanic students taking the exam were: New Mexico, Florida, Texas, Wyoming, and California. 
In all, 15 states had fewer than 10 Hispanics take the exam.</p><h5><strong>Moving the needle forward</strong></h5><p>&ldquo;We&rsquo;ve had positive overall growth, but we are still way below where we should be in general and especially for&nbsp;underrepresented groups,&rdquo; said Ericson. &ldquo;We need to be where AP calculus is, which had nearly 300,000 students taking the course this year.&rdquo;</p><p>To move the needle forward on this goal, Ericson said more qualified teachers are needed. &ldquo;The biggest bottleneck right now to achieving more participation and more diversity is that there are not nearly enough trained educators who can effectively teach and prepare students to succeed on the AP CS A exam,&rdquo; said Ericson.</p><h5><strong>Examine the <a href="http://home.cc.gatech.edu/ice-gt/595" target="_blank">complete results</a> of the 2016 analysis</strong>&nbsp;</h5><p>Although she doesn&rsquo;t see an immediate solution to the problem, Ericson is optimistic that the new AP CSP (computer&nbsp;science&nbsp;principles) course launched this year by the College Board will help bring more qualified teachers to the table. With more of a focus on problem solving, creativity, and the impact of computing innovations, the CSP course is primarily intended for non-CS majors.</p><p>&ldquo;Along with paving the way for more diversity in the A course,&rdquo; said Ericson, &ldquo;CSP is an easier place for teachers to get started if they don&rsquo;t have any prior experience. We&rsquo;ve developed several free interactive e-books intended to help teachers, especially with programming because that&rsquo;s the part they don&rsquo;t know or are afraid of. 
Once they&rsquo;ve mastered the CSP course, our hope is that they will move on to become qualified for the A&nbsp;course.&rdquo;</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1481124169</created>  <gmt_created>2016-12-07 15:22:49</gmt_created>  <changed>1481216542</changed>  <gmt_changed>2016-12-08 17:02:22</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Despite improvement, results of the 2016 AP CS A test show more high school teachers are needed.]]></teaser>  <type>news</type>  <sentence><![CDATA[Despite improvement, results of the 2016 AP CS A test show more high school teachers are needed.]]></sentence>  <summary><![CDATA[]]></summary>  <dateline>2016-12-07T00:00:00-05:00</dateline>  <iso_dateline>2016-12-07T00:00:00-05:00</iso_dateline>  <gmt_dateline>2016-12-07 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[albert.snedeker@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Albert &quot;Ben&quot; Snedeker, Communications Manager</p><p>404-894-7253</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>584767</item>      </media>  <hg_media>          <item>          <nid>584767</nid>          <type>image</type>          <title><![CDATA[2016 AP CS A female participation by state]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[AP CS A exam 2016.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/AP%20CS%20A%20exam%202016.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/AP%20CS%20A%20exam%202016.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/AP%2520CS%2520A%2520exam%25202016.jpg?itok=sbviEr63]]></image_740>            
<image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[2016 AP CS A female participation by state]]></image_alt>                    <created>1481124528</created>          <gmt_created>2016-12-07 15:28:48</gmt_created>          <changed>1481124528</changed>          <gmt_changed>2016-12-07 15:28:48</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[http://home.cc.gatech.edu/ice-gt/595]]></url>        <title><![CDATA[2016 AP CS A Exam Results Analysis]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="50875"><![CDATA[School of Computer Science]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>          <category tid="42911"><![CDATA[Education]]></category>      </categories>  <news_terms>          <term tid="42911"><![CDATA[Education]]></term>      </news_terms>  <keywords>          <keyword tid="172913"><![CDATA[AP CS A]]></keyword>          <keyword tid="87891"><![CDATA[Barb Ericson; Barbara Ericson; CS; AP Computer Science; Women; Minorities; Computer Science Education]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="583212">  <title><![CDATA[Learning Morse Code without Trying]]></title>  <uid>27560</uid>  <body><![CDATA[<p>It&rsquo;s not exactly beating something into someone&rsquo;s head. More like tapping it into the side.</p><p>Researchers at the Georgia Institute of Technology have developed a system that teaches people Morse code within four hours using a series of vibrations felt near the ear. 
Participants wearing Google Glass learned it without paying attention to the signals &mdash; they played games while feeling the taps and hearing the corresponding letters. After those few hours, they were 94 percent accurate keying a sentence that included every letter of the alphabet and 98 percent accurate writing codes for every letter.</p><p>This is the latest chapter of passive haptic learning (PHL) studies at Georgia Tech. The same method &mdash; using vibrations while participants aren&rsquo;t paying attention &mdash; <a href="http://www.news.gatech.edu/2014/06/23/wearable-computing-gloves-can-teach-braille-even-if-you%E2%80%99re-not-paying-attention">has taught people braille</a>, <a href="http://www.news.gatech.edu/2008/11/07/reinventing-way-people-learn-play-piano">how to play the piano</a> and <a href="http://www.news.gatech.edu/hg/item/140221">improved hand sensation for those with partial spinal cord injury. </a></p><p>The PHL projects are all led by Georgia Tech Professor Thad Starner and his Ph.D. student Caitlyn Seim. The team decided to use Glass for this study because it has both a built-in speaker and tapper (Glass&rsquo;s bone-conduction transducer).</p><p>In the study, participants played a game while feeling vibration taps between their temple and ear. The taps represented the dots and dashes of Morse code and passively &ldquo;taught&rdquo; users through their tactile senses &mdash; even while they were distracted by the game.&nbsp;</p><p>The taps were created when researchers sent a very low-frequency signal to Glass&rsquo;s speaker system. At less than 15 Hz, the signal was below hearing range but, because it was played very slowly, the sound was felt as a vibration.&nbsp;</p><p>Half of the participants in the study felt the vibration taps and heard a voice prompt for each corresponding letter. 
The other half &mdash; the control group &mdash; felt no taps to help them learn.</p><p>Participants were tested throughout the study on their knowledge of Morse code and their ability to type it.&nbsp; After less than four hours of feeling every letter, everyone was challenged to type the alphabet in Morse code in a final test.</p><p>The control group was accurate only half the time.&nbsp; Those who felt the passive cues were nearly perfect.</p><p>The research was recently presented in Germany at the 20<sup>th</sup> International Symposium on Wearable Computers.</p><p>&ldquo;Does this new study mean that people will rush out to learn Morse code? Probably not,&rdquo; said Starner. &ldquo;It shows that PHL lowers the barrier to learn text-entry methods &mdash; something we need for smartwatches and any text-entry that doesn&rsquo;t require you to look at your device or keyboard.&rdquo;</p><p>Previous research on PHL used custom hardware to provide the tactile stimuli, but here researchers use an existing wearable device.&nbsp;</p><p>&ldquo;This research also shows that other common devices with an actuator could be used for passive haptic learning,&rdquo; he says. &ldquo;Your smartwatch, Bluetooth headset, fitness tracker or phone.&rdquo;</p><p>&ldquo;In our Braille and piano PHL studies, people felt vibrations on their fingers, then used their fingers for the task,&rdquo; said Seim. &ldquo;This study was different and surprising. People were tapped on their heads, but the skill they learned was using their finger.&rdquo;</p><p>Seim&rsquo;s next study will go a step further, investigating whether PHL can teach people how to type on the trusted QWERTY keyboard. That would mean several letters assigned to the same finger, rather than using only one finger like Morse code.</p><p><em>The work is supported in part by the National Science Foundation (Grant Number 1217473). 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors. </em></p>]]></body>  <author>Jason Maderer</author>  <status>1</status>  <created>1477583378</created>  <gmt_created>2016-10-27 15:49:38</gmt_created>  <changed>1477583378</changed>  <gmt_changed>2016-10-27 15:49:38</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Researchers have developed a system that teaches people Morse code within four hours using a series of vibrations felt near the ear]]></teaser>  <type>news</type>  <sentence><![CDATA[Researchers have developed a system that teaches people Morse code within four hours using a series of vibrations felt near the ear]]></sentence>  <summary><![CDATA[<p>Researchers have developed a system that teaches people Morse code within four hours using a series of vibrations felt near the ear. Participants wearing Google Glass learned it without paying attention to the signals &mdash;they played games while feeling the taps and hearing the corresponding letters. 
After those few hours, they were 94 percent accurate keying a sentence that included every letter of the alphabet and 98 percent accurate writing codes for every letter.</p>]]></summary>  <dateline>2016-10-27T00:00:00-04:00</dateline>  <iso_dateline>2016-10-27T00:00:00-04:00</iso_dateline>  <gmt_dateline>2016-10-27 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[New study demonstrates silent, eyes-free text entry]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[maderer@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Jason Maderer<br />National Media Relations<br />maderer@gatech.edu<br />404-660-2926</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>416531</item>          <item>583210</item>          <item>583209</item>      </media>  <hg_media>          <item>          <nid>416531</nid>          <type>image</type>          <title><![CDATA[Thad Starner]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[thad_starner_2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/thad_starner_2_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/thad_starner_2_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/thad_starner_2_0.jpg?itok=rt_37qiZ]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Thad Starner]]></image_alt>                    <created>1449254258</created>          <gmt_created>2015-12-04 18:37:38</gmt_created>          <changed>1475895155</changed>          <gmt_changed>2016-10-08 02:52:35</gmt_changed>      </item>          <item>          <nid>583210</nid>          <type>image</type>          <title><![CDATA[Morse Code 2]]></title>          <body><![CDATA[]]></body>                      
<image_name><![CDATA[InputTest2.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/InputTest2.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/InputTest2.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/InputTest2.jpeg?itok=pO1v3EdQ]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1477581892</created>          <gmt_created>2016-10-27 15:24:52</gmt_created>          <changed>1477581892</changed>          <gmt_changed>2016-10-27 15:24:52</gmt_changed>      </item>          <item>          <nid>583209</nid>          <type>image</type>          <title><![CDATA[Morse Code 1]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[tap2.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/tap2.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/tap2.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/tap2.jpeg?itok=SatWqAiA]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1477581798</created>          <gmt_created>2016-10-27 15:23:18</gmt_created>          <changed>1477585830</changed>          <gmt_changed>2016-10-27 16:30:30</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1183"><![CDATA[Home]]></group>          <group id="1214"><![CDATA[News Room]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group 
id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>          <category tid="135"><![CDATA[Research]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="135"><![CDATA[Research]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="1944"><![CDATA[Thad Starner]]></keyword>          <keyword tid="82341"><![CDATA[Google Glass]]></keyword>          <keyword tid="132141"><![CDATA[wearables]]></keyword>          <keyword tid="172604"><![CDATA[Morse Code]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="582752">  <title><![CDATA[Wireless, Freely Behaving Rodent Cage Helps Scientists Collect More Reliable Data]]></title>  <uid>28466</uid>  <body><![CDATA[]]></body>  <author>Meghana Melkote</author>  <status>1</status>  <created>1476818612</created>  <gmt_created>2016-10-18 19:23:32</gmt_created>  <changed>1476823934</changed>  <gmt_changed>2016-10-18 20:52:14</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[System uses video game technology to track lab animal behavior]]></teaser>  <type>news</type>  <sentence><![CDATA[System uses video game technology to track lab animal behavior]]></sentence>  <summary><![CDATA[<p>Instead of building a better mouse trap, Georgia Institute of Technology researchers have built a better mouse cage. They&rsquo;ve created a system called EnerCage (Energized Cage) for scientific experiments on awake, freely behaving small animals. 
It wirelessly powers electronic devices and sensors traditionally used during rodent research experiments, but without the use of interconnect wires or bulky batteries. Their goal is to create as natural an environment within the cage as possible for mice and rats in order for scientists to obtain consistent and reliable results. The EnerCage system also uses Microsoft&rsquo;s Kinect video game technology to track the animals and recognize their activities, automating a process that typically requires researchers to stand and directly observe the rodents or watch countless hours of recorded footage to determine how they react to experiments.&nbsp;</p><p>Read the rest of the article here:&nbsp;<a href="http://www.news.gatech.edu/2016/09/28/wireless-freely-behaving-rodent-cage-helps-scientists-collect-more-reliable-data">http://www.news.gatech.edu/2016/09/28/wireless-freely-behaving-rodent-cage-helps-scientists-collect-more-reliable-data</a></p>]]></summary>  <dateline>2016-09-28T00:00:00-04:00</dateline>  <iso_dateline>2016-09-28T00:00:00-04:00</iso_dateline>  <gmt_dateline>2016-09-28 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>      </media>  <hg_media>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="538611">  <title><![CDATA[New Technique Controls Autonomous Vehicles in Extreme Conditions]]></title>  <uid>27303</uid>  <body><![CDATA[<p>A Georgia Institute of 
Technology research team has devised a novel way to help keep a driverless vehicle under control as it maneuvers at the edge of its handling limits. The approach could help make self-driving cars of the future safer under hazardous road conditions.</p><p>Researchers from Georgia Tech’s Daniel Guggenheim School of Aerospace Engineering (AE) and the School of Interactive Computing (IC) have assessed the new technology by racing, sliding, and jumping one-fifth-scale, fully autonomous auto-rally cars at the equivalent of 90 mph. The technique uses advanced algorithms and onboard computing, in concert with installed sensing devices, to increase vehicular stability while maintaining performance.</p><p>The work, tested at the Georgia Tech Autonomous Racing Facility, is sponsored by the U.S. Army Research Office. A paper covering this research was presented at the recent International Conference on Robotics and Automation (ICRA), held May 16-21.</p><p>“An autonomous vehicle should be able to handle any condition, not just drive on the highway under normal conditions,” said Panagiotis Tsiotras, an AE professor who is an expert on the mathematics behind rally-car racing control. “One of our principal goals is to infuse some of the expert techniques of human drivers into the brains of these autonomous vehicles.”</p><p>Traditional robotic-vehicle techniques use the same control approach whether a vehicle is driving normally or at the edge of roadway adhesion, Tsiotras explained. The Georgia Tech method – known as model predictive path integral control (MPPI) – was developed specifically to address the non-linear dynamics involved in controlling a vehicle near its friction limits. <br /> <br /><strong>Utilizing Advanced Concepts</strong></p><p>“Aggressive driving in a robotic vehicle – maneuvering at the edge – is a unique control problem involving a highly complex system,” said Evangelos Theodorou, an AE assistant professor who is leading the project. 
“However, by merging statistical physics with control theory, and utilizing leading-edge computation, we can create a new perspective, a new framework, for control of autonomous systems.”</p><p>The Georgia Tech researchers used a stochastic trajectory-optimization capability, based on a path-integral approach, to create their MPPI control algorithm, Theodorou explained. Using statistical methods, the team integrated large amounts of handling-related information, together with data on the dynamics of the vehicular system, to compute the most stable trajectories from myriad possibilities.</p><p>Processed by the high-power graphics processing unit (GPU) that the vehicle carries, the MPPI control algorithm continuously samples data coming from global positioning system (GPS) hardware, inertial motion sensors, and other sensors. The onboard hardware-software system performs real-time analysis of a vast number of possible trajectories and relays optimal handling decisions to the vehicle moment by moment.</p><p>In essence, the MPPI approach combines the planning and execution of optimized handling decisions into a single highly efficient phase. It’s regarded as the first technology to carry out this computationally demanding task; in the past, optimal-control data inputs could not be processed in real time.<br /> <br /><strong>Fully Autonomous Vehicles</strong></p><p>The researchers’ two auto-rally vehicles – custom built by the team – utilize special electric motors to achieve the right balance between weight and power. The cars carry a motherboard with a quad-core processor, a potent GPU, and a battery.</p><p>Each vehicle also has two forward-facing cameras, an inertial measurement unit, and a GPS receiver, along with sophisticated wheel-speed sensors. The power, navigation, and computation equipment is housed in a rugged aluminum enclosure able to withstand violent rollovers. 
Each vehicle weighs about 48 pounds and is about three feet long.</p><p>These rolling robots are able to test the team’s control algorithms without any need for off-vehicle devices or computation, except for a nearby GPS receiver. The onboard GPU lets the MPPI algorithm sample more than 2,500 trajectories, each 2.5 seconds long, in under 1/60 of a second.</p><p>An important aspect of the team’s autonomous-control approach centers on the concept of “costs” – key elements of system functionality. Several cost components must be carefully matched to achieve optimal performance.</p><p>In the case of the Georgia Tech vehicles, the costs consist of three main areas: the cost for staying on the track, the cost for achieving a desired velocity, and the cost of the control system. A sideslip-angle cost was also added to improve vehicle stability.</p><p>The cost approach is important to enabling a robotic vehicle to maximize speed while staying under control, explained James Rehg, a professor in the Georgia Tech School of Interactive Computing who is collaborating with Theodorou and Tsiotras.</p><p>It’s a complex balancing act, Rehg said. For example, when the researchers reduced one cost term to try to prevent vehicle sliding, they found that drifting behavior increased.</p><p>“What we're talking about here is using the MPPI algorithm to achieve relative entropy minimization – and adjusting costs in the most effective way is a big part of that,” he said. 
“To achieve the optimal combination of control and performance in an autonomous vehicle is definitely a non-trivial problem.”</p><p><strong>Research News</strong><br /><strong>Georgia Institute of Technology</strong><br /><strong>177 North Avenue</strong><br /><strong>Atlanta, Georgia 30332-0181 USA</strong></p><p><strong>Media Relations Contacts</strong>: Jason Maderer (<a href="mailto:jason.maderer@comm.gatech.edu">jason.maderer@comm.gatech.edu</a>) (404-385-2966) or John Toon (<a href="mailto:jtoon@gatech.edu">jtoon@gatech.edu</a>) (404-894-6986).</p><p><strong>Writer</strong>: Rick Robinson</p>]]></body>  <author>John Toon</author>  <status>1</status>  <created>1463997854</created>  <gmt_created>2016-05-23 10:04:14</gmt_created>  <changed>1475896902</changed>  <gmt_changed>2016-10-08 03:21:42</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers have devised a novel way to help keep a driverless vehicle under control as it maneuvers at the edge of its handling limits.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers have devised a novel way to help keep a driverless vehicle under control as it maneuvers at the edge of its handling limits.]]></sentence>  <summary><![CDATA[<p>A Georgia Institute of Technology research team has devised a novel way to help keep a driverless vehicle under control as it maneuvers at the edge of its handling limits. 
The approach could help make self-driving cars of the future safer under hazardous road conditions.&nbsp;</p>]]></summary>  <dateline>2016-05-23T00:00:00-04:00</dateline>  <iso_dateline>2016-05-23T00:00:00-04:00</iso_dateline>  <gmt_dateline>2016-05-23 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[MPPI strategy helps self-driving, robotic vehicles maintain control at edge of handling limits]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jason.maderer@comm.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Jason Maderer</p><p><a href="mailto:jason.maderer@comm.gatech.edu">jason.maderer@comm.gatech.edu</a></p><p>(404) 385-2966</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>538541</item>          <item>538561</item>          <item>538571</item>      </media>  <hg_media>          <item>          <nid>538541</nid>          <type>image</type>          <title><![CDATA[autonomous racing vehicle]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[autonomoous-racing1-horiz.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/autonomoous-racing1-horiz.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/autonomoous-racing1-horiz.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/autonomoous-racing1-horiz.jpg?itok=S3vKHyUj]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[autonomous racing vehicle]]></image_alt>                    <created>1464703200</created>          <gmt_created>2016-05-31 14:00:00</gmt_created>          <changed>1475895326</changed>          <gmt_changed>2016-10-08 02:55:26</gmt_changed>      </item>          <item>          <nid>538561</nid>          <type>image</type>          
<title><![CDATA[Researchers with autonomous racing vehicle]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[autonomous-racing2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/autonomous-racing2.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/autonomous-racing2.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/autonomous-racing2.jpg?itok=6wjNGZgU]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Researchers with autonomous racing vehicle]]></image_alt>                    <created>1464703200</created>          <gmt_created>2016-05-31 14:00:00</gmt_created>          <changed>1475895326</changed>          <gmt_changed>2016-10-08 02:55:26</gmt_changed>      </item>          <item>          <nid>538571</nid>          <type>image</type>          <title><![CDATA[autonomous racing vehicle2]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[autonomous-racing1.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/autonomous-racing1.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/autonomous-racing1.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/autonomous-racing1.jpg?itok=1D4XozQ1]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[autonomous racing vehicle2]]></image_alt>                    <created>1464703200</created>          <gmt_created>2016-05-31 14:00:00</gmt_created>          <changed>1475895326</changed>          <gmt_changed>2016-10-08 02:55:26</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  
<groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>          <category tid="136"><![CDATA[Aerospace]]></category>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="147"><![CDATA[Military Technology]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="136"><![CDATA[Aerospace]]></term>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="147"><![CDATA[Military Technology]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="7264"><![CDATA[autonomous]]></keyword>          <keyword tid="97281"><![CDATA[autonomous vehicles]]></keyword>          <keyword tid="172051"><![CDATA[control system]]></keyword>          <keyword tid="170305"><![CDATA[driverless]]></keyword>          <keyword tid="205"><![CDATA[GPU]]></keyword>          <keyword tid="667"><![CDATA[robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="534651">  <title><![CDATA[Georgia Tech Research Finds Fan Communities Are Reshaping the Social Web for the Better]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Modern fan groups predate the Internet by more than half a century (think Star Trek conventions), and their shared interests include everything from science fiction to knitting. But replicating the connections fans make in person in a digital space has proved difficult. 
Instead, groups with special interests are often forced onto Facebook and other social media with a one-size-fits-all approach to interacting online.</p><p>In a new study, Georgia Institute of Technology researchers have found one group of fan fiction writers that has created a successful online community, which might serve as a model to help make the future social web markedly different from today’s landscape.</p><p>By adopting a user-centric approach to design, this community has created a rarity on the web: a “digital commons” without advertising, where harassment is almost nonexistent and a large installed audience enjoys a culture of genuine diversity.</p><p>The study, from Georgia Tech and University of Colorado-Boulder, is based on the website <a href="https://archiveofourown.org/">Archive of Our Own</a> (AO3), an 840,000-member community of fan fiction or “fanfic” writers who post and share user-generated content. The site was launched in 2008 and boasts nearly 2 million story posts to date. Its web traffic outpaces such heavyweights as CareerBuilder and FoxSports, among others, ranking number 418 in U.S. web metrics, according to <a href="http://alexa.com/">alexa.com</a>.</p><p>“AO3’s success demonstrates how beneficial it is to have a technology’s users as part of its development team,” said Casey Fiesler, lead researcher on the study while a Ph.D. candidate at Georgia Tech, and now an assistant professor at University of Colorado-Boulder.</p><p>“This is particularly striking when users are mostly women, who are traditionally underrepresented in tech. Because there was no existing technology that reflected their values, they built their own and it has been massively successful.”</p><p>A small team of coders, coordinators and designers from the ranks of AO3 members took input from users and coupled it with the guiding values of the fan fiction community – which are accessibility and inclusivity – to create the basic structure of AO3. 
After more than eight years, this structure remains largely unchanged.</p><p>During interviews with users and developers, researchers discovered that AO3’s intentional design approach, which baked the ethos of the community right into the website, accounts for much of the site’s organic growth and success.</p><p>“What makes the rise of this online platform exceptional is that it was built primarily by its fans, some of whom started with little or no programming experience,” said Amy Bruckman, a professor of Interactive Computing at Georgia Tech and author on the study.</p><p>She added, “Fanfic writers, mostly women, who felt exploited or that other platforms weren’t meeting their needs, started this open source project and invited the larger community of fanfic writers to provide input. AO3 is a case study in building a digital commons around a group of users and addressing nuanced technical issues in order to successfully engage the community.”</p><p>One of the technical issues AO3 faced early on, tag structure, has since become a favorite feature and essential to the website’s success. Designers did not limit what or how many tags can be used with published stories, but rather created an open-ended system. AO3 “tag wranglers,” member volunteers, manually combine tags submitted by users (such as “mermaid,” “merman,” and “merfolk”) into one meta tag (“merpeople”), allowing for a robust search of multiple terms.</p><p>This level of control allows users to find a wide cross section of relevant content, something that is often not possible on other platforms beyond giant search engines, according to the research. Fiesler notes that the tag system also gives writers more control over how to describe their work, and this contributes to the inclusiveness and diversity of the community.</p><p>But like any online space, there are competing values among users. 
Anonymity, like elsewhere on the web, can allow for more openness and sharing, but it can also invite harassment. To limit this, the AO3 site allows users to post comments anonymously, but it also allows users to turn off incoming anonymous comments so they do not have to see them. The site also prohibits the intentional “outing” (revealing real identities) of users, does not offer tiered accounts and never collects personal data. All of this means the AO3 community can enjoy a high degree of privacy while respecting the rights of all of its users.</p><p>Although AO3 makes every effort to limit harassment, it does not censor or restrict content on the site, unless it is illegal. However, to ensure readers know they are reading content “at their own risk,” warning labels are required on mature content that is posted.</p><p>Another concern among users is how to preserve the entirety of the archive while also respecting users’ rights to erase their own work. AO3 again turned to its members for a solution. For writers wanting to remove their fiction, the site gives them the option to “orphan” their work. This removes their pseudonym or name from the work, but allows the content to remain in the community.</p><p>“Other sites would do well to understand their users as well as AO3 does in order to achieve long-term goals and address some of the emerging issues on the social web, such as those involving harassment, privacy, security and sustainability,” says Fiesler.</p><p>The research, “<a href="https://cfiesler.files.wordpress.com/2016/02/chi2016_ao3_fiesler.pdf">An Archive of Their Own: A Case Study of Feminist HCI and Values in Design</a>,” co-authored by Fiesler, Bruckman and Shannon Morrison (a former visiting undergraduate at Georgia Tech), will be presented at CHI 2016, the Association for Computing Machinery’s Conference on Human Factors in Computing Systems, taking place May 7-12 in San Jose, Calif. 
The conference is the largest gathering of human-computer interaction researchers worldwide, with more than 2,000 authors in this year’s technical program.</p><p><em>Research was funded by NSF IIS Award #1216347. The views expressed are those of the researchers and do not necessarily represent those of the National Science Foundation.</em></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1462794268</created>  <gmt_created>2016-05-09 11:44:28</gmt_created>  <changed>1475896899</changed>  <gmt_changed>2016-10-08 03:21:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[In a new study, Georgia Institute of Technology researchers have found that one successful online community could serve as a model to help make the future social web a safer, more inclusive space.]]></teaser>  <type>news</type>  <sentence><![CDATA[In a new study, Georgia Institute of Technology researchers have found that one successful online community could serve as a model to help make the future social web a safer, more inclusive space.]]></sentence>  <summary><![CDATA[<p>In a new study, Georgia Institute of Technology researchers have found that one successful online community could serve as a model to help make the future social web a safer, more inclusive space.</p>]]></summary>  <dateline>2016-05-09T00:00:00-04:00</dateline>  <iso_dateline>2016-05-09T00:00:00-04:00</iso_dateline>  <gmt_dateline>2016-05-09 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />College of Computing, GVU Center<br />678.231.0787</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>534751</item>          <item>534561</item>      </media>  <hg_media>          <item>      
    <nid>534751</nid>          <type>image</type>          <title><![CDATA[Casey Fiesler and Amy Bruckman]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[casey_and_amy_web.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/casey_and_amy_web_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/casey_and_amy_web_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/casey_and_amy_web_0.jpg?itok=D4G9enmI]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Casey Fiesler and Amy Bruckman]]></image_alt>                    <created>1462910400</created>          <gmt_created>2016-05-10 20:00:00</gmt_created>          <changed>1475895319</changed>          <gmt_changed>2016-10-08 02:55:19</gmt_changed>      </item>          <item>          <nid>534561</nid>          <type>image</type>          <title><![CDATA[CHI 2016 - Web Culture Research, Archive of Our Own]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[a03_merpeople_screenshot.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/a03_merpeople_screenshot.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/a03_merpeople_screenshot.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/a03_merpeople_screenshot.jpg?itok=xFs0CB-o]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[CHI 2016 - Web Culture Research, Archive of Our Own]]></image_alt>                    <created>1462892400</created>          <gmt_created>2016-05-10 15:00:00</gmt_created>          <changed>1475895319</changed>          
<gmt_changed>2016-10-08 02:55:19</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>          <category tid="143"><![CDATA[Digital Media and Entertainment]]></category>      </categories>  <news_terms>          <term tid="143"><![CDATA[Digital Media and Entertainment]]></term>      </news_terms>  <keywords>          <keyword tid="167543"><![CDATA[social media]]></keyword>          <keyword tid="172017"><![CDATA[web culture]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>          <topic tid="71901"><![CDATA[Society and Culture]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="513541">  <title><![CDATA[‘Civic Computing’ workshop leads an unlikely group of youth to help advance metro city’s vision]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Defining community in the digital age is often a nuanced exercise that involves looking at social connections far beyond where one works and lives. But even in an age of tweets, texts, and video chats, young people are willing to use their voice to support and shape the communities in which they live.&nbsp;</p><p>One such group of College Park students—who are completing their high school credits at The Bridge Academy—recently participated in the Georgia Tech design[ED] Lab workshop where they reviewed College Park’s 20-year Comprehensive Plan (2011 – 2031) to identify community issues and create computing technology solutions that could enhance community engagement and increase career opportunities for young people. 
The students chose to address three key issues from the city’s policy guide: perception of crime, decreasing standardized test scores, and impact of crime on youth.</p><p>“For six weeks, the students were exposed to a design-thinking process and tools for creating user-focused technology that gave them an understanding of how to frame and tackle challenges within their community,” says Monet Spells, graduate student in the Master of Science in Human-Computer Interaction program and workshop organizer.</p><p>Students in the program brainstormed solutions, iterated on the prototypes, and critiqued their peers’ work to come up with three viable technology concepts. These concepts were displayed in February at a special event open to the public at the Museum of Design Atlanta.&nbsp;</p><p>“Giving students an authentic opportunity to present their work—such as at the MODA public exhibit—acts as a motivation for students to engage with learning, take ownership of their projects, and to see their efforts pay off,” says Betsy DiSalvo, assistant professor in Interactive Computing and Spells’ advisor.&nbsp;</p><h4>Results included:&nbsp;</h4><ul><li>A physical prototype and supporting mobile application wireframe to change the perception of crime for the benefit of College Park citizens and businesses by highlighting positive things happening in the community.</li><li>A customized test preparation system, using hip-hop music to motivate and prepare students to increase standardized test scores, which could otherwise limit post-secondary and future opportunities.</li><li>A social network to address the impact of crime on College Park youth, by providing tips for resisting peer pressure, sharing community events, and facilitating a healthy relationship with law enforcement.</li></ul><p>“It was very important that the students' solutions addressed practical and verified problems in the community,” says Spells. 
“The College Park Comprehensive Plan allowed us to pursue validated, researched, high-level problem spaces that the community’s elected officials are seeking to address over the next 20 years.”</p><p>Spells says the lab also aimed to expose underrepresented minorities to design-thinking as a method to solve important problems and empower young people with the tools to make a difference and inspire change. For example, some students, of their own accord, are learning the technical skills required to pursue their ideas beyond the workshop by refining their designs and apps.</p><p>“The public exhibit brought the work of these young people and their insights about the city to the attention of city council members, who have invited the students to present their ideas in other public forums,” says DiSalvo.</p><p>Spells, who will graduate in May, is part of the&nbsp;<a href="http://catlab.gatech.edu/" target="_blank">Culture and Technology Lab</a>&nbsp;at Georgia Tech, directed by DiSalvo, which aims to understand how culture impacts people’s practices with technology and to design new learning interventions informed by those understandings. 
Spells was also named the GVU Center’s inaugural&nbsp;<a href="http://gvu.gatech.edu/foley-scholar-finalistsgvu-dist-masters-student-2015-16-feature-story" target="_blank">Distinguished Master’s Student</a>&nbsp;this academic year for her work with underrepresented minorities and women in technology-enhanced dance performance.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1458041741</created>  <gmt_created>2016-03-15 11:35:41</gmt_created>  <changed>1475896865</changed>  <gmt_changed>2016-10-08 03:21:05</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Even in an age of tweets, texts, and video chats, young people are willing to use their voice to support and shape the communities in which they live.]]></teaser>  <type>news</type>  <sentence><![CDATA[Even in an age of tweets, texts, and video chats, young people are willing to use their voice to support and shape the communities in which they live.]]></sentence>  <summary><![CDATA[<p>College Park students—who are completing their high school credits at The Bridge Academy—recently participated in the Georgia Tech design[ED] Lab workshop where they reviewed College Park’s 20-year Comprehensive Plan (2011 – 2031) to identify community issues and create computing technology solutions that could enhance community engagement and increase career opportunities for young people.</p>]]></summary>  <dateline>2016-03-15T00:00:00-04:00</dateline>  <iso_dateline>2016-03-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2016-03-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Joshua Preston<br />GVU Center and College of Computing<br /><a href="mailto:jpreston@cc.gatech.edu">jpreston@cc.gatech.edu</a><br />678.231.0787</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          
<item>513551</item>          <item>513561</item>          <item>355701</item>      </media>  <hg_media>          <item>          <nid>513551</nid>          <type>image</type>          <title><![CDATA[College Park students at MODA with MS HCI student Monet Spells]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[designedlab_workshop_for_college_park_students_cr.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/designedlab_workshop_for_college_park_students_cr_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/designedlab_workshop_for_college_park_students_cr_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/designedlab_workshop_for_college_park_students_cr_0.jpg?itok=5N71XnWM]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[College Park students at MODA with MS HCI student Monet Spells]]></image_alt>                    <created>1458923790</created>          <gmt_created>2016-03-25 16:36:30</gmt_created>          <changed>1475895277</changed>          <gmt_changed>2016-10-08 02:54:37</gmt_changed>      </item>          <item>          <nid>513561</nid>          <type>image</type>          <title><![CDATA[Monet Spells (MS HCI student)]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[pub_monet_spells.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/pub_monet_spells_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/pub_monet_spells_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/pub_monet_spells_0.jpg?itok=vtN2-60b]]></image_740>            <image_mime>image/jpeg</image_mime>   
         <image_alt><![CDATA[Monet Spells (MS HCI student)]]></image_alt>                    <created>1458923790</created>          <gmt_created>2016-03-25 16:36:30</gmt_created>          <changed>1475895277</changed>          <gmt_changed>2016-10-08 02:54:37</gmt_changed>      </item>          <item>          <nid>355701</nid>          <type>image</type>          <title><![CDATA[Betsy DiSalvo - Compressed]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[betsy-disalvo.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/betsy-disalvo.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/betsy-disalvo.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/betsy-disalvo.jpg?itok=JtMqGFNg]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Betsy DiSalvo - Compressed]]></image_alt>                    <created>1449245756</created>          <gmt_created>2015-12-04 16:15:56</gmt_created>          <changed>1475895087</changed>          <gmt_changed>2016-10-08 02:51:27</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="469201">  <title><![CDATA[Georgia Tech trains Watson AI to 'chat,' spark more creativity in humans]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Institute of Technology researchers are exploring and pushing the boundaries of artificial 
intelligence (AI) by partnering with one of AI’s most notable citizens — IBM’s Watson — to advance how computers could help humans creatively solve problems in a wide variety of professions.</p><p>“Searching Google still requires a lot of search,” says Ashok Goel, professor at Georgia Tech’s School of Interactive Computing. “Imagine if you could ask Google a complicated question and it immediately responded with your answer — not just a list of links to manually open. That’s what we did with Watson.”</p><p>Watson was trained by student teams in a class at Georgia Tech using 1,200 question-answer pairs (200 for each of six teams), which allowed them to “chat” with Watson and seek out inspiration for big design challenges in areas such as engineering, architecture, systems, and computing. The teams worked with the AI to learn about solutions that could be replicated from the natural world — something known as biologically inspired design — after first feeding Watson several hundred biology articles from <em>Biologue</em>, an interactive biology repository. Teams then posed questions to Watson about the research it had learned.</p><p>Questions included, “How do you make a better desalination process for consuming sea water?” Animals, it turns out, have a variety of answers for this, such as how seagulls filter out seawater salt through special glands. Another question asked, “How can manufacturers develop better solar cells for long-term space travel?” One answer: Replicate how plants in harsh climates use high-temperature fibrous insulation material to regulate temperature. IBM’s Watson culled answers for students from the <em>Biologue</em> articles in a fraction of a second.</p><p>Watson effectively acted as an intelligent sounding board to steer students through what would otherwise be a daunting task of parsing a wide volume of research that may fall outside their expertise. 
This approach could assist professionals in a variety of fields with problem solving by allowing them to ask questions and receive answers at the pace of natural conversation.</p><p>Georgia Tech discovered that Watson’s ability to retrieve natural language information would allow a novice to quickly “train up” about complex topics and better determine whether their idea or hypothesis is worth pursuing.</p><p>The students call their technique “GT-Watson Plus,” a moniker reflecting the capabilities they layered onto the standard system. In addition to the ability to “chat” on a topic, this version of Watson prompts users with alternate ways to ask questions for better results. Those results are packaged in an intuitive presentation — visualized as a “treetop” where each answer is a “leaf” that varies in size based on its weighted importance. This allows the average person to navigate results more easily on a given topic.</p><p>“Researchers are provided a quickly digestible visual map of the concepts relevant to the query and the degree to which they are relevant,” says Goel, who taught the course. “We were able to add more semantic and contextual meaning to Watson to give some notion of a conversation with the AI.”</p><p>Goel plans to investigate other areas with Watson such as online learning and healthcare.</p><p><em>The work will be presented at the Association for the Advancement of Artificial Intelligence (AAAI) 2015 Fall Symposium on Cognitive Assistance in Government, Nov. 
12-14, in Arlington, Va.</em></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1447326755</created>  <gmt_created>2015-11-12 11:12:35</gmt_created>  <changed>1475896798</changed>  <gmt_changed>2016-10-08 03:19:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Institute of Technology researchers are exploring and pushing the boundaries of artificial intelligence (AI) by partnering with one of AI’s most notable citizens — IBM’s Watson — to advance how computers help humans creatively solve problems.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Institute of Technology researchers are exploring and pushing the boundaries of artificial intelligence (AI) by partnering with one of AI’s most notable citizens — IBM’s Watson — to advance how computers help humans creatively solve problems.]]></sentence>  <summary><![CDATA[<p>Georgia Institute of Technology researchers are exploring and pushing the boundaries of artificial intelligence (AI) by partnering with one of AI’s most notable citizens — IBM’s Watson — to advance how computers could help humans creatively solve problems in a wide variety of professions.</p>]]></summary>  <dateline>2015-11-12T00:00:00-05:00</dateline>  <iso_dateline>2015-11-12T00:00:00-05:00</iso_dateline>  <gmt_dateline>2015-11-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />Research Communications Officer<br />GVU Center and College of Computing<br /> 678.231.0787</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>469491</item>          <item>469221</item>          <item>469251</item>      </media>  <hg_media>          <item>          <nid>469491</nid>          <type>image</type>          
<title><![CDATA[Watson Screenshot]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[screen_shot_of_watson.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/screen_shot_of_watson_0.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/screen_shot_of_watson_0.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/screen_shot_of_watson_0.png?itok=1JnvdZ4F]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Watson Screenshot]]></image_alt>                    <created>1449257160</created>          <gmt_created>2015-12-04 19:26:00</gmt_created>          <changed>1475895218</changed>          <gmt_changed>2016-10-08 02:53:38</gmt_changed>      </item>          <item>          <nid>469221</nid>          <type>image</type>          <title><![CDATA[Ashok Goel]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ashok_goel_teaching2_cr.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/ashok_goel_teaching2_cr_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/ashok_goel_teaching2_cr_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/ashok_goel_teaching2_cr_0.jpg?itok=-AgpxFkG]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Ashok Goel]]></image_alt>                    <created>1449257160</created>          <gmt_created>2015-12-04 19:26:00</gmt_created>          <changed>1475895218</changed>          <gmt_changed>2016-10-08 02:53:38</gmt_changed>      </item>          <item>          <nid>469251</nid>          <type>image</type>          <title><![CDATA[GT-Watson 
Plus Concept Results]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[watson_graphic_-_treemap_for_biology_concepts.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/watson_graphic_-_treemap_for_biology_concepts_0.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/watson_graphic_-_treemap_for_biology_concepts_0.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/watson_graphic_-_treemap_for_biology_concepts_0.png?itok=JMYzFfEK]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[GT-Watson Plus Concept Results]]></image_alt>                    <created>1449257160</created>          <gmt_created>2015-12-04 19:26:00</gmt_created>          <changed>1475895218</changed>          <gmt_changed>2016-10-08 02:53:38</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="2835"><![CDATA[ai]]></keyword>          <keyword tid="2556"><![CDATA[artificial intelligence]]></keyword>          <keyword tid="112431"><![CDATA[ashok goel]]></keyword>          <keyword tid="147691"><![CDATA[IBM Watson]]></keyword>          <keyword tid="12208"><![CDATA[watson]]></keyword>      </keywords>  <core_research_areas>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="456791">  <title><![CDATA[Georgia 
Tech alumni win Gold at Bio-Engineering Olympics]]></title>  <uid>27592</uid>  <body><![CDATA[<p class="p1">Blacki Migliozzi, MS HCI 12, was recently part of a team that won a top prize at the 2015 International Genetically Engineered Machines Competition (<a href="http://2015.igem.org/Giant_Jamboree" target="_blank">iGEM</a>)&nbsp;in Boston, Mass., Sept. 24-28. His team, Genspace, brought home a gold medal for synthetic biology work on two devices and creating 11 new BioBrick&nbsp;parts,&nbsp;standardized DNA&nbsp;building blocks used to design and assemble synthetic biological circuits. The team, which included fellow alumna&nbsp;Christal&nbsp;Gordon,&nbsp;MS,&nbsp;PhD&nbsp;EE,&nbsp;also won an award for best community lab&nbsp;for the work centered around the&nbsp;<a href="http://2015.igem.org/Team:Genspace" target="_blank">Gowanus Canal</a>.&nbsp;</p><p class="p2">On his path to iGEM, Migliozzi studied&nbsp;human-computer interaction at Georgia Tech&nbsp;and was often found working with various research groups on campus to learn about&nbsp;biology-related work.
He admits his HCI graduate thesis was a bit unconventional, being centered around bio-hobbyists growing mushrooms.&nbsp;</p><p class="p2">Researchers who had some of the biggest impact on the alum were in the Digital Media program - advisor Carl DiSalvo and Andrew Quitmeyer among them - and they encouraged him to&nbsp;explore his research connecting biology and technology.&nbsp;Migliozzi&nbsp;managed to start a DIY bio-lab in a corner of the Technology Square Research Building - not normally a building for fauna&nbsp;and petri dishes - and for a short time he commandeered the Digital Media program’s refrigerator&nbsp;as a research&nbsp;compost bin.</p><p class="p2">“I was very lucky to work with and learn from several groups around campus, namely&nbsp;<a href="http://www.arkfab.org/" target="_blank">ArkFab</a>, the&nbsp;<a href="http://www.astrobiology.gatech.edu/" target="_blank">Astrobiology group</a>&nbsp;and within Tucker Balch's&nbsp;<a href="http://www.bio-tracking.org/" target="_blank">Bio-Tracking project</a>,” says Migliozzi, now a&nbsp;data visualization developer for Bloomberg News in NYC. “Those years were formative for me in my continued love of&nbsp;biology.”&nbsp;</p><p class="p2">He says that the iGEM award is one of the biggest accomplishments of his life and gives credit to his experience at Georgia Tech.&nbsp;</p><p class="p2">“I hope both the HCI and Digital Media programs continue to be as interdisciplinary as possible,” he says. “I encourage students to seek out research across campus and I hope other&nbsp;departments invite these students in with open arms.”</p><p class="p2">“Students like Blacki are wonderful — they challenge us to grow and learn.
Blacki made an amazing contribution to the culture of the Public Design Workshop and I could not be more delighted by his successes,” says his former advisor Carl&nbsp;DiSalvo.</p><p class="p1">Among the&nbsp;280&nbsp;teams and 2,700 participants at&nbsp;iGEM&nbsp;2015, Georgia Tech also had a team, which earned a bronze medal for its&nbsp;<a href="http://2015.igem.org/Team:GeorgiaTech" target="_blank">project</a>.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1444218948</created>  <gmt_created>2015-10-07 11:55:48</gmt_created>  <changed>1475896783</changed>  <gmt_changed>2016-10-08 03:19:43</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Blacki Migliozzi, MS HCI 12, and Christal Gordon, MS, PhD EE, were recently part of a team that won a top prize at the 2015 International Genetically Engineered Machines Competition (iGEM) in Boston, Mass., Sept. 24-28.]]></teaser>  <type>news</type>  <sentence><![CDATA[Blacki Migliozzi, MS HCI 12, and Christal Gordon, MS, PhD EE, were recently part of a team that won a top prize at the 2015 International Genetically Engineered Machines Competition (iGEM) in Boston, Mass., Sept. 24-28.]]></sentence>  <summary><![CDATA[<p>Blacki Migliozzi, MS HCI 12, and Christal Gordon, MS, PhD EE,&nbsp;were recently part of a team that won a top prize at the 2015 International Genetically Engineered Machines Competition (<a href="http://2015.igem.org/Giant_Jamboree" target="_blank">iGEM</a>)&nbsp;in Boston, Mass., Sept.
24-28.</p>]]></summary>  <dateline>2015-10-07T00:00:00-04:00</dateline>  <iso_dateline>2015-10-07T00:00:00-04:00</iso_dateline>  <gmt_dateline>2015-10-07 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[gvu@gatech.edu]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>456771</item>          <item>456781</item>      </media>  <hg_media>          <item>          <nid>456771</nid>          <type>image</type>          <title><![CDATA[Blacki Migliozzi]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[igem_-_blacki_migliozzi_ms_hci_12_single.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/igem_-_blacki_migliozzi_ms_hci_12_single_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/igem_-_blacki_migliozzi_ms_hci_12_single_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/igem_-_blacki_migliozzi_ms_hci_12_single_0.jpg?itok=pkrhYaZa]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Blacki Migliozzi]]></image_alt>                    <created>1449256334</created>          <gmt_created>2015-12-04 19:12:14</gmt_created>          <changed>1475895202</changed>          <gmt_changed>2016-10-08 02:53:22</gmt_changed>      </item>          <item>          <nid>456781</nid>          <type>image</type>          <title><![CDATA[Christal Gordon]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[igem_-_christal_gordon_ms_phd_ee_single.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/igem_-_christal_gordon_ms_phd_ee_single_0.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/igem_-_christal_gordon_ms_phd_ee_single_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/igem_-_christal_gordon_ms_phd_ee_single_0.jpg?itok=cBg-f_9t]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Christal Gordon]]></image_alt>                    <created>1449256334</created>          <gmt_created>2015-12-04 19:12:14</gmt_created>          <changed>1475895202</changed>          <gmt_changed>2016-10-08 02:53:22</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="454461">  <title><![CDATA[Bad Design Atlanta Contest sponsored by Georgia Tech student group]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Have you ever seen something around the city or on campus that is poorly designed to the point of absurdity? Have you dealt with impossible public transit signs or maybe your favorite coffee bistro has poorly designed chairs? Got an idea to improve things?&nbsp;The Georgia Tech chapter of the Human Factors and Ergonomics Society, sponsor of Bad Design Atlanta, is seeking submissions that&nbsp;bring some attention to designs that need a little rethinking.&nbsp;Submit an entry for your chance to win a cash prize (Deadline: Nov. 6)</p><p>Contest is open to any student or group of students at Georgia Tech. 
Examples of inspired bad design are at: <a href="http://www.baddesigns.com/examples.html" title="http://www.baddesigns.com/examples.html">http://www.baddesigns.com/examples.html</a></p><p><br /><strong>BAD DESIGN ATLANTA CONTEST PRIZES AND RULES:</strong></p><p>1st place: $75; 2nd place: $50; 3rd place: $25</p><p><strong>Submissions should include:</strong>&nbsp;</p><p>- Title of submission<br />- Name, email, area of study<br />- Problem description and Proposed Solution (no more than one page each)<br />- PDF or Word document (2-page and 500-word maximum); any pictures should fit on the 2-page document</p><p>- One submission per person/group<br />- Submissions are due on Friday, Nov. 6, 2015 at 5pm ET</p><p><em>Email submissions to: hfes@gatech.edu</em></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1443705506</created>  <gmt_created>2015-10-01 13:18:26</gmt_created>  <changed>1475896780</changed>  <gmt_changed>2016-10-08 03:19:40</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The Bad Design Atlanta Contest, sponsored by the Georgia Tech chapter of the Human Factors and Ergonomics Society, is a chance to bring some attention to designs that need a little rethinking. The top three entries receive cash prizes. (Deadline: Nov. 6)]]></teaser>  <type>news</type>  <sentence><![CDATA[The Bad Design Atlanta Contest, sponsored by the Georgia Tech chapter of the Human Factors and Ergonomics Society, is a chance to bring some attention to designs that need a little rethinking. The top three entries receive cash prizes. (Deadline: Nov. 6)]]></sentence>  <summary><![CDATA[<p>The Bad Design Atlanta Contest, sponsored by the&nbsp;Georgia Tech chapter of the Human Factors and Ergonomics Society,&nbsp;is a chance to bring some attention to designs that need a little rethinking. Submit an entry for a chance to win a cash prize (Deadline: Nov.
6)</p>]]></summary>  <dateline>2015-10-01T00:00:00-04:00</dateline>  <iso_dateline>2015-10-01T00:00:00-04:00</iso_dateline>  <gmt_dateline>2015-10-01 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:darwei.chen@gatech.edu">Dar-Wei Chen</a> <br /> <a href="http://hfes.gatech.edu/" target="_blank">Georgia Tech chapter of the Human Factors and Ergonomics Society</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>      </media>  <hg_media>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="452221">  <title><![CDATA[‘On You’ Wearable Computing Exhibit draws over 30,000 attendees, closes with alumni receptions]]></title>  <uid>27592</uid>  <body><![CDATA[<p>From the ancient abacus to supercomputers, the Computer History Museum in Mountain View, Calif., offers an array of&nbsp;rare artifacts and milestones from 2,000 years of “computing” history. This summer at a special exhibition, a new breed of computer drew more than 30,000 visitors and showed how computing has become synonymous with daily life.&nbsp;</p><p>Georgia Tech’s “On You: A Story of Wearable Computing” exhibit curated more than 60 gadgets chronicling the history of making on-body technology a reality. 
The exhibit showed devices that have been envisioned for consumers and professionals, and by “makers.” It highlighted four major challenges to a consumer wearable computer - power and heat, networking, mobile input, and displays - and the product categories that have resulted.</p><p>More than 100 alumni, family members and students gathered for the exhibit’s closing reception&nbsp;on Sept. 19, many from the College of Computing, including OMSCS students in the Bay Area.</p><p><a href="http://gvu.gatech.edu/you-wearable-computing-exhibit-alumni-receptions">Read More</a></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1443175348</created>  <gmt_created>2015-09-25 10:02:28</gmt_created>  <changed>1475896776</changed>  <gmt_changed>2016-10-08 03:19:36</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech’s “On You: A Story of Wearable Computing” exhibit at the Computer History Museum curated more than 60 gadgets chronicling the history of making on-body technology a reality.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech’s “On You: A Story of Wearable Computing” exhibit at the Computer History Museum curated more than 60 gadgets chronicling the history of making on-body technology a reality.]]></sentence>  <summary><![CDATA[<p>Georgia Tech’s “On You: A Story of Wearable Computing” exhibit at the Computer History Museum curated more than 60 gadgets chronicling the history of making on-body technology a reality.&nbsp;</p><p>More than 100 alumni, family members and students gathered for the exhibit’s closing reception&nbsp;on Sept.
19, many from the College of Computing, including OMSCS students in the Bay Area.</p>]]></summary>  <dateline>2015-09-23T00:00:00-04:00</dateline>  <iso_dateline>2015-09-23T00:00:00-04:00</iso_dateline>  <gmt_dateline>2015-09-23 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br /> GVU Center, College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>416531</item>      </media>  <hg_media>          <item>          <nid>416531</nid>          <type>image</type>          <title><![CDATA[Thad Starner]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[thad_starner_2.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/thad_starner_2_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/thad_starner_2_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/thad_starner_2_0.jpg?itok=rt_37qiZ]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Thad Starner]]></image_alt>                    <created>1449254258</created>          <gmt_created>2015-12-04 18:37:38</gmt_created>          <changed>1475895155</changed>          <gmt_changed>2016-10-08 02:52:35</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>          <keyword tid="9873"><![CDATA[clint zeagler]]></keyword>          <keyword tid="132111"><![CDATA[Computer History 
Museum]]></keyword>          <keyword tid="1944"><![CDATA[Thad Starner]]></keyword>          <keyword tid="10353"><![CDATA[wearable computing]]></keyword>          <keyword tid="115211"><![CDATA[wearable tech]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="433811">  <title><![CDATA[GT computing education experts present work on learning methods, broadening diversity and more]]></title>  <uid>27592</uid>  <body><![CDATA[<p class="p1">The ACM International Computing Education Research Conference, ICER 2015, and the first IEEE Broadening Participation in Computing Research Conference, RESPECT 2015, take place this week and include new research by Georgia Tech faculty and graduate students from three colleges, including computing, architecture, and liberal arts.</p><p class="p1">ICER - dedicated to the study of how people understand computational processes and devices - takes place in Omaha, Nebr., Aug. 9-13. RESPECT - focused on improving diversity in the computer science education community - follows immediately, Aug. 13-14, in Charlottesville, Va.</p><p class="p1">Mark Guzdial, professor of Interactive Computing, is co-chair of the Doctoral Consortium at ICER 2015, which set a record for most participants in a computing education doctoral consortium anywhere in the world with 20 Ph.D. students, including Georgia Tech’s Barbara Ericson, Briana Morrison, and Miranda Parker.
Students also represented countries such as Chile, Germany and the United Kingdom.&nbsp;</p><p class="p1">&nbsp;</p><h3 class="p3">Presenting at&nbsp;<a href="http://icer.hosting.acm.org/" target="_blank">ICER</a>:&nbsp;</h3><p class="p1"><em>Papers:</em></p><p class="p1"><strong>Subgoals, Context, and Worked Examples in Learning Computing Problem Solving&nbsp;</strong>(1 of 2 Best Papers)</p><p class="p1">Briana Morrison (Georgia Tech), Lauren Margulieux (Georgia Tech) and Mark Guzdial (Georgia Tech)</p><p>&nbsp;</p><p class="p2"><strong>Analysis of Interactive Features Designed to Enhance Learning in an Ebook</strong></p><p>Barbara Ericson (Georgia Tech), Mark Guzdial (Georgia Tech) and Briana Morrison (Georgia Tech)</p><p class="p1">&nbsp;</p><p class="p1"><em>Lightning Talks and Posters:</em></p><p class="p1"><strong>The MoveLab: Supporting Diversity through Self-Conceptions</strong></p><p class="p2">Kayla DesPortes<em>&nbsp;(Georgia Tech)</em></p><p class="p1">&nbsp;</p><p class="p1"><em>Doctoral Consortium:&nbsp;</em></p><p class="p1"><strong>Adaptive Parsons Problems with Discourse Rules</strong></p><p class="p2">Barbara Ericson, Ph.D. HCC student; Director of Computing Outreach, ICE</p><p class="p2">&nbsp;</p><p class="p2"><strong>Computer Science Is Different!</strong></p><p class="p1">Briana B. Morrison, Ph.D. HCC student</p><p class="p1">&nbsp;</p><p class="p1"><strong>Privilege and Computer Science Education: How Can we Level the Playing Field?</strong></p><p class="p1">Miranda Parker, Ph.D.
HCC student</p><p class="p1">&nbsp;</p><h3 class="p3">Presenting at&nbsp;<a href="http://respect2015.stcbp.org/" target="_blank">RESPECT</a>:&nbsp;</h3><p class="p1"><em>Papers:&nbsp;</em></p><p class="p1"><strong>Helping African American Students Pass Advanced Placement Computer Science: A Tale of Two States&nbsp;</strong>(1 of 4 "Exemplary" papers)</p><p class="p2">Barbara Ericson and Tom McKlin</p><p class="p1">&nbsp;</p><p class="p2"><strong>A critical research synthesis of privilege in computing education&nbsp;</strong></p><p class="p2">Miranda Parker and Mark&nbsp;Guzdial (short paper)</p><p class="p1">&nbsp;</p><p class="p1"><em>Fireside Chat:&nbsp;</em></p><p class="p2"><strong>Broadening Participation in Computing&nbsp;</strong></p><p class="p2">Mark Guzdial</p><p class="p1">&nbsp;</p><p class="p1"><em>Lightning Talk:</em></p><p class="p1"><strong>EarSketch: a STEAM&nbsp;approach to broadening participation in Computer Science Principles</strong></p><p class="p2">Jason Freeman, Brian Magerko, Doug Edwards, Roxanne Moore, Tom McKlin and Anna Xambó.&nbsp;</p><p class="p1">&nbsp;</p><p class="p2"><strong>Exploring African-American Middle School Girls’ Perceptions of Themselves as Computational&nbsp;Algorithmic Thinkers and Game Designers Through Reality Confessionals</strong></p><p class="p1">Jakita Thomas, PhD CS 2006</p><p class="p1">&nbsp;</p><p class="p2"><strong>It’s All In The Mix: Leveraging food to increase African-American women’s&nbsp;persistence in Computer Science</strong></p><p class="p1">Jakita Thomas and Yolanda Rankin</p><p class="p1">&nbsp;</p><p class="p1">For more details about research from the Contextualized Support for Learning group:</p><p class="p2"><a href="https://computinged.wordpress.com/2015/08/07/icer-2015-preview-subgoal-labeling-works-for-text-too/" target="_blank">ICER research</a></p><p class="p2"><a href="https://computinged.wordpress.com/2015/08/10/respect-2015-preview-the-role-of-privilege-in-cs-education/"
target="_blank">RESPECT research</a></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1439386142</created>  <gmt_created>2015-08-12 13:29:02</gmt_created>  <changed>1475896762</changed>  <gmt_changed>2016-10-08 03:19:22</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[ACM ICER 2015 and the IEEE RESPECT 2015 conferences take place this week and include new research by Georgia Tech faculty and graduate students in computing education.]]></teaser>  <type>news</type>  <sentence><![CDATA[ACM ICER 2015 and the IEEE RESPECT 2015 conferences take place this week and include new research by Georgia Tech faculty and graduate students in computing education.]]></sentence>  <summary><![CDATA[<p>The ACM International Computing Education Research Conference, ICER 2015, and the first IEEE Broadening Participation in Computing Research Conference, RESPECT 2015, take place this week and include new research by Georgia Tech faculty and graduate students from three colleges, including computing, architecture, and liberal arts.</p>]]></summary>  <dateline>2015-08-12T00:00:00-04:00</dateline>  <iso_dateline>2015-08-12T00:00:00-04:00</iso_dateline>  <gmt_dateline>2015-08-12 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Joshua Preston</p><p>GVU Center, College of Computing</p><p>678.231.0787</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>433821</item>      </media>  <hg_media>          <item>          <nid>433821</nid>          <type>image</type>          <title><![CDATA[Georgia Tech @ ICER 2015 alumni, faculty and students]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[]]></image_name>            <image_path><![CDATA[]]></image_path>            
<image_full_path><![CDATA[]]></image_full_path>            <image_740><![CDATA[]]></image_740>            <image_mime></image_mime>            <image_alt><![CDATA[]]></image_alt>                    <created>1449256148</created>          <gmt_created>2015-12-04 19:09:08</gmt_created>          <changed>1475895171</changed>          <gmt_changed>2016-10-08 02:52:51</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="418041">  <title><![CDATA[Georgia Tech Researchers Train Computer to Create Games by Watching YouTube]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Georgia Institute of Technology researchers have developed a computing system that views gameplay video from streaming services like YouTube or Twitch, analyzes the footage and then is able to create original new sections of a game.</p><p>The team tested their discovery, the first of its kind, with the original Super Mario Brothers, a well-known two-dimensional platformer game that will allow the new automatic-level designer to replicate results across similar games.</p><p>The system focuses on the gaming terrain (not the playable character) and the positioning between elements on-screen – be it pipes, blocks, coins or Goombas – and it determines the required relationship or level design rule. For example, pipes in the Mario games tend to stick out of the ground, so the system learns this and prevents any pipes from being flush with grassy surfaces. It also prevents “breaks” by using spatial analysis – e.g. 
no impossibly long jumps for the hero.</p><p>“An initial evaluation of our approach indicates an ability to produce level sections that are both playable and close to the original without hand coding any design criteria,” says Matthew Guzdial, lead author and Ph.D. student in Computer Science at Georgia Tech.</p><p>Key to the process is watching the players in action to see where they actually spend most of their time in the game. After recording on-screen locations of sprites, Georgia Tech’s algorithms determine what are high-interaction areas – those spots where players spend more time to collect bonus items or master a challenge. The automatic-level designer specifically targets these areas to gain design information. The system is then able to build a new level section, element by element.</p><p>“Our system creates a model or template, and it’s able to produce level sections that have never been seen before, do not appear random and can be traversed by the player,” says Mark Riedl, the study's principal investigator and associate professor of Interactive Computing. “One could say that the system ‘studies’ the design of Mario levels until it is able to create new playable areas.”</p><p>The Georgia Tech system output 151 distinct level sections from 17 samples in the original game, controlling for overall playability and style variables. Output increased to 334 level sections as the system lessened the constraints. The new levels can be played easily by porting them into a game engine.</p><p>Riedl says this is the first time he is aware of a gameplay video being used to design levels for a Mario game. By applying the technique across a number of different platformer games, a system can theoretically learn genre knowledge, which can be beneficial for procedurally creating games of a given genre. The technique may also extend to other game genres beyond platformers.
The researchers next plan to develop full-scale levels and evaluate how gamers interact in those levels compared to the original gameplay videos.</p><p>The research, “Toward Game Level Generation from Gameplay Videos,” is featured June 22-25 at the Foundations of Digital Games Conference in Pacific Grove, Calif.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1435146073</created>  <gmt_created>2015-06-24 11:41:13</gmt_created>  <changed>1475896725</changed>  <gmt_changed>2016-10-08 03:18:45</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Institute of Technology researchers have developed a computing system that views gameplay video from streaming services like YouTube or Twitch, analyzes the footage and then is able to create original new sections of a game.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Institute of Technology researchers have developed a computing system that views gameplay video from streaming services like YouTube or Twitch, analyzes the footage and then is able to create original new sections of a game.]]></sentence>  <summary><![CDATA[<p>Georgia Institute of Technology researchers have developed a computing system that views gameplay video from streaming services like YouTube or Twitch, analyzes the footage and then is able to create original new sections of a game.</p><p>The team tested their discovery, the first of its kind, with the original Super Mario Brothers, a well-known two-dimensional platformer game that will allow the new automatic-level designer to replicate results across similar games.</p>]]></summary>  <dateline>2015-06-24T00:00:00-04:00</dateline>  <iso_dateline>2015-06-24T00:00:00-04:00</iso_dateline>  <gmt_dateline>2015-06-24 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a 
href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a> <br />GVU Center, College of Computing <br />678.231.0787</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>418061</item>          <item>418101</item>          <item>50384</item>      </media>  <hg_media>          <item>          <nid>418061</nid>          <type>image</type>          <title><![CDATA[Automatic game level generator]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[generated_levels_square_set.png]]></image_name>            <image_path><![CDATA[/sites/default/files/images/generated_levels_square_set_0.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/generated_levels_square_set_0.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/generated_levels_square_set_0.png?itok=5arV5HCN]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Automatic game level generator]]></image_alt>                    <created>1449254269</created>          <gmt_created>2015-12-04 18:37:49</gmt_created>          <changed>1475895155</changed>          <gmt_changed>2016-10-08 02:52:35</gmt_changed>      </item>          <item>          <nid>418101</nid>          <type>image</type>          <title><![CDATA[Automatic Game Level Generator]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[overworldgif.gif]]></image_name>            <image_path><![CDATA[/sites/default/files/images/overworldgif_0.gif]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/overworldgif_0.gif]]></image_full_path>            
<image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/overworldgif_0.gif?itok=PrF80pgt]]></image_740>            <image_mime>image/gif</image_mime>            <image_alt><![CDATA[Automatic Game Level Generator]]></image_alt>                    <created>1449254269</created>          <gmt_created>2015-12-04 18:37:49</gmt_created>          <changed>1475895155</changed>          <gmt_changed>2016-10-08 02:52:35</gmt_changed>      </item>          <item>          <nid>50384</nid>          <type>image</type>          <title><![CDATA[Mark Riedl]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[mark-riedl.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/mark-riedl_1.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/mark-riedl_1.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/mark-riedl_1.jpg?itok=kutFsbAb]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Mark Riedl]]></image_alt>                    <created>1449175392</created>          <gmt_created>2015-12-03 20:43:12</gmt_created>          <changed>1475894458</changed>          <gmt_changed>2016-10-08 02:40:58</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>          <topic tid="71901"><![CDATA[Society and Culture]]></topic>      </news_room_topics>  <files></files>  
<related></related>  <userdata><![CDATA[]]></userdata></node><node id="396211">  <title><![CDATA[Research finds adolescents’ time online doubles, hyperlocal social media emerges]]></title>  <uid>27592</uid>  <body><![CDATA[<p>A <a href="http://www.chi.gatech.edu/2015/young-people-online/" target="_blank">four-year study</a> of adolescents’ use of technology shows that the average amount of time spent online daily by 10- to 14-year-olds jumped from 3.5 hours to more than eight during the study period of 2010-2013. Georgia Institute of Technology researchers say adolescents’ identities are being shaped through continuous online social activities – a phenomenon arising from the growth of mobile devices. The research also reveals that adolescents no longer distinguish between time online and offline, and it sheds light on how they deal with social pressure, identity, privacy and risky behavior online.</p><p>The study, one of the first of its kind to focus on low-income, middle school-aged students from a concentrated geographical area, sought to better understand their motivations and behaviors around online social practices. Results came from survey responses and focus groups with 179 participants in three middle schools with high minority populations. Demographic representation was approximately 65 percent African American, 18 percent Asian, 9 percent Caucasian, and 8 percent Hispanic.</p><p>Social media use showed high levels of experimentation and rapid adoption of certain platforms in specific social contexts. During the four-year period, children’s social media habits became more opaque and nuanced through apps that allow private, anonymous sharing. The&nbsp;participants adopted new Facebook strategies when dealing with different social circles; some posted less for family to view or made second accounts only for friends. 
In general, video-based communication saw a significant rise in 2012 with 61 percent of participants using Oovoo, a video chat and instant messaging platform. Only Facebook and YouTube outpaced its use.</p><p>Harmful and risky behaviors, such as eating disorders and sexting, came up in the focus groups. One of the most alarming behaviors, according to researchers, was the use of websites or communities that promoted restrictive eating habits.</p><p>“With the rise of new social platforms that bring new capabilities, such as Snapchat and hyperlocal platforms, the potential for negative exploitation is real and already being observed within this population,” says Jessica Pater, lead researcher and Ph.D. student in Human-Centered Computing.</p><p>In one cyberbullying incident, multiple platforms including Kik (for mobile instant messaging) and Keek (video-based social networking) were used to organize discussion around and single out a bully, who was using fake profiles to harass a classmate. 
Kids didn’t think of the online social tension as cyberbullying, but as rude behavior that is simply part of life.</p><p>“The social app use we found in this population exemplifies how platforms can become truly engrained in the fabric of technology use within a group of users in a short period of time,” says Pater.</p><p>Researchers believe their approach can be replicated for understanding large-scale trends in social media in other populations, and that it could be critical for identifying opportunities for design and research on the platforms.</p><p>The paper, “This Digital Life: A Neighborhood-Based Study of Adolescents’ Lives Online,” will be presented at the ACM Conference on Human Factors in Computing Systems (CHI 2015) in Seoul, South Korea, April 18-23.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1429033748</created>  <gmt_created>2015-04-14 17:49:08</gmt_created>  <changed>1475896678</changed>  <gmt_changed>2016-10-08 03:17:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A four-year study of adolescents’ use of technology shows that the average amount of time spent online daily by 10- to 14-year-olds jumped from 3.5 hours to more than eight during the study period of 2010-2013.]]></teaser>  <type>news</type>  <sentence><![CDATA[A four-year study of adolescents’ use of technology shows that the average amount of time spent online daily by 10- to 14-year-olds jumped from 3.5 hours to more than eight during the study period of 2010-2013.]]></sentence>  <summary><![CDATA[<p>A&nbsp;<a href="http://www.chi.gatech.edu/2015/young-people-online/" target="_blank">four-year study</a>&nbsp;of adolescents’ use of technology shows that the average amount of time spent online daily by 10- to 14-year-olds jumped from 3.5 hours to more than eight during the study period of 2010-2013. 
Georgia Tech researchers say adolescents’ identities are being shaped through continuous online social activities – a phenomenon arising from the growth of mobile devices. The research also reveals that adolescents no longer distinguish between time online and offline, and it sheds light on how they deal with social pressure, identity, privacy and risky behavior online.</p>]]></summary>  <dateline>2015-04-14T00:00:00-04:00</dateline>  <iso_dateline>2015-04-14T00:00:00-04:00</iso_dateline>  <gmt_dateline>2015-04-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />678.231.0787</p><p>GVU Center<br /> College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>396221</item>      </media>  <hg_media>          <item>          <nid>396221</nid>          <type>image</type>          <title><![CDATA[http://chi.gatech.edu/2015/young-people-online]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[young_people_online_viz_large_thumbnail.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/young_people_online_viz_large_thumbnail.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/young_people_online_viz_large_thumbnail.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/young_people_online_viz_large_thumbnail.jpg?itok=X0kSWFD9]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[http://chi.gatech.edu/2015/young-people-online]]></image_alt>                    <created>1449246361</created>          <gmt_created>2015-12-04 16:26:01</gmt_created>          
<changed>1475895112</changed>          <gmt_changed>2016-10-08 02:51:52</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="396701">  <title><![CDATA[Research identifies barriers to tracking meals and what foodies want]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Eating healthy is sometimes a challenge on its own, so technology should ease that burden – not increase it – according to new research from the Georgia Institute of Technology and University of Washington. Researchers studied how mobile-based food journals integrate into everyday life and specific challenges when using food journaling technology. Their research suggests how future designs might make it easier and more effective.</p><p>The research study uncovered three problem areas: barriers to reliable food entry, negative nudges in current food journal apps and challenges in social features. The findings resulted from data collected in a survey of 141 current and former food loggers as well as analysis of 5,526 public posts on the community forums of mobile-based MyFitnessPal, FatSecret and CalorieCount.</p><p>“Community contributions to the databases allow journalers to publish nutritional entries themselves and create a diverse food base from which to pick, but it also raises concerns about reliability,” says Edison Thomaz, a researcher on the study and Ph.D. candidate in Human-Centered Computing at Georgia Tech.</p><p>Some users said logging meals took too much effort and was time consuming. 
They sometimes loosely followed recipes or only ate partial portion sizes, making it difficult to log meals. Another issue was that food databases contained inaccuracies, common foods that were missing, or had multiple listings for a single food because of user-generated listings.</p><p>Researchers found that not all foods are created equal when it comes to logging them. On a seven-point Likert scale, packaged foods and fast food were a breeze to log (6.5 and 6.3 mean scores), while counting up finger foods at a friend’s house or party took dedication (3.2 and 2.9 mean scores).</p><p>This made the mobile journals themselves less effective, with some participants straying from their goals or eating the same thing every day to ease the logging ritual. As one respondent put it, it was easier to “scan a code on some processed stuff and be done with it.”</p><p>Participants also wanted to develop social connections around food goals. Encouragement of goal attainment and mutual support helped strengthen journaling habits. Conversely, when people received no comments, had online friends stop journaling, or had comparatively less progress than others, it negatively impacted their food-tracking goals.</p><p>The findings led to several recommendations, including one for designing goal-specific systems.</p><p>“Food journals are an important method for tracking food consumption and can support a variety of goals, including weight loss, healthier food choices, detecting deficiencies, identifying allergies and determining foods that trigger other symptoms,” says James Fogarty, a researcher on the study and associate professor of Computer Science &amp; Engineering at the University of Washington.</p><p>&nbsp;“Instead of attempting to capture the elusive ‘everything,’ the results suggest creating a diversity of journal designs to support specific goals,” says Fogarty.</p><p>Reputation systems were suggested to allow users to filter for specific needs (e.g. 
tracking sodium intake) or vote on accuracy of entries. Also a priority: streamlining databases with similar foods and providing context for food entry, such as indicating restaurant items or vegan meals.</p><p>The results have also led to separate research by team members to implement new journaling solutions. Georgia Tech researchers are testing the feasibility of using a mobile device’s built-in microphone to capture ambient sounds related to eating that, when recognized by the mobile device, nudge users to log their food. Washington researchers are using photo-based journaling to augment or replace methods focused on detailed nutritional input in an attempt to remove or reduce barriers to journaling.</p><p>The research paper “Barriers and Negative Nudges: Exploring Challenges in Food Journaling” will be presented at the ACM Conference on Human Factors in Computing Systems (CHI 2015) in Seoul, South Korea, April 18-23. The work is funded in part by the Intel Science and Technology Center for Pervasive Computing, the National Science Foundation (Awards OAI-1028195 and SCH-1344613) and the National Institutes of Health (Award 1U54EB020404-01). 
<em>Any conclusions or opinions are those of the authors and do not necessarily represent the official views of the sponsoring agencies.</em></p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1429107208</created>  <gmt_created>2015-04-15 14:13:28</gmt_created>  <changed>1475896678</changed>  <gmt_changed>2016-10-08 03:17:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Eating healthy is sometimes a challenge on its own, so technology should ease that burden – not increase it – according to new research from the Georgia Institute of Technology and University of Washington.]]></teaser>  <type>news</type>  <sentence><![CDATA[Eating healthy is sometimes a challenge on its own, so technology should ease that burden – not increase it – according to new research from the Georgia Institute of Technology and University of Washington.]]></sentence>  <summary><![CDATA[<p>Eating healthy is sometimes a challenge on its own, so technology should ease that burden – not increase it – according to new research from the Georgia Institute of Technology and University of Washington. Researchers studied how mobile-based food journals integrate into everyday life and specific challenges when using food journaling technology. 
Their research suggests how future designs might make it easier and more effective.</p>]]></summary>  <dateline>2015-04-16T00:00:00-04:00</dateline>  <iso_dateline>2015-04-16T00:00:00-04:00</iso_dateline>  <gmt_dateline>2015-04-16 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a><br />678-231-0787</p><p>GVU Center<br />College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>396721</item>          <item>396711</item>      </media>  <hg_media>          <item>          <nid>396721</nid>          <type>image</type>          <title><![CDATA[Edison Thomaz]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[edison_thomaz_chi2015.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/edison_thomaz_chi2015.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/edison_thomaz_chi2015.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/edison_thomaz_chi2015.jpg?itok=NPGVIqDi]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Edison Thomaz]]></image_alt>                    <created>1449246361</created>          <gmt_created>2015-12-04 16:26:01</gmt_created>          <changed>1475895112</changed>          <gmt_changed>2016-10-08 02:51:52</gmt_changed>      </item>          <item>          <nid>396711</nid>          <type>image</type>          <title><![CDATA[Gregory Abowd]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[grregory_abowd_chi2015.jpg]]></image_name>            
<image_path><![CDATA[/sites/default/files/images/grregory_abowd_chi2015.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/grregory_abowd_chi2015.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/grregory_abowd_chi2015.jpg?itok=66Dd3q-2]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Gregory Abowd]]></image_alt>                    <created>1449246361</created>          <gmt_created>2015-12-04 16:26:01</gmt_created>          <changed>1475895112</changed>          <gmt_changed>2016-10-08 02:51:52</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>      </news_terms>  <keywords>          <keyword tid="124021"><![CDATA[counting calories]]></keyword>          <keyword tid="123991"><![CDATA[food journaling]]></keyword>          <keyword tid="123981"><![CDATA[food logging]]></keyword>          <keyword tid="124011"><![CDATA[quantified self]]></keyword>          <keyword tid="124001"><![CDATA[quants]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="365591">  <title><![CDATA[People-Focused Computing Research in 2014 Took Users in New Directions]]></title>  <uid>27592</uid>  <body><![CDATA[<p>Computing technology research takes on many forms in the GVU Center, whether it's deciphering 
the social media stratosphere, putting Atlanta's wider public transit information at your fingertips, reimagining digital storytelling, improving sustainable urban farms, or a score of other high-concept applications and prototypes that are advancing how technology impacts our lives.</p><p>In 2014, our researchers broke new ground on how to get the most out of technology interactions. This snapshot of our community of researchers shows a small sample of computing possibilities becoming reality through the collaborative and dynamic environments at Georgia Tech and the GVU Center.&nbsp;</p><p>&nbsp;</p><p></p><p></p><h3><strong>Kickstarter phrases that pay (and don't)</strong></h3><p>Researchers at Georgia Tech studying the burgeoning phenomenon of crowdfunding have learned that the language used in online fundraising holds surprising predictive power over the success of such campaigns.&nbsp;As part of their study of more than 45,000 projects on Kickstarter, Assistant Professor&nbsp;<strong>Eric Gilbert</strong>&nbsp;and Computer Science doctoral candidate&nbsp;<strong>Tanushree Mitra</strong>&nbsp;reveal dozens of phrases that pay and a few dozen more that may signal the likely failure of a crowd-sourced effort.</p><p><a href="http://www.gvu.gatech.edu/news/georgia-tech-researchers-reveal-phrases-pay-kickstarter" target="_blank"><strong>Read More</strong></a></p><p>&nbsp;</p><p>&nbsp;</p><p></p><p>&nbsp;</p><p>&nbsp;</p><p><a href="http://www.cc.gatech.edu/~gte115v/wip0483-fieslerSC.pdf" target="_blank"></a></p><h3><strong>Do you read terms of service? Maybe you should.&nbsp;</strong></h3><p>A key usability problem for websites is the complexity of their terms and conditions. Within the HCI community, attention to this issue to date has primarily focused on privacy policies. 
Human-Centered Computing&nbsp;doctoral candidate&nbsp;<strong>Casey Fiesler</strong>&nbsp;and Professor&nbsp;<strong>Amy Bruckman</strong>&nbsp;begin to build on this work, extending it to copyright terms. With so many people posting everything from status updates to digital art online, intellectual property rights are increasingly important to the end user. The researchers conducted a content analysis of 30 different websites where users can share creative work, focusing on the licenses and usage rights that users grant to those websites.</p><p><a href="http://www.gvu.gatech.edu/news/do-you-read-terms-service-maybe-you-should-0" target="_blank"><strong>Read More</strong></a></p><p>&nbsp;</p><p>&nbsp;</p><p></p><p></p><h3><strong>Integrating real-time information for&nbsp;metro Atlanta public transit</strong></h3><p>The mobile app&nbsp;<a href="http://atlanta.onebusaway.org/" target="_blank">OneBusAway</a>, which tracks public transportation in real time,&nbsp;added&nbsp;arrival times for MARTA trains in 2014 in addition to the MARTA buses and Georgia Tech shuttles already featured in the app. The app&nbsp;also added the&nbsp;new Atlanta Streetcar route (which opened&nbsp;at the end of 2014),&nbsp;continuing to grow its network of&nbsp;real-time&nbsp;transit information.&nbsp;OneBusAway is being integrated into Atlanta’s transit network by Georgia Tech researchers, led by Assistant Professor&nbsp;<strong>Kari Watkins</strong>. The app’s developers&nbsp;plan to add bus data for Georgia Regional Transportation Authority (GRTA) Xpress, Cobb Community Transit (CCT), Gwinnett County Transit, the Atlantic Station shuttle, other local university systems, and other systems equipped with GPS tracking. 
The research&nbsp;has a growing national footprint, with the app in use in other major metropolitan areas, including&nbsp;New York, the Seattle area, and Tampa.</p><p><a href="http://www.news.gatech.edu/2014/03/05/onebusaway-app-now-tracks-marta-trains-real-time" target="_blank"><strong>Read More</strong></a></p><p><a href="https://www.youtube.com/watch?v=0Onob10BwgA" target="_blank"><strong>Video</strong></a></p><p>&nbsp;</p><p>&nbsp;</p><p></p><p><a href="https://www.youtube.com/playlist?list=PLB7jAXT4DsfaX-d25uuj6N2CxnUJAdIxW" target="_blank"></a></p><h3><strong>CHI 2014 - One of a CHInd</strong></h3><p>Georgia Tech researchers delivered an incredible lineup of work in human-computer interaction at CHI 2014 showing the growing complexities in technology use and emerging needs of end users. Researchers talk about their work and the contributions Georgia Tech - a Top 10 institution with accepted research at CHI -&nbsp;is making to the field. Also, CHI 2014 saw the debut of the Georgia Tech-curated wearable computing exhibit "Meeting the Challenge: The Path Towards a Consumer Wearable Computer."</p><p><a href="http://www.chi.gatech.edu/2014/" target="_blank"><strong>Georgia Tech at CHI Website</strong></a></p><p>&nbsp;</p><p>&nbsp;</p><p><strong></strong></p><p><a href="https://www.youtube.com/watch?v=URWYhavIPOk" target="_blank"></a></p><h3><strong>Emerging app-based performance art for shared experiences</strong></h3><p>Choreographer and former ARTech resident artist<strong>&nbsp;Jonah Bokaer</strong>&nbsp;finished the first part of a two-year campus residency at Georgia Tech where he is creating “Applied Movement: App Development for Choreography.” Working with the Ferst Center, he is developing an app called Crowd Codes, a framework consisting of software components that enable groups to participate in a shared movement-based artistic and educational experience by using their mobile phones. 
He has conducted campus workshops and community outreach in addition to the mobile app collaboration, which is designed to explore crowd movement in public spaces on a large scale.</p><p><a href="http://jonahbokaer.net/apps/" target="_blank"><strong>Read More</strong></a></p><p>&nbsp;</p><p>&nbsp;</p><p></p><p></p><h3><strong>Wearables exhibit tour</strong></h3><p>Commercial products for wearable computing technology -&nbsp;Apple Watch and&nbsp;Google Glass being the most high profile -&nbsp;are now being widely announced and becoming a part of the public consciousness. Georgia Tech researchers, led by Professor&nbsp;<strong>Thad Starner</strong>&nbsp;and Research Scientist&nbsp;<strong>Clint Zeagler</strong>,&nbsp;curated a one-of-a-kind collection of wearable technology in 2014 to show the path that the technology has taken through the decades and in different industries. The exhibit - "Meeting the Challenge: The Path Towards a Consumer Wearable Computer" - was shown at&nbsp;several major international&nbsp;venues (starting at&nbsp;<a href="https://www.youtube.com/watch?v=9q7PCy28BvU" target="_blank">CHI 2014</a>)&nbsp;during the summer and fall. In Germany, it was featured&nbsp;at the&nbsp;<a href="http://www.clintzeagler.com/2014/06/16/meeting-berlin-wearable-computing-exhibition-at-the-factory/" target="_blank">Factory Berlin at the Berlin Wall</a>, the&nbsp;<a href="http://www.clintzeagler.com/2014/07/15/meeting-merkel-wearable-exhibition-travels-to-german-cdu-headquarters/" target="_blank">Christian Democratic Union Headquarters</a>, and the&nbsp;<a href="http://www.clintzeagler.com/2014/08/14/meeting-munich-deutsches-museum-exhibition-august-11-2014-september-26-2014/" target="_blank">Deutsches Museum</a>. Next, it made its way to the&nbsp;<a href="http://www.clintzeagler.com/2014/10/11/meeting-tianjin-world-economic-forum/" target="_blank">World Economic Forum</a>&nbsp;in China. 
The exhibit's&nbsp;public debut in the United States is at Georgia Tech this month&nbsp;<a href="http://wcc.gatech.edu/content/opening-reception-january-8th-2015" target="_blank">through Jan. 23</a>.&nbsp;</p><p><a href="http://www.news.gatech.edu/2014/05/29/future-and-history-wearable-computing" target="_blank"><strong>Read More</strong></a></p><p>&nbsp;</p><p>&nbsp;</p><p></p><p><a href="https://www.youtube.com/watch?v=-Dj_jtB368o" target="_blank"></a></p><h3><strong>Data science for social good</strong></h3><p>As part of the Data Science for Social Good internship program, sponsored by Georgia Tech and Oracle, GT&nbsp;students talked&nbsp;with farmers and volunteers over a 10-week period during the summer&nbsp;about&nbsp;crops, planting schedules, harvest requests, visitor demographics and other data crucial to&nbsp;daily operations.&nbsp;Urban agriculture, the students realized, is a complex undertaking. Their challenge was to create a streamlined data management system for the farm. Program Director and Professor&nbsp;<strong>Ellen Zegura</strong>&nbsp;said the program allowed&nbsp;students to solve real-world problems instead of relying on sample data sets, and&nbsp;it&nbsp;educated&nbsp;local non-profits on the need for better data systems.</p><p><a href="http://www.news.gatech.edu/2014/06/30/georgia-tech-uses-data-science-promote-social-good" target="_blank"><strong>Read More</strong></a></p><p>&nbsp;</p><p>&nbsp;</p><p></p><p><a href="http://artnotart.org/farnear/projet/projet.html" target="_blank"></a></p><h3><strong>Bending narratives for new digital&nbsp;experiences</strong></h3><p>Projet is a location-based story using a panoramic visual effect and narration to transport the viewer metaphorically to the French Massif Central. 
Professor&nbsp;<strong>Jay Bolter</strong>&nbsp;discusses the project, which is intended as the first in a series of such narratives to explore how panoramas can establish the visual counterpart to text narratives, creating a sense of space and location.</p><p><a href="https://www.youtube.com/watch?v=VfJPQSbQXXM" target="_blank"><strong>Video</strong></a></p><p>&nbsp;</p><p>&nbsp;</p><p></p><p></p><h3><strong>Wearable tech of many designs</strong></h3><p>Georgia Tech continues to advance several research innovations that are helping to shape a wearable computing future rich with applications. Among Georgia Tech’s accepted work at the International Symposium on Wearable Computers&nbsp;in September were&nbsp;<a href="https://www.youtube.com/watch?v=arqrxglMzIw" target="_blank">wearable dance technology</a>&nbsp;that garnered a Design Exhibition Jury Award, and&nbsp;<a href="http://www.news.gatech.edu/2014/06/23/wearable-computing-gloves-can-teach-braille-even-if-you%E2%80%99re-not-paying-attention" target="_blank">vibrating gloves</a>&nbsp;that allow users to learn braille by simply wearing the haptic-enhanced device. The gloves were nominated for a 2014 Smithsonian People’s Design Award.</p><p><strong><a href="http://gvu.gatech.edu/wearable-tech-innovations" target="_blank">Read More</a></strong></p><p>&nbsp;</p><p>&nbsp;</p><p><strong></strong></p><p></p><h3><strong>Research Showcase and Foley Scholars Dinner</strong></h3><p>The biannual GVU Center Research Showcase invited visitors in October&nbsp;to an alternate reality populated with artificial intelligences, devices to communicate with animals, augmented landscapes bending space and time, computer-embedded fashion garments, futuristic screen experiences, auditory technologies, and much more. 
Homecoming week also recognized the 2014-2015 Foley Scholars, whose work exemplifies&nbsp;computing-powered innovations that&nbsp;guide&nbsp;users through a rapidly shifting technology culture.</p><p><a href="http://gvu.gatech.edu/homecoming-2014" target="_blank"><strong>Read More</strong></a></p><p>&nbsp;</p><p>&nbsp;</p><p></p><p></p><h3><strong>Visualizing the world, one data set at a time</strong></h3><p>At VIS 2014 - consisting of&nbsp;IEEE's joint conferences on Visual Analytics Science and Technology, Information Visualization, and Scientific Visualization - Georgia Tech researchers played a leading role in the proceedings, which marked the 25th anniversary of academic research in the field. Professor<strong>&nbsp;John Stasko</strong>, co-chair of the VIS25 committee, says there is a growing ‘democratization’ of data visualization where more people and organizations can now create sophisticated interactive visualizations due to some of the tools and toolkits that the&nbsp;research community has created.&nbsp;Georgia Tech's contributions this year provided both new visualization techniques and case studies of visualization applied to real world problems from areas such as finance, network cybersecurity, pediatric asthma care, and marine biology.&nbsp;</p><p><strong><a href="http://gvu.gatech.edu/visualization-2014" target="_blank">Read More</a></strong></p><p>&nbsp;</p><p>&nbsp;</p><p></p><p></p><h3><strong>Graduate researchers discuss what drives them in their chosen fields</strong></h3><p>Human-Centered Computing doctoral candidates&nbsp;<strong>Alexander Zook</strong>&nbsp;and&nbsp;<strong>Deana Brown</strong>&nbsp;and Music Technology doctoral candidate&nbsp;<strong>Mason Bretan</strong>&nbsp;talk about what makes them passionate about their research and what it involves - the graduate&nbsp;work combines technical depth with a focus on human impact, at scales ranging from the individual to the societal.&nbsp;The researchers took time out at the end of the 
year to share their stories, which show not only insight into their research but also the collaborative nature of the community&nbsp;fostered by GVU founder and Professor<strong>&nbsp;James Foley.&nbsp;</strong></p><p><a href="http://gvu.gatech.edu/research/grants-and-scholarships/james-d-foley-gvu-center-endowment" target="_blank"><strong>Read More</strong></a></p><p>&nbsp;</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1421753912</created>  <gmt_created>2015-01-20 11:38:32</gmt_created>  <changed>1475896674</changed>  <gmt_changed>2016-10-08 03:17:54</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[In 2014, GVU Center researchers broke new ground on how to get the most out of technology interactions.]]></teaser>  <type>news</type>  <sentence><![CDATA[In 2014, GVU Center researchers broke new ground on how to get the most out of technology interactions.]]></sentence>  <summary><![CDATA[<p>In 2014, GVU Center researchers broke new ground on how to get the most out of technology interactions. 
This snapshot of our community of researchers shows a small sample of computing possibilities becoming reality through the collaborative and dynamic environments at Georgia Tech and the GVU Center.&nbsp;</p>]]></summary>  <dateline>2015-01-20T00:00:00-05:00</dateline>  <iso_dateline>2015-01-20T00:00:00-05:00</iso_dateline>  <gmt_dateline>2015-01-20 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Joshua Preston</p><p>678.231.0787</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>      </media>  <hg_media>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="349021">  <title><![CDATA[Ph.D. Candidate’s Barbie Book Remix Ties to Fair Use Research]]></title>  <uid>27592</uid>  <body><![CDATA[<p class="p1">In November, a&nbsp;<a href="http://www.dailydot.com/geek/sexist-barbie-book-stem-remix-engineer-gaming/" target="_blank">widely viewed</a>&nbsp;and&nbsp;<a href="http://www.npr.org/2014/11/22/365968465/after-backlash-computer-engineer-barbie-gets-new-set-of-skills">well-received</a>&nbsp;<a href="https://cfiesler.files.wordpress.com/2014/11/barbieremixed.pdf">digital&nbsp;remix</a>&nbsp;of the children’s book “Barbie: I Can Be a Computer Engineer” was the product of not only Casey Fiesler’s dislike of the original plot, but a practical application of her research into copyright in online communities.</p><p class="p1">Fiesler, a Ph.D. 
candidate in Human-Centered Computing at Georgia Tech, took the narrative - which had little to do with contemporary issues in computing and has since been pulled from bookshelves - and rewrote it, with contributions from HCC student Miranda Parker. The Barbie remix directly applies to her research in "fair use," a part of U.S. copyright law that allows for the use of copyrighted material without permission from the owners in certain instances.</p><p class="p1">"One of the core reasons that fair use exists is for criticism," says Fiesler. "A noncommercial, transformative work that uses copyrighted material in order to critique the original content,&nbsp;particularly in parody,&nbsp;is a textbook example of fair use."</p><p class="p1">According to her recent research, “the law around reuse and remix is particularly confusing, and this kind of creativity is really common: everything from remix videos on YouTube to image memes shared on Facebook.”&nbsp;</p><p class="p1">Conducting a large-scale qualitative analysis of public forum posts, Fiesler,&nbsp;Jessica Feuston, and advisor Amy S. Bruckman&nbsp;found most conversations related to copyright expressed some kind of "problem." The eight websites reviewed for the study, from&nbsp;earlier this year,&nbsp;included top communities for writing, video, music and art. The YouTube data set, which had more than 1 in 10 posts (13%) related to copyright, showed the highest level of discussion on the topic. The overall findings show a range of concerns from users, including avoiding trouble, dealing with accusations of copyright infringement and parsing incomplete or conflicting information.&nbsp;</p><p class="p1">Fiesler’s group saw evidence of a general chilling effect, with some content creators simply not publishing online because of the perceived hassle or changing the website where they chose to publish. 
The study also provides recommendations for online community designers and maintainers, including monitoring user concerns on copyright and rewriting policies on copyright in “plain English.” The&nbsp;<a href="https://cfiesler.files.wordpress.com/2014/10/fiesler_cscw2015.pdf" target="_blank">research study</a>&nbsp;is&nbsp;being presented in March at CSCW 2015.</p><p class="p1">"Unfortunately, fair use can be confusing and scary, especially with so much misinformation floating around," says Fiesler.&nbsp;"My advice would be to learn as much as you can, because the more aware of your legal rights you are, the more confident you'll be."</p><p class="p1">After publishing her Barbie Remix, Fiesler posted on her blog details about&nbsp;<a href="http://caseyfiesler.com/2014/11/24/fair-use-barbie/" target="_blank">misconceptions of fair use</a>. She says that anyone facing trouble for a creative work that they think is fair use should use public resources for help, including organizations such as the Electronic Frontier Foundation or the Organization for Transformative Works.</p>]]></body>  <author>Joshua Preston</author>  <status>1</status>  <created>1416923009</created>  <gmt_created>2014-11-25 13:43:29</gmt_created>  <changed>1475896654</changed>  <gmt_changed>2016-10-08 03:17:34</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A digital remix of the children’s book “Barbie: I Can Be a Computer Engineer” was the product of not only Casey Fiesler’s dislike of the original plot, but a practical application of the Ph.D. candidate's research into copyright in online communities]]></teaser>  <type>news</type>  <sentence><![CDATA[A digital remix of the children’s book “Barbie: I Can Be a Computer Engineer” was the product of not only Casey Fiesler’s dislike of the original plot, but a practical application of the Ph.D. 
candidate's research into copyright in online communities]]></sentence>  <summary><![CDATA[<p>A digital remix&nbsp;of the children’s book “Barbie: I Can Be a Computer Engineer” was the product of not only Casey Fiesler’s dislike of the original plot, but a practical application of the Ph.D. candidate's research into copyright in online communities.</p>]]></summary>  <dateline>2014-11-25T00:00:00-05:00</dateline>  <iso_dateline>2014-11-25T00:00:00-05:00</iso_dateline>  <gmt_dateline>2014-11-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpreston@cc.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jpreston@cc.gatech.edu">Joshua Preston</a></p><p>GVU Center, College of Computing</p><p>678.231.0787</p><p>&nbsp;</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>349031</item>      </media>  <hg_media>          <item>          <nid>349031</nid>          <type>image</type>          <title><![CDATA[Barbie Remix]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[barbieremix1.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/images/barbieremix1_0.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/barbieremix1_0.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/images/barbieremix1_0.jpg?itok=ZhmyruVg]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Barbie Remix]]></image_alt>                    <created>1449245696</created>          <gmt_created>2015-12-04 16:14:56</gmt_created>          <changed>1475895073</changed>          <gmt_changed>2016-10-08 02:51:13</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  
<groups>          <group id="1299"><![CDATA[GVU Center]]></group>      </groups>  <categories>      </categories>  <news_terms>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71871"><![CDATA[Campus and Community]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node></nodes>