{"649636":{"#nid":"649636","#data":{"type":"news","title":"Associate Professor Elected SIGCHI President","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing joint Associate Professor \u003Cstrong\u003ENeha Kumar\u003C\/strong\u003E was elected president of the \u003Ca href=\u0022https:\/\/sigchi.org\/\u0022\u003ESpecial Interest Group on Computer-Human Interaction\u003C\/a\u003E (SIGCHI) for 2021-22. She will serve a three-year term for the group, which is the premier international society for professionals and academics interested in human-computer interaction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESIGCHI sponsors numerous conferences, publications, websites, and other services that advance HCI through workshops and outreach. \u003Ca href=\u0022https:\/\/medium.com\/sigchi\/thank-you-sigchi-dae601d883bb\u0022\u003EIn a blog post for SIGCHI\u003C\/a\u003E, Kumar said that she and the other incoming executive committee members aim to continue the long history of advancing the group\u0026rsquo;s key missions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We hope to continue to expand the excellent work that our many colleagues in this (executive committee) have done, with their commitment (among other things) to accessibility, equity and inclusion, to the safety of our community, global community building, and a #SIGCHI4ALL,\u0026rdquo; she wrote. \u0026ldquo;Together the six of us represent a wide range of perspectives; our hope is that this representation will ensure that we remain answerable to our entire global membership as we work towards supporting and fostering participation and growth locally and globally.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKumar\u0026rsquo;s research at Georgia Tech lies at the intersection of human-centered computing and global development. She has produced research that improves technology design for historically underserved communities. 
Her \u003Ca href=\u0022http:\/\/www.tandem.gatech.edu\/\u0022\u003ETanDEm Lab\u003C\/a\u003E \u0026ndash; short for Technology and Design towards \u0026lsquo;Empowerment\u0026rsquo; \u0026ndash; has focused on health and wellbeing on the margins, centering topics such as gender, stigma, and knowledge production.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKumar has received other honors, such as the National Science Foundation\u0026rsquo;s CAREER Award, and also chairs the \u003Ca href=\u0022https:\/\/www.acm.org\/fca#:~:text=The%20ACM%20Future%20of%20Computing,next%20generation%20of%20computing%20professionals.\u0026amp;text=The%20ACM%20FCA%20aspires%20to,of%20computing%20into%20the%20future.\u0022\u003EAssociation for Computing Machinery\u0026rsquo;s Future of Computing Academy\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech Ph.D. graduate \u003Cstrong\u003ETamara Clegg\u003C\/strong\u003E is also on the SIGCHI executive committee, serving as the vice president of membership and communication.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Neha Kumar will serve a three-year term for the group, which is the premier international society for professionals and academics interested in human-computer interaction."}],"uid":"33939","created_gmt":"2021-08-12 16:50:31","changed_gmt":"2021-08-12 16:50:31","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-08-12T00:00:00-04:00","iso_date":"2021-08-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"507851":{"id":"507851","type":"image","title":"Neha Kumar","body":null,"created":"1457114400","gmt_created":"2016-03-04 18:00:00","changed":"1475895270","gmt_changed":"2016-10-08 02:54:30","alt":"Neha 
Kumar","file":{"fid":"204902","name":"neha.jpeg","image_path":"\/sites\/default\/files\/images\/neha_0.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/neha_0.jpeg","mime":"image\/jpeg","size":52721,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/neha_0.jpeg?itok=ay7TDLWk"}}},"media_ids":["507851"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"649635":{"#nid":"649635","#data":{"type":"news","title":"Assistant Professor Named 2021 Microsoft Research Faculty Fellow","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Assistant Professor \u003Cstrong\u003EDiyi Yang\u003C\/strong\u003E was named one of five \u003Ca href=\u0022https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/faculty-fellowship\/#!fellows\u0022\u003E2021 Microsoft Research Faculty Fellows\u003C\/a\u003E earlier this summer. 
The two-year fellowship recognizes innovative and promising early-career professors in the Americas who are exploring breakthrough research in computer science or a related field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYang was recognized for her work leading the \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dyang888\/group.html\u0022\u003ESocial and Language Technologies Lab\u003C\/a\u003E, concentrating on research across fields of natural language processing, machine learning, and computational social science. Yang\u0026rsquo;s research works to understand social aspects of language and build responsible NLP systems with social intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We live in an era where many aspects of our daily activities are recorded as textual data,\u0026rdquo; Yang said in her proposal to Microsoft Research. \u0026ldquo;Over the last few decades, NLP has dramatically improved performance and produced industrial applications like personal assistants. Despite being sufficient to enable these applications, current NLP systems largely ignore the social part of language.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis ignorance limits the functionality of the programs, Yang said. This research examines what is said, who says it, in what context and for what goals in hopes of developing systems to facilitate human-human and human-machine communication. So far, her team has produced projects on mitigating bias in text, detecting mental health issues, improving support in online support groups, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to Microsoft Research\u0026rsquo;s website, Yang is the first Georgia Tech faculty member to be named a Microsoft Research Faculty Fellow since 2011 and only the third overall. 
Yang has earned a number of other awards and recognitions, such as Forbes 30 Under 30 in Science and IEEE AI 10 to Watch.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The two-year fellowship recognizes innovative and promising early-career professors in the Americas who are exploring breakthrough research in computer science or a related field."}],"uid":"33939","created_gmt":"2021-08-12 16:44:15","changed_gmt":"2021-08-12 16:44:15","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-08-12T00:00:00-04:00","iso_date":"2021-08-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"630588":{"id":"630588","type":"image","title":"Diyi Yang 2020","body":null,"created":"1578338255","gmt_created":"2020-01-06 19:17:35","changed":"1578338255","gmt_changed":"2020-01-06 19:17:35","alt":"","file":{"fid":"240080","name":"Diyi_Yang.jpg","image_path":"\/sites\/default\/files\/images\/Diyi_Yang.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Diyi_Yang.jpg","mime":"image\/jpeg","size":194720,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Diyi_Yang.jpg?itok=T-Kv1Jqp"}}},"media_ids":["630588"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"649137":{"#nid":"649137","#data":{"type":"news","title":"Georgia Tech Will Help Bring Critical Advancements to Online Learning as Part of Multimillion Dollar NSF Grant","body":[{"value":"\u003Cp\u003EGeorgia Tech is a major partner in a new \u003Ca href=\u0022https:\/\/www.nsf.gov\/\u0022\u003ENational Science Foundation\u003C\/a\u003E (NSF) \u003Ca href=\u0022https:\/\/www.nsf.gov\/funding\/pgm_summ.jsp?pims_id=505686\u0022\u003EArtificial Intelligence Research Institute\u003C\/a\u003E focused on adult learning in online education, it was announced today. Led by the Georgia Research Alliance, the National AI Institute for Adult Learning in Online Education (ALOE) is one of 11 new NSF institutes created as part of an investment totaling $220 million.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ALOE Institute will develop new AI theories and techniques for enhancing the quality of online education for lifelong learning and workforce development. According to some projections, about 100 million American workers will need to be reskilled or upskilled over the next decade. 
With the increase of AI and automation, said Co-Principal Investigator and Georgia Tech lead Professor \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E, many jobs will be redefined.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There will be some loss of jobs, but mostly we will see individuals needing to learn a new skill to get a new job or to advance their career,\u0026rdquo; said Goel, a professor of computer science and human-centered computing in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) and the chief scientist with the \u003Ca href=\u0022https:\/\/c21u.gatech.edu\/\u0022\u003ECenter for 21\u003Csup\u003Est\u003C\/sup\u003E Century Universities\u003C\/a\u003E (C21U). \u0026ldquo;So, how do you help 100 million workers reskill or upskill in 10 years? Because AI is in part responsible for this need, it is our belief it should also be responsible for finding a solution.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat is the goal of this project, which will be led by principal investigator \u003Cstrong\u003EMyk Garn\u003C\/strong\u003E, assistant vice chancellor for New Models of Learning at the University System of Georgia and senior advisor to the \u003Ca href=\u0022https:\/\/gra.org\/\u0022\u003EGeorgia Research Alliance\u003C\/a\u003E (GRA).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Online education for adults has enormous implications for tomorrow\u0026rsquo;s workforce,\u0026rdquo; Garn said. \u0026ldquo;Yet, serious questions remain about the quality of online learning and how best to teach adults online. Artificial intelligence offers a powerful technology for dramatically improving the quality of online learning and adult education.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo do that successfully, the education must be personalized and scaled to unprecedented levels. 
Educating 100 million people in online environments will, of course, require far more time and energy than in-person educators can offer their students. That is where AI comes into play.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers will build new AI techniques that can adequately and efficiently train \u003Cem\u003Eother\u003C\/em\u003E AI agents to interact with humans in a classroom setting, similar to the virtual teaching assistant Jill Watson that Goel has used in his online computer science classes for the past five years. This will help satisfy the scalability requirement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That\u0026rsquo;s the fundamental advancement in AI,\u0026rdquo; Goel said. \u0026ldquo;A human can train an AI agent in just a few hours how to teach other AI agents on how to interact with humans on various subjects.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo satisfy the need for personalized AI, researchers will train machines to have a mutual theory of mind with their human counterparts. In other words, there will be a greater understanding by both machine and human of the others\u0026rsquo; needs, knowledge, and expectations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our vision is to develop AI agents that achieve a mutual understanding of learning expectations, outcomes, and methods between students and teachers,\u0026rdquo; said Alex Endert, an assistant professor in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/cc.gatech.edu\u0022\u003ECollege of Computing\u003C\/a\u003E who will help the team analyze and understand data from the project. \u0026ldquo;Along with my students, I look forward to developing visual analytic interfaces that serve that purpose to foster trust and interpretability of AI for this domain.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUltimately, the hope is that education becomes more available, affordable, achievable, and, thereby, equitable. 
Such an expansive project, understandably, requires many kinds of expertise from many people. In addition to Endert and Goel, who will be executive director of the ALOE Institute, a host of faculty at Georgia Tech will participate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESenior Georgia Tech members of the ALOE team include \u003Cstrong\u003EStephen Harmon\u003C\/strong\u003E (Industrial Design and C21U), \u003Cstrong\u003EMichael Hoffmann\u003C\/strong\u003E (Public Policy), \u003Cstrong\u003EDavid Joyner\u003C\/strong\u003E (Online Master of Science in Computer Science), \u003Cstrong\u003ERuth Kanfer\u003C\/strong\u003E (Psychology), \u003Cstrong\u003EBrian Magerko\u003C\/strong\u003E (Language, Media, and Culture), \u003Cstrong\u003EKeith McGreggor\u003C\/strong\u003E (IC and VentureLab), \u003Cstrong\u003EChaohua Ou\u003C\/strong\u003E (Center for Teaching and Learning), and \u003Cstrong\u003ESpencer Rugaber\u003C\/strong\u003E (Computer Science).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther partners in the ALOE Institute include Arizona State University, Drexel University, Georgia State University, Harvard University, the Technical College System of Georgia, the University of North Carolina at Greensboro, IMS Global, Boeing, IBM, and Wiley.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/research.gatech.edu\/georgia-tech-joins-us-national-science-foundation-advance-ai-research-and-education\u0022\u003EGeorgia Tech is a key partner in two additional institutes\u003C\/a\u003E in partnership with the U.S. Department of Agriculture, the National Institute of Food and Agriculture, the U.S. Department of Homeland Security Science \u0026amp; Technology Directorate, and the U.S. Department of Transportation Federal Highway Administration. 
Georgia Tech will lead the AI Institute for Advances in Optimization (AI4Opt) and the AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), the latter of which is led by College of Computing Associate Professor Sonia Chernova to support aging-related issues.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Led by the Georgia Research Alliance, the National AI Institute for Adult Learning in Online Education (ALOE) is one of 11 new NSF institutes created as part of an investment totaling $220 million."}],"uid":"33939","created_gmt":"2021-07-29 15:28:18","changed_gmt":"2021-07-29 15:28:18","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-07-29T00:00:00-04:00","iso_date":"2021-07-29T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"611004":{"id":"611004","type":"image","title":"Online learning stock","body":null,"created":"1536259875","gmt_created":"2018-09-06 18:51:15","changed":"1536259875","gmt_changed":"2018-09-06 18:51:15","alt":"Fingers typing on a laptop keyboard","file":{"fid":"232624","name":"online learning.jpg","image_path":"\/sites\/default\/files\/images\/online%20learning.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/online%20learning.jpg","mime":"image\/jpeg","size":68702,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/online%20learning.jpg?itok=CYZYPb3r"}}},"media_ids":["611004"],"related_links":[{"url":"https:\/\/research.gatech.edu\/georgia-tech-joins-us-national-science-foundation-advance-ai-research-and-education","title":"Georgia Tech Joins the U.S. 
National Science Foundation to Advance AI Research and Education"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"649087":{"#nid":"649087","#data":{"type":"news","title":"New Browser-Based Chart Builder Gives Line Graphs, Scatterplots Their Very Own Audio Track","body":[{"value":"\u003Cp\u003EA new multimodal data visualization tool for the web produces charts with a twist \u0026ndash; these charts also represent information using carefully designed sounds for a richer, more powerful, and accessible way to experience data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EReleased by the Georgia Institute of Technology and open-source web application Highcharts, \u003Ca href=\u0022https:\/\/sonification.highcharts.com\/#\/\u0022\u003EHighcharts Sonification Studio (HSS)\u003C\/a\u003E\u0026nbsp;enables users to enter data into a spreadsheet to create traditional visual charts such as line graphs, scatterplots, and bar charts. 
At the same time, the tool creates non-speech audio tracks based on the data, a process known as sonification.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The goal of this tool is to provide a simple, intuitive, and accessible way for users to import, edit, visualize, and sonify their data, and then export the results to a useful format,\u0026rdquo; said Professor \u003Cstrong\u003EBruce Walker\u003C\/strong\u003E, director of \u003Ca href=\u0022http:\/\/sonify.psych.gatech.edu\/\u0022\u003EGeorgia Tech\u0026rsquo;s Sonification Lab\u003C\/a\u003E. \u0026ldquo;We want users to be able to use the tool without having to download software or write code, and without prior sonification expertise.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe data visualization+sonification approach lets users explore data with visual, auditory, or both modalities. This can lead to novel discoveries in its own right, and can also support users who may have limited ability to see or hear a given display.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Visually impaired readers find sonification and auditory graphs to be very useful for getting an overview of the data, as well as identifying patterns, outliers, and points of interest,\u0026rdquo; said Walker.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EBrandon Biggs\u003C\/strong\u003E, a researcher\u0026nbsp;and entrepreneur who is blind, highlighted the software\u0026rsquo;s ability to allow users such as himself to create a graph that he can trust will be visually appealing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I love how accessible all the components are with a screen-reader and how easy it is to create a sonification,\u0026rdquo; Biggs said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd for all users\u0026mdash;even those who can see\u0026mdash;sound can communicate information without requiring visual attention. 
For instance, instead of looking at a weather forecast or a chart of a stock price on a screen, imagine being able to hear the ups and downs played like a melody, with additional sounds highlighting points of interest in the data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHSS is the culmination of a multi-year collaboration between Highsoft\u0026mdash;the makers of Highcharts\u0026mdash;and the Georgia Tech Sonification Lab. The goal of the collaboration is to develop an extensible, accessible, online spreadsheet and multimodal graphing platform for the auditory display, assistive technology, and STEM education community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWalker said that HSS is a systematic re-implementation of his lab\u0026rsquo;s Sonification Sandbox to integrate Highsoft\u0026rsquo;s industry-leading web-based Highcharts technology with Georgia Tech\u0026rsquo;s expertise in sonification and interactive auditory displays.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe tool is open-sourced under the MIT License to allow for extensions and forks in development from the community\u0026nbsp;and to ensure the tool is available to all. A Highcharts license is required for commercial use of the tool, but otherwise, usage is completely free.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This system will complement other tools and libraries actively used by the auditory display research community and help bring sonification to an even wider audience, especially in the visualization community and in situations of limited resources,\u0026rdquo; said \u003Cstrong\u003E\u0026Oslash;ystein Moseng\u003C\/strong\u003E, the Highcharts developer leading the implementation of the HSS.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA paper describing the research and development of the open-source tool is part of the 26\u003Csup\u003Eth\u003C\/sup\u003E annual International Conference on Auditory Displays (ICAD.org), which took place June 25-28, 2021. 
The paper \u003Cem\u003EHighcharts Sonification Studio: An Online, Open-Source, Extensible, And Accessible Data Sonification Tool\u003C\/em\u003E is co-authored by Stanley Cantrell, Walker, and Moseng.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Highcharts Sonification Studio web app, source code, and developer community are available at \u003Ca href=\u0022https:\/\/sonification.highcharts.com\u0022\u003Ehttps:\/\/sonification.highcharts.com\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech researchers have created a data visualization plus sonification approach that lets users explore data with visual, auditory, or both modalities."}],"uid":"32045","created_gmt":"2021-07-27 20:44:50","changed_gmt":"2021-07-28 15:20:25","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-07-27T00:00:00-04:00","iso_date":"2021-07-27T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"649088":{"id":"649088","type":"image","title":"Data vis sonification tool","body":null,"created":"1627422780","gmt_created":"2021-07-27 21:53:00","changed":"1627498800","gmt_changed":"2021-07-28 19:00:00","alt":"A user working with accessible browser-based Highcharts Sonification Studio software.","file":{"fid":"246435","name":"sonify-2.jpg","image_path":"\/sites\/default\/files\/images\/sonify-2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/sonify-2.jpg","mime":"image\/jpeg","size":387592,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/sonify-2.jpg?itok=sxM8QM8z"}}},"media_ids":["649088"],"related_links":[{"url":"https:\/\/youtu.be\/VdKcyGXLyvg","title":"Hearing the Data"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of 
Interactive Computing"}],"categories":[{"id":"135","name":"Research"}],"keywords":[{"id":"170772","name":"Sonification"},{"id":"438","name":"data"},{"id":"7257","name":"visualization"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJosh Preston, Research Communications Mgr.\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:Jpreston@cc.gatech.edu?subject=Sonification\u0022\u003EJpreston@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["Jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"648905":{"#nid":"648905","#data":{"type":"news","title":"Georgia Tech Top Contributor to Research at International Conference on Machine Learning","body":[{"value":"\u003Cp\u003EGeorgia Tech researchers in the College of Engineering and College of Computing are presenting their work at the International Conference on Machine Learning (ICML), which runs through Saturday.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EICML is the leading international academic conference in machine learning. Along with NeurIPS and ICLR, it is one of the three primary high-impact conferences in machine learning and artificial intelligence research. It is supported by the International Machine Learning Society (IMLS).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExplore Georgia Tech people, research abstracts, and when authors will present (Tues-Thurs) in an interactive data graphic of \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/GeorgiaTechatICML2021\/Dashboard1?:language=en-US\u0026amp;:display_count=n\u0026amp;:origin=viz_share_link\u0022\u003E\u003Cstrong\u003EGeorgia Tech at ICML 2021\u003C\/strong\u003E\u003C\/a\u003E. 
Also explore the whole program in a second data graphic: \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/ICML2021\/Dashboard12?:showVizHome=no\u0022\u003E\u003Cstrong\u003EWho\u0026rsquo;s Who at ICML 2021\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s work is represented in 2% of the program with 22 papers in a range of topics including (asterisk denotes a single paper):\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EApplications (CV and NLP)*\u003C\/li\u003E\r\n\t\u003Cli\u003EApplications (NLP)*\u003C\/li\u003E\r\n\t\u003Cli\u003EDeep Learning Algorithms*\u003C\/li\u003E\r\n\t\u003Cli\u003EDeep Learning Theory*\u003C\/li\u003E\r\n\t\u003Cli\u003EDeep Reinforcement Learning*\u003C\/li\u003E\r\n\t\u003Cli\u003ELearning Theory \u0026ndash; 2 papers\u003C\/li\u003E\r\n\t\u003Cli\u003EOptimal Transport \u0026ndash; 2 papers\u003C\/li\u003E\r\n\t\u003Cli\u003EOptimization (Convex)*\u003C\/li\u003E\r\n\t\u003Cli\u003EOptimization and Algorithms \u0026ndash; 2 papers\u003C\/li\u003E\r\n\t\u003Cli\u003EPrivacy*\u003C\/li\u003E\r\n\t\u003Cli\u003EReinforcement Learning \u0026ndash; 2 papers\u003C\/li\u003E\r\n\t\u003Cli\u003EReinforcement Learning and Optimization*\u003C\/li\u003E\r\n\t\u003Cli\u003EReinforcement Learning and Planning*\u003C\/li\u003E\r\n\t\u003Cli\u003EReinforcement Learning Theory*\u003C\/li\u003E\r\n\t\u003Cli\u003ETime Series \u0026ndash; 4 papers\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech researchers in the College of Engineering and College of Computing are presenting their work at the International Conference on Machine Learning (ICML), which runs through Saturday."}],"uid":"33939","created_gmt":"2021-07-20 13:20:02","changed_gmt":"2021-07-21 05:00:40","author":"David 
Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-07-20T00:00:00-04:00","iso_date":"2021-07-20T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"648904":{"id":"648904","type":"image","title":"ICML 2021","body":null,"created":"1626787175","gmt_created":"2021-07-20 13:19:35","changed":"1626787175","gmt_changed":"2021-07-20 13:19:35","alt":"","file":{"fid":"246336","name":"ICML2021.jpeg","image_path":"\/sites\/default\/files\/images\/ICML2021.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ICML2021.jpeg","mime":"image\/jpeg","size":164013,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ICML2021.jpeg?itok=e3cM_-yn"}}},"media_ids":["648904"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJosh Preston\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003Ejpreston@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"648864":{"#nid":"648864","#data":{"type":"news","title":"Georgia Tech Faculty Hold Workshop to Improve Integration of Ethics into Courses","body":[{"value":"\u003Cp\u003EAs computer science becomes more ingrained into various areas of study and, indeed, our daily lives, an eye on the implications of innovation is needed, experts at Georgia Tech say.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo help students begin thinking about ethics with 
regards to research, faculty at Georgia Tech \u0026ndash; in conjunction with Mozilla \u0026ndash; held the first workshop on integrating ethics and responsible computing into courses this summer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe workshop was a collaboration between faculty researchers at Georgia Tech in both the Ethics, Technology, and Human Interaction Center (ETHICx) and Computing and Society, as well as Mozilla. The workshop received a strong response, which organizers say indicates a growing desire for ethics at the center of computer science courses.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMembers of the College of Computing\u0026rsquo;s Division of Computing Instruction, the Schools of Interactive Computing, Computational Science and Engineering, Computer Science, and Electrical and Computer Engineering, along with attendees from Georgia State all participated in the online workshop.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s really gratifying to have broad representation because it demonstrates the desire for people from so many different areas to think more deeply about the role of ethics in our education,\u0026rdquo; said \u003Cstrong\u003EEllen Zegura\u003C\/strong\u003E, professor in the School of Computer Science and Fleming Chair in Telecommunications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe goal of the workshop was to help instructors consider ways in which to implement ethics as a central piece in courses not just later in a student\u0026rsquo;s study, but from the very beginning. There\u0026rsquo;s an issue of urgency, Zegura said, that needed to be considered.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Computing has reached a point where it is being used for critical decision making that really affects people\u0026rsquo;s lives,\u0026rdquo; she said. \u0026ldquo;The need to use computing responsibly has moved up incredibly. 
And if we don\u0026rsquo;t talk about ethics early in the curriculum, we\u0026rsquo;re sending a message that it\u0026rsquo;s not important. If you only hear about it in one course and it\u0026rsquo;s later in your career, then what does that say about the importance? Students see that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile official plans aren\u0026rsquo;t currently in place to continue the program, Zegura said the idea is to continue this as a series of activities responsive to the needs of those who want to do a better job of embedding ethics into their computer science curriculum.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech graduate \u003Cstrong\u003EKathy Pham (CS \u0026rsquo;07, MS CS \u0026rsquo;09)\u003C\/strong\u003E, now at Mozilla, has been instrumental in engaging the computer science community across 15-20 universities in focusing on ethics, Zegura said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/playlist?list=PLF0CYxpffvKx5W-y_xJ9xhrGapmeF70Og\u0022\u003EPortions of the workshop can be viewed on YouTube here.\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"To help students begin thinking about ethics with regards to research, faculty at Georgia Tech \u2013 in conjunction with Mozilla \u2013 held the first workshop on integrating ethics and responsible computing into courses this summer."}],"uid":"33939","created_gmt":"2021-07-19 13:16:20","changed_gmt":"2021-07-19 13:16:20","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-07-19T00:00:00-04:00","iso_date":"2021-07-19T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"644759":{"id":"644759","type":"image","title":"Ethics stock image","body":null,"created":"1614365518","gmt_created":"2021-02-26 
18:51:58","changed":"1614365518","gmt_changed":"2021-02-26 18:51:58","alt":"","file":{"fid":"244800","name":"AdobeStock_117212757.jpeg","image_path":"\/sites\/default\/files\/images\/AdobeStock_117212757.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/AdobeStock_117212757.jpeg","mime":"image\/jpeg","size":725547,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/AdobeStock_117212757.jpeg?itok=3tPD5rC9"}}},"media_ids":["644759"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"645832":{"#nid":"645832","#data":{"type":"news","title":"Assistant Professor Earns 2020 Salesforce AI Research Grant","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Assistant Professor \u003Cstrong\u003EDiyi Yang\u003C\/strong\u003E was named a \u003Ca href=\u0022https:\/\/blog.einstein.ai\/celebrating-the-winners-of-the-third-annual-salesforce-ai-research-grant\/\u0022\u003ESalesforce AI Research Grant Winner for 2020\u003C\/a\u003E. 
One of seven winners of the award, she will receive a $50,000 grant to advance her work. It is the third year the grant has been provided by Salesforce.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYang\u0026rsquo;s research, which is being led by her Ph.D. student \u003Cstrong\u003EJiaao Chen\u003C\/strong\u003E, aims to alleviate dependence of supervised models on labeled data via data augmentation approaches. Supervised learning is the machine learning task of learning a function that maps an input to an output based on example pairs, inferring the function from training data that has been tagged with identifying properties or characteristics (labeled data).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe hope is that they may improve upon the ability to transfer models from one setting to another despite the relative lack of intensive training examples.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the era of deep learning, natural language processing (NLP) has achieved extremely good performances in most data-intensive settings,\u0026rdquo; Yang said. \u0026ldquo;However, when there are only one or a few training examples, supervised deep learning models often fail. 
This strong dependence on labeled data largely prevents neural network models from being applied to new settings or real-world situations.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYang\u0026rsquo;s group has published a couple of papers in this field already, and she said the Salesforce grant will further support efforts to extend it to broader contexts, especially when NLP tasks involve complicated outputs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These examples might include performing named entity recognition that finds the important information in a text, or semantic parsing that converts a natural language sentence into a structured command,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYou can read previous papers on the subject at the links below:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dyang888\/docs\/mixtext_acl_2020.pdf\u0022\u003E\u003Cem\u003EMixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification (Jiaao Chen, Zichao Yang, Diyi Yang)\u003C\/em\u003E\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2010.01677.pdf\u0022\u003E\u003Cem\u003ELocal Additivity Based Data Augmentation for Semi-supervised NER (Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, Diyi Yang)\u003C\/em\u003E\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EYang was chosen from a group of over 180 quality proposals from more than 30 countries.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Yang\u2019s research, which is being led by her Ph.D. 
student Jiaao Chen, aims to alleviate dependence of supervised models on labeled data via data augmentation approaches."}],"uid":"33939","created_gmt":"2021-03-29 14:42:23","changed_gmt":"2021-03-29 14:42:23","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-03-29T00:00:00-04:00","iso_date":"2021-03-29T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"630588":{"id":"630588","type":"image","title":"Diyi Yang 2020","body":null,"created":"1578338255","gmt_created":"2020-01-06 19:17:35","changed":"1578338255","gmt_changed":"2020-01-06 19:17:35","alt":"","file":{"fid":"240080","name":"Diyi_Yang.jpg","image_path":"\/sites\/default\/files\/images\/Diyi_Yang.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Diyi_Yang.jpg","mime":"image\/jpeg","size":194720,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Diyi_Yang.jpg?itok=T-Kv1Jqp"}}},"media_ids":["630588"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"644380":{"#nid":"644380","#data":{"type":"news","title":"Ph.D. 
Student Earns 2021 Focus Fellowship from Georgia Tech\u0027s Office of Minority Educational Development","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing (IC) Ph.D. student \u003Cstrong\u003EKantwon Rogers\u003C\/strong\u003E was awarded a 2021 Focus Fellowship by Georgia Tech\u0026rsquo;s \u003Ca href=\u0022https:\/\/omed.gatech.edu\/\u0022\u003EOffice of Minority Educational Development\u003C\/a\u003E (OMED).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award recognizes participants in the \u003Ca href=\u0022https:\/\/focus.gatech.edu\/\u0022\u003EFocus Program\u003C\/a\u003E who have demonstrated academic excellence and community leadership and been granted admittance to a graduate program. The Focus Program aims to introduce minority students to graduate school in hopes of increasing the number who pursue higher degrees.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERogers attended the Focus Program five years ago as an undergraduate student at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It helped me learn about grad school and set me up for success,\u0026rdquo; Rogers said of the program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award, which carries a prize of up to $2,500 per student based on funds available and number of awardees, is not based on specific research but recognizes overall accomplishments. In an application essay, Rogers shared how OMED was pivotal to his success at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs an undergraduate, he participated in the \u003Ca href=\u0022https:\/\/omed.gatech.edu\/programs\/challenge\u0022\u003EChallenge Program\u003C\/a\u003E, a five-week academic residential program for incoming first-year students. 
Later, he became a counselor in the same program, an OMED tutor, a Focus participant, a Focus panelist, and last summer a computer science (CS) instructor in the Challenge program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was really spooky because I was teaching the new Challenge students in the exact same room that I sat in when I was learning CS for the first time in Challenge a decade ago,\u0026rdquo; Rogers said. \u0026ldquo;Truly full circle. OMED has truly been a foundation for me here at Georgia Tech, and I am eternally grateful.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERogers\u0026rsquo; research focuses on human-robot interaction, investigating the effects that intelligent agent verbal deception has on human interaction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Animals deceive. Humans deceive. Should robots and AI deceive?\u0026rdquo; Rogers poses in his research tagline.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAdditionally, the work aims to provide AI systems the ability to autonomously produce contextually meaningful and successfully deceptive utterances while determining when it is appropriate to verbally deceive humans.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe is advised by IC Chair \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The award recognizes participants in the Focus Program who have demonstrated academic excellence, community leadership, and been granted admittance to a graduate program."}],"uid":"33939","created_gmt":"2021-02-17 16:57:08","changed_gmt":"2021-02-17 17:08:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-02-17T00:00:00-05:00","iso_date":"2021-02-17T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"585962":{"id":"585962","type":"image","title":"Kantwon Rogers 
2","body":null,"created":"1484253211","gmt_created":"2017-01-12 20:33:31","changed":"1484253211","gmt_changed":"2017-01-12 20:33:31","alt":"","file":{"fid":"223340","name":"_MG_4285.jpg","image_path":"\/sites\/default\/files\/images\/_MG_4285.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/_MG_4285.jpg","mime":"image\/jpeg","size":173174,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/_MG_4285.jpg?itok=8se09y1V"}}},"media_ids":["585962"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"643612":{"#nid":"643612","#data":{"type":"news","title":"Georgia Tech Research Highlights Premier Artificial Intelligence Conference","body":[{"value":"\u003Cp\u003EGeorgia Tech faculty and student researchers will figure prominently into the proceedings of the \u003Ca href=\u0022https:\/\/aaai.org\/Conferences\/AAAI-21\/\u0022\u003E35\u003Csup\u003Eth\u003C\/sup\u003E AAAI Conference on Artificial Intelligence\u003C\/a\u003E, being held virtually from Feb. 
2-9.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETwenty-three members of the Georgia Tech community contributed to 11 papers that will be presented at the conference, while two longtime contributors will join the ranks of the prestigious AAAI Fellows program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Chair \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E and Professor \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E join \u003Ca href=\u0022http:\/\/cc.gatech.edu\/\u0022\u003ECollege of Computing\u003C\/a\u003E Dean \u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E (elected in 2019) and Regents\u0026rsquo; Professor Emerita \u003Cstrong\u003EJanet Kolodner\u003C\/strong\u003E (elected in 1992) as 2021 inductees to the fellowship, giving the Institute four members. The program recognizes individuals who have made significant, sustained contributions to the field of artificial intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E[\u003Cstrong\u003ERelated news:\u003C\/strong\u003E \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/643355\/ic-professors-howard-goel-named-2021-aaai-fellows\u0022\u003EIC Professors Howard, Goel Named 2021 AAAI Fellows\u003C\/a\u003E]\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENotable research among the 11 papers accepted to AAAI 2021 includes work from a multi-institution team working to understand and improve forecasting models of influenza-like illnesses such as Covid-19. 
Effective forecasting is even more challenging amidst the current pandemic, when counts are affected by various factors such as symptomatic similarities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe approach in this paper steers historical forecasting models to new scenarios where the flu and Covid-19 co-exist, demonstrating success in adaptation without sacrificing overall performance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s \u003Cstrong\u003EAlexander Rodr\u0026iacute;guez\u003C\/strong\u003E and \u003Cstrong\u003EB. Aditya Prakash\u003C\/strong\u003E are co-authors on the paper, along with \u003Cstrong\u003ENikhil Muralidhar\u003C\/strong\u003E, \u003Cstrong\u003EAnika Tabassum\u003C\/strong\u003E, and \u003Cstrong\u003ENaren Ramakrishnan\u003C\/strong\u003E of Virginia Tech, and \u003Cstrong\u003EBijaya Adhikari\u003C\/strong\u003E of the University of Iowa.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E[\u003Cstrong\u003ERelated news:\u003C\/strong\u003E \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/642638\/research-team-wins-two-covid-19-challenges-one-week\u0022\u003EResearch Team Wins Two Covid-19 Challenges in One Week\u003C\/a\u003E]\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExplore Georgia Tech\u0026rsquo;s presence in this visualization and view a list of papers below.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/AAAI2021-GeorgiaTechAIresearch\/Dashboard1?:language=en\u0026amp;:display_count=y\u0026amp;:origin=viz_share_link:showVizHome=no\u0022\u003EINTERACTIVE VISUALIZATION: Georgia Tech @ AAAI 2021\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.medrxiv.org\/content\/10.1101\/2020.09.28.20203109v2\u0022\u003EDeepCOVID: An Operational Deep Learning-driven Framework for Explainable Real-time COVID-19 Forecasting\u003C\/a\u003E (Alexander Rodr\u0026iacute;guez, Anika Tabassum, Jiaming Cui, Jiajia Xie, Javen Ho, Pulak Agarwal, Bijaya Adhikari, B. 
Aditya Prakash)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.medrxiv.org\/content\/10.1101\/2020.09.28.20203109v2\u0022\u003ESemantic MapNet: Building Allocentric Semantic Maps and Representations from Egocentric Views\u003C\/a\u003E (Vincent Cartillier, Zhile Ren, Neha Jain, Stefan Lee, Irfan Essa, Dhruv Batra)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2009.11407.pdf\u0022\u003ESteering a Historical Disease Forecasting Model Under a Pandemic: Case of Flu and COVID-19\u003C\/a\u003E (Alexander Rodr\u0026iacute;guez, Nikhil Muralidhar, Bijaya Adhikari, Anika Tabassum, Naren Ramakrishnan, B. Aditya Prakash)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2009.11407.pdf\u0022\u003EBias and Variance of Post-processing in Differential Privacy\u003C\/a\u003E (Keyu Zhu, Pascal Van Hentenryck, Ferdinando Fioretto)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EBranch and Price for Bus Driver Scheduling with Complex Break Constraints (Lucas Kletzander, Nysret Musliu, Pascal Van Hentenryck)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EDetecting and Adapting to Novelty in Games (Xiangyu Peng, Jonathan Balloch, Mark Riedl)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2009.12562.pdf\u0022\u003EDifferentially Private and Fair Deep Learning: A Lagrangian Dual Approach\u003C\/a\u003E (Cuong Tran, Ferdinando Fioretto, Pascal Van Hentenryck)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2010.00685.pdf\u0022\u003EHow to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and Act in Fantasy Worlds\u003C\/a\u003E\u0026nbsp;(Prithviraj Ammanabrolu, Jack Urbanek, Margaret Li, Arthur Szlam, Tim Rocktaschel, Jason 
Weston)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2009.00829.pdf\u0022\u003EAutomated Storytelling via Causal, Commonsense Plot Ordering\u003C\/a\u003E\u0026nbsp;(Prithviraj Ammanabrolu, Wesley Cheung, William Broniec, Mark Riedl)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1902.06007.pdf\u0022\u003EEncoding Human Domain Knowledge to Warm Start Reinforcement Learning\u003C\/a\u003E\u0026nbsp;(Andrew Silva, Matthew Gombolay)\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2101.06351\u0022\u003EWeakly-Supervised Hierarchical Models for Predicting Persuasive Strategies in Good-faith Textual Requests\u003C\/a\u003E (Jiaao Chen, Diyi Yang)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Twenty-three members of the Georgia Tech community contributed to 11 papers that will be presented virtually at AAAI 2021, while two longtime contributors will join the ranks of the prestigious AAAI Fellows program."}],"uid":"33939","created_gmt":"2021-01-29 13:24:52","changed_gmt":"2021-02-01 15:48:30","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-01-29T00:00:00-05:00","iso_date":"2021-01-29T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"643611":{"id":"643611","type":"image","title":"Artificial Intelligence","body":null,"created":"1611926616","gmt_created":"2021-01-29 13:23:36","changed":"1611926616","gmt_changed":"2021-01-29 13:23:36","alt":"Artificial 
Intelligence","file":{"fid":"244352","name":"artificial-intelligence-4469138_1280.jpg","image_path":"\/sites\/default\/files\/images\/artificial-intelligence-4469138_1280.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/artificial-intelligence-4469138_1280.jpg","mime":"image\/jpeg","size":212458,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/artificial-intelligence-4469138_1280.jpg?itok=6bKOxBNr"}},"643694":{"id":"643694","type":"image","title":"AAAI 2021 Visualization","body":null,"created":"1612194422","gmt_created":"2021-02-01 15:47:02","changed":"1612194422","gmt_changed":"2021-02-01 15:47:02","alt":"Georgia Tech at AAAI 2021","file":{"fid":"244377","name":"aaai_viz.jpg","image_path":"\/sites\/default\/files\/images\/aaai_viz.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/aaai_viz.jpg","mime":"image\/jpeg","size":409660,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/aaai_viz.jpg?itok=2w3bfp7_"}}},"media_ids":["643611","643694"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"643355":{"#nid":"643355","#data":{"type":"news","title":"IC Professors Howard, Goel Named 2021 AAAI Fellows","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Chair \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E and Professor \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E were both named \u003Ca href=\u0022https:\/\/www.aaai.org\/Awards\/fellows.php\u0022\u003E2021 Fellows by the Association for the Advancement of Artificial Intelligence\u003C\/a\u003E (AAAI).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe AAAI Fellows program recognizes individuals who have made significant, sustained contributions \u0026ndash; usually over at least a 10-year period \u0026ndash; to the field of artificial intelligence (AI).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGoel\u0026rsquo;s research, which spans about 35 years, has connected fields of AI, cognitive science, and human cognition. Increasingly, it has merged the fields of AI and education, culminating in his lab\u0026rsquo;s groundbreaking work on \u003Ca href=\u0022https:\/\/emprize.gatech.edu\/\u0022\u003EJill Watson\u003C\/a\u003E, a virtual teaching assistant that can answer student questions in discussion forums for online classes. This trailblazing work has been recognized by numerous media outlets across the globe and has enormous long-term implications for the future of education.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is an exciting time for AI research into cognitive systems,\u0026rdquo; Goel said. \u0026ldquo;In one direction, my research uses the needs of human learning to ground and inspire novel AI techniques and tools. 
In the other, it uses AI theories and methods to provide new insights into human cognition and behavior.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team responsible for the advancement of Jill Watson and additional AI techniques for education, called emPrize, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/631981\/team-makes-semifinals-global-ai-competition\u0022\u003Eadvanced to the semifinals of the international XPrize AI competition in 2020\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHoward, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/641685\/renowned-roboticist-departing-georgia-tech-new-position\u0022\u003Ewho was recently named the next Dean of Engineering at The Ohio State University\u003C\/a\u003E, has performed similarly impactful research over her time in the field. As the director of the Human-Automation Systems Lab (HumAnS) at Georgia Tech, she has led research in conceptualizing humanized intelligence, the process of embedding human cognitive capability into the control path of autonomous systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESpecifically, the lab studies how human-inspired techniques, such as soft computing methodologies, sensing, and learning can be used to enhance the autonomous capabilities of intelligent systems. This has impact in both virtual AI and robotics, and has led to enterprises like \u003Ca href=\u0022http:\/\/zyrobotics.com\/\u0022\u003EZyrobotics\u003C\/a\u003E, the company Howard co-founded that produces mobile therapy and educational products for children with differing needs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAdditionally, she has been a spokesperson for the importance of ethical research in the field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;re at such a critical moment in the development of artificial intelligence,\u0026rdquo; Howard said. \u0026ldquo;There is incredible possibility, but equally daunting challenges. 
It\u0026rsquo;s an honor to be recognized for the work we are doing in this field, but it\u0026rsquo;s far from over. My hope is that I can inspire future researchers to pursue impactful and ethical advancements in the field.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEight others aside from Goel and Howard were also selected to the fellowship program for 2021 and will be recognized at the \u003Ca href=\u0022https:\/\/aaai.org\/Conferences\/AAAI-21\/\u0022\u003E2021 AAAI conference\u003C\/a\u003E, being held virtually Feb. 2-9.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The AAAI Fellows program recognizes individuals who have made significant, sustained contributions to the field of artificial intelligence (AI)."}],"uid":"33939","created_gmt":"2021-01-22 18:41:43","changed_gmt":"2021-01-22 18:41:43","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-01-22T00:00:00-05:00","iso_date":"2021-01-22T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"643352":{"id":"643352","type":"image","title":"Ashok Goel and Ayanna Howard","body":null,"created":"1611340547","gmt_created":"2021-01-22 18:35:47","changed":"1611340547","gmt_changed":"2021-01-22 18:35:47","alt":"Ashok Goel and Ayanna Howard","file":{"fid":"244266","name":"Ashok Goel and Ayanna Howard.png","image_path":"\/sites\/default\/files\/images\/Ashok%20Goel%20and%20Ayanna%20Howard.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Ashok%20Goel%20and%20Ayanna%20Howard.png","mime":"image\/png","size":1469621,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Ashok%20Goel%20and%20Ayanna%20Howard.png?itok=qWIWw2Wl"}}},"media_ids":["643352"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU 
Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181920","name":"cc-research; ic-ai-ml; ic-robotics"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"643307":{"#nid":"643307","#data":{"type":"news","title":"IC Associate Professor Wins 2021 ACM-W Rising Star Award","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Associate Professor \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E was named a winner of the \u003Ca href=\u0022https:\/\/women.acm.org\/awards\/rising-star-award\/\u0022\u003E2021 ACM-W Rising Star Award\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award, bestowed by the Association for Computing Machinery, recognizes a woman whose early-career research has had a significant impact on the computing discipline, as measured by factors like society impact, frequent citation of work, or creation of a new research area.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDe Choudhury will receive a framed certificate and a $1,000 stipend for the recognition, which is in its first year of existence and will be given out annually. 
She will be recognized for the award at a research conference to be named later.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I feel deeply honored for this recognition and owe my successes to my wonderful students and collaborators, as well as the intellectual freedom provided by Georgia Tech\u0026rsquo;s College of Computing that has helped trailblaze interdisciplinary research in computing, like mine, for years,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDe Choudhury\u0026rsquo;s work leverages large-scale online social data and advances in machine learning to help answer fundamental questions relating to our social lives. Chief among them are questions within the field of mental health care \u0026ndash; understanding mental health, improving access to care, and more. Her work has been recognized by a number of other awards, including 13 best paper and honorable mention paper awards from the ACM and AAAI, and has been covered by publications such as the New York Times, BBC, and NPR.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn addition to the personal appreciation, De Choudhury stressed the importance of recognizing the work of under-represented researchers in the computing field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;d like to commend the efforts of ACM-W for creating this new opportunity to celebrate the research of a group under-represented in the computing field,\u0026rdquo; she said. \u0026ldquo;There is a long way to go when it comes to computing making significant positive impact on a pervasive societal problem like mental health. Still, this award serves as a valuable encouragement for the next frontier of my research program.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDe Choudhury leads the \u003Ca href=\u0022http:\/\/socweb.cc.gatech.edu\/\u0022\u003ESocial Dynamics and Wellbeing Lab\u003C\/a\u003E. 
Research from the lab, both past and current, can be explored in more detail on its website.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The award recognizes a woman whose early-career research has had a significant impact on the computing discipline."}],"uid":"33939","created_gmt":"2021-01-21 19:57:50","changed_gmt":"2021-01-21 19:57:50","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-01-21T00:00:00-05:00","iso_date":"2021-01-21T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"587685":{"id":"587685","type":"image","title":"Munmun De Choudhury","body":null,"created":"1487686001","gmt_created":"2017-02-21 14:06:41","changed":"1487783642","gmt_changed":"2017-02-22 17:14:02","alt":"Georgia Tech Assistant Professor Munmun De Choudhury","file":{"fid":"223975","name":"munmun portrait_horz.jpg","image_path":"\/sites\/default\/files\/images\/munmun%20portrait_horz.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/munmun%20portrait_horz.jpg","mime":"image\/jpeg","size":711876,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/munmun%20portrait_horz.jpg?itok=GwpgdV5R"}}},"media_ids":["587685"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182015","name":"cc-research; ic-ai-ml; ic-hcc; ic-social-computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"642143":{"#nid":"642143","#data":{"type":"news","title":"Q\u0026A: De\u0027Aira Bryant Discusses Her Experience Programming a Robot for the Movie Superintelligence","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EDe\u0026rsquo;Aira Bryant\u003C\/strong\u003E didn\u0026rsquo;t come to Georgia Tech to work in the movie industry. Her interests lie within the field of robotics, where she works on projects that will increase the quality of human life.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBeing in the heart of Atlanta, however, the burgeoning heart of the film industry, comes with a few perks. Last year, Bryant was able to take advantage of one when she was contacted by representatives from the production crew of \u003Cem\u003ESuperintelligence\u003C\/em\u003E. The movie stars Melissa McCarthy as a woman who must prove to an artificial intelligence that humanity is worth saving and was recently released on HBO Max.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor the movie, Bryant was asked to program a Nao, a humanoid robot she uses in the \u003Ca href=\u0022https:\/\/humanslab.ece.gatech.edu\/\u0022\u003EHuman-Automation Systems (HumAnS) Lab\u003C\/a\u003E run by her advisor, School of Interactive Computing Chair Ayanna Howard. Read about Bryant\u0026rsquo;s experience programming the biggest star on the set.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHow did this opportunity to work with \u003Cem\u003ESuperintelligence\u003C\/em\u003E come about, and what was the experience like?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe production team reached out to the College of Computing. 
They were interested in having a robot for a scene and needed someone who could program the Nao to match the scene they had written. They reached out to Dr. Howard because they knew she had that type of robot, and she reached out to me because I\u0026rsquo;m the person who does most of the customized programming for this particular robot. If there\u0026rsquo;s a script or movements or whatever, I\u0026rsquo;m the choreographer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt was exciting. I was like, \u0026ldquo;Oh my goodness, this is for a movie.\u0026rdquo; I had no idea what it was about, but I was just excited to be a part of it. They asked if their ideas were possible and the production team was like, \u0026ldquo;We don\u0026rsquo;t know what it can do, but we think it looks cool. Can you make it do this?\u0026rdquo; We talked on the phone, and then I went to work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHow long did you have to program it?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI had about a week to get it ready. I had this idea of what they wanted, and I just tried to program it as best as I could.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESo, tell me about the day of. What was it like being on set?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI took the robot to the Klaus Advanced Computing Building. They were filming in there. It was so exciting to see everything. I had to tell the robot to go on their cue, so I was sitting right behind the camera. I got to meet Melissa McCarthy and some of the other stars, and I got a few pictures with them that I\u0026rsquo;m excited to finally be able to share with everyone. Everyone was so welcoming and understanding that the robot needed some time. I like to say that the robot was the biggest superstar on the set. It had its moments where it was like, \u0026ldquo;I\u0026rsquo;m not ready yet. 
My joint isn\u0026rsquo;t quite ready to do this movement.\u0026rdquo; They were understanding and eager to learn. They wanted their own pictures with the robot and everything, and had their own questions that I was excited to answer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFor many people who aren\u0026rsquo;t roboticists or AI researchers, their first experience with robots is in mass media like movies or TV shows, and normally it\u0026rsquo;s some dystopian or disaster scenario. How seriously did you take that responsibility or opportunity to portray the lighter, more realistic side?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI think for a lot of people, robots \u0026ndash; especially these humanoid ones \u0026ndash; have been largely portrayed negatively. They focus on disaster cases that may never happen in the next 100 years, if ever. There hasn\u0026rsquo;t been a lot of mass media attention that focuses on more positive use cases. I take that very seriously in our work, just knowing that we focus on people, on children that can benefit from the technology and have it improve their quality of life. It\u0026rsquo;s important to show those cases to affect the narrative. But we also want to highlight the concerns that are just: things like bias and ethics of using robotics in certain domains. Those are real things that people are working to mitigate now, so we can bring people closer to what the field actually looks like by highlighting both.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEvery time I teach kids or teach a class, I start out by showing what robots can actually do. I show videos of them falling over or something like that to illustrate that those terminators or killer robots, that doesn\u0026rsquo;t happen right now. 
But there are some other issues that are real and current and pressing, and here\u0026rsquo;s how we address them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EBeing at Georgia Tech with movies filmed nearby has offered these kinds of neat opportunities. How neat is it to have this platform?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy friends think it\u0026rsquo;s so much cooler that I helped work on a movie that is going to be on HBO Max than for me to have some paper published at this really prestigious conference. The movie resonates with them more, so it\u0026rsquo;s an opportunity to have a connection. They can relate to the technology in a way that is natural to them and ask questions, and I can share more about robotics and my work. That\u0026rsquo;s how we get people interested in the field.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"De\u0027Aira Bryant programmed a robot for a scene in the movie Superintelligence. 
She discusses her experience in this Q\u0026A."}],"uid":"33939","created_gmt":"2020-12-15 23:09:43","changed_gmt":"2020-12-15 23:09:43","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-12-15T00:00:00-05:00","iso_date":"2020-12-15T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"642140":{"id":"642140","type":"image","title":"De\u0027Aira Bryant Superintelligence","body":null,"created":"1608072918","gmt_created":"2020-12-15 22:55:18","changed":"1608072918","gmt_changed":"2020-12-15 22:55:18","alt":"De\u0027Aira Bryant works on the set of the movie Superintelligence","file":{"fid":"243949","name":"BryantSuperintelligence2.jpg","image_path":"\/sites\/default\/files\/images\/BryantSuperintelligence2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/BryantSuperintelligence2.jpg","mime":"image\/jpeg","size":141730,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/BryantSuperintelligence2.jpg?itok=G6SL6u0X"}}},"media_ids":["642140"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182940","name":"cc-research; ic-ai-ml; ic-robotics; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"642142":{"#nid":"642142","#data":{"type":"news","title":"Sehoon Ha Part of $500k Grant to Make Safer, More Deployable Robots","body":[{"value":"\u003Cp\u003ESafety is arguably the biggest barrier to large-scale deployability of humanoid assistive robots.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELarge, heavy, and with the potential to suddenly fall over all mean that the risk to humans has remained too high to place this technology in homes, hospitals, retail spaces, or care facilities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2016, however, researchers at UCLA posed a solution: What if we made robots that just couldn\u0026rsquo;t fall down? Now, researchers at Georgia Tech, in collaboration with UCLA and the University of Southern California, are working to develop a new class of locomotion systems that could enable this technology to become a larger part of our daily lives.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We have lots of robots,\u0026rdquo; said Sehoon Ha, an assistant professor in Georgia Tech\u0026rsquo;s School of Interactive Computing and a co-principal investigator on the project. \u0026ldquo;But they aren\u0026rsquo;t in our house or in our stores. It\u0026rsquo;s mainly because of safety. I have a young daughter. I wouldn\u0026rsquo;t be comfortable with a full-sized humanoid robot in my house.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPreviously, UCLA developed a new class of robots called \u0026ldquo;buoyancy-assisted robots.\u0026rdquo; Instead of the human-like hardware that was bulky, heavy, and subject to the pitfalls of gravity, these legged robots remained erect thanks to a body made of helium balloons.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Even though there is some mechanical or motor error, it never falls,\u0026rdquo; Ha said. 
\u0026ldquo;It never breaks. It\u0026rsquo;s super light. Even if it might collide with you, it doesn\u0026rsquo;t fall and it can\u0026rsquo;t hurt you.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECreating a new class of locomotion systems poses a couple of challenges: designing new hardware that is cheap and safe and developing an algorithm that supports locomotion and collaboration. This grant will support development of novel frameworks that address a fundamentally new family of legged robots and empower them with reliable locomotion skills.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The main philosophy is to deploy the reinforcement learning on real hardware,\u0026rdquo; Ha said. \u0026ldquo;This buoyancy-assisted robot is subject to a relatively larger magnitude of drag forces. It\u0026rsquo;s hard to simulate it. There\u0026rsquo;s a discrepancy between simulation and the real world. We want to collect real-world experience and limit the reality gap.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe technology could help carry out a search and rescue in a disaster relief zone or answer a question in a retail space. The new project, funded by a $500,000 grant from the National Science Foundation\u0026rsquo;s National Robotics Initiative, will help create new locomotion control systems using reinforcement learning to improve the state of this technology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlready cheaper than their bulkier counterparts, these robots could be as inexpensive as a couple hundred dollars produced at scale, Ha said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Now you might imagine a scenario where you could drop 1,000 of these into a disaster area to carry out search and rescue missions,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe grant runs for four years, and research from the project will be open-source to encourage additional collaboration. 
The grant will also support a competition for middle and high school students using the low-cost platforms to foster student interest in the field.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Researchers at Georgia Tech, in collaboration with UCLA and the University of Southern California, are working to develop a new class of locomotion systems that could enable buoyancy-assisted robots to become a larger part of our daily lives."}],"uid":"33939","created_gmt":"2020-12-15 23:02:58","changed_gmt":"2020-12-15 23:02:58","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-12-15T00:00:00-05:00","iso_date":"2020-12-15T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"642141":{"id":"642141","type":"image","title":"Sehoon Ha","body":null,"created":"1608073322","gmt_created":"2020-12-15 23:02:02","changed":"1608073322","gmt_changed":"2020-12-15 23:02:02","alt":"Sehoon Ha","file":{"fid":"243950","name":"sehoon.jpg","image_path":"\/sites\/default\/files\/images\/sehoon.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/sehoon.jpg","mime":"image\/jpeg","size":542864,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/sehoon.jpg?itok=95iLKDqy"}}},"media_ids":["642141"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181920","name":"cc-research; ic-ai-ml; ic-robotics"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"641437":{"#nid":"641437","#data":{"type":"news","title":"New Grant Helps Researchers Bring Cybersecurity into the Physical World","body":[{"value":"\u003Cp\u003EImagine if you could physically feel a threat to your digital security \u0026ndash; perhaps a vibration on your wrist to alert you to nearby danger. What kinds of precautions would you take if you felt these digital threats the same way you felt those of the physical world?\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELike carrying a can of pepper spray when walking down a dark alleyway \u0026ndash; or avoiding the alleyway altogether \u0026ndash; a new project out of Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) aims to connect this abstract world of cybersecurity and privacy with concrete physical environments to promote better security behavior.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the real world, we have these corporeal sensations that give us cues on how to act,\u0026rdquo; said IC Assistant Professor \u003Cstrong\u003ESauvik Das\u003C\/strong\u003E, the principal investigator on the project. \u0026ldquo;If you feel a cold breeze on your cheek, you may decide to wear a scarf. If you are walking down a dark alleyway, you may become more alert and aware of your surroundings. 
It\u0026rsquo;s a different story in the present state of cybersecurity and privacy.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat current state is mostly limited to a warning when you\u0026rsquo;re leaving a secure network on your computer or a pop-up box that might caution against proceeding to a specific website. But what about the digital threats we face while going about our daily routines, perusing the internet on our phones or walking through a crowded airport?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are no corporeal sensory perception cues that indicate what is threatening or worthy of our attention. Similarly, we don\u0026rsquo;t have affordances that allow us to manipulate digital interfaces in ways that will better protect us against these threats that we find salient.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That\u0026rsquo;s the idea here,\u0026rdquo; Das said. \u0026ldquo;We want to solve this abstraction problem by physically alerting people to threats and giving them means to defend against them.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project presents three solutions to the digital abstraction problem \u0026ndash; Spidey Sense, Bit Whisperer, and Horcrux. 
Each aims to solve a specific branch of the problem: alerting you to threats, giving you more effective means to defend against threats, and providing means to better govern shared resources.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003ESpidey Sense\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ESpidey Sense uses a wristband that integrates with modern Apple Watches and can squeeze the wrist in programmable patterns to notify the wearer of perceived digital threats.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe idea is that people might not feel the threat through visual communication design the same way they might when walking down a dark alleyway at night.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;How can we similarly communicate that threat?\u0026rdquo; Das poses. \u0026ldquo;This field of affective haptics was a good bridge.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EBit Whisperer\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ESo, what do you do when you know threats exist? In the real world, one might intuit that to block entry into a room they could place a heavy object in front of a door or that to communicate secure information they might need to whisper. This project aims to present similar options for digital information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s like whispering through the digital world,\u0026rdquo; Das said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo transfer data from one smart device to another, one might use Bluetooth. But one can\u0026rsquo;t see the bits traveling through the air as they are communicated. Bit Whisperer uses physical objects, like a table, to communicate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing inaudible sound frequencies that can be generated through smartphones, data is transmitted through the physical surface from one device to other devices on the same surface. 
Anyone off the surface can\u0026rsquo;t receive the data without physically placing their device on it, making it much more challenging for would-be attackers.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EHorcrux\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EHorcrux is a more abstract project at present. It aims to assist individuals governing shared digital resources. Current state of the art provides point-and-click resources, but those make it impossible to multitask and challenging to specify access controls.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis project, like the others, aims to provide physical tools that can be manipulated by hand to make it easier to specify access.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe idea now is a mat where play pieces like figurines can represent people or resources that people own.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Think of a castle where you can move figurines through different accesses,\u0026rdquo; Das said. \u0026ldquo;These tangible interfaces allow for more interaction, more multitasking, and visible physical representations for what everyone has access to.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese projects are being funded by a $500,000 grant from the National Science Foundation. IC Professor \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E is a co-principal investigator on the grant, and Ph.D. 
student \u003Cstrong\u003EYoungwook Do\u003C\/strong\u003E is a key contributor.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A new project out of Georgia Tech\u2019s School of Interactive Computing (IC) aims to connect the abstract world of cybersecurity and privacy with concrete physical environments to promote better security behavior."}],"uid":"33939","created_gmt":"2020-11-19 15:38:31","changed_gmt":"2020-11-19 15:38:31","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-11-19T00:00:00-05:00","iso_date":"2020-11-19T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"626044":{"id":"626044","type":"image","title":"Cybersecurity stock image","body":null,"created":"1568223064","gmt_created":"2019-09-11 17:31:04","changed":"1568223064","gmt_changed":"2019-09-11 17:31:04","alt":"Stock photo of stylized padlock icons surrounded by a word cloud of information security terms.","file":{"fid":"238338","name":"Cybersecurity_stock_image.jpg","image_path":"\/sites\/default\/files\/images\/Cybersecurity_stock_image.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Cybersecurity_stock_image.jpg","mime":"image\/jpeg","size":110089,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Cybersecurity_stock_image.jpg?itok=0IXlXdwN"}}},"media_ids":["626044"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182941","name":"cc-research; ic-cybersecurity; ic-hcc"}],"core_research_areas":[{"id":"145171","name":"Cybersecurity"},{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"641381":{"#nid":"641381","#data":{"type":"news","title":"Need a Note Taker? This AI Can Help.","body":[{"value":"\u003Cp\u003EA new tool that uses artificial intelligence is bringing notetaking up to speed and may help future digital assistants ease fears of ever missing a meeting again.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s an age-old problem: We are inundated with informal forms of communication like phone calls, remote video conferences, text conversations on group messaging platforms like Slack or Microsoft Teams. Remembering key points of each discussion can at times be overwhelming, not to mention the stress caused by missing a meeting or seeing a couple hundred messages stack up while you were out for lunch.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis digital solution, developed by Georgia Tech researchers and being presented in a paper this week at the \u003Ca href=\u0022https:\/\/2020.emnlp.org\/\u0022\u003E2020 Conference on Empirical Methods in Natural Language Processing\u003C\/a\u003E, can assuage those concerns by generating summaries of informal conversations. 
Using a subset of machine learning called natural language processing, the method identifies conversational structure using particular keywords.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Think about informal conversational structure: It has an opening, problem statements, discussions, a conclusion,\u0026rdquo; said \u003Cstrong\u003EDiyi Yang\u003C\/strong\u003E, an assistant professor in the School of Interactive Computing and a co-author on the paper. \u0026ldquo;We want to mine those structures to teach the model what may be informative within the conversation for generating better summaries.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWords like any variation of \u0026ldquo;hello\u0026rdquo; or \u0026ldquo;good,\u0026rdquo; for example, might indicate that it is a greeting. Other action words likely indicate some kind of intention, while dates or times likely signal a discussion or conclusion about plans. Knowing this, the model can represent the unstructured conversation better to craft an accurate summary.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese types of summaries are more important now than ever. More individuals all over the world are working or attending school remotely. More discussions are being handled over the phone or video conferencing, and more plans are being made through applications like Microsoft Teams. Previous research on the subject has focused on formal content like books, papers, or news articles, but the existing body of work on informal language is relatively sparse.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is applicable now more than ever because of where we are,\u0026rdquo; Yang said. \u0026ldquo;There\u0026rsquo;s so much online and text conversation, and we have way too much information. We need help storing it in a shorter and more structured way. 
If you\u0026rsquo;re away from your laptop for 30 minutes, it\u0026rsquo;s important to be able to get a quick summary of what you missed.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChallenges still exist. There are problems with reference within the conversation, such as calling back to a previous discussion point later in a meeting. There are also typos, slang, repetition, interruptions, and changes in role or language that can interfere with the model\u0026rsquo;s ability to determine structure. These are items Yang and her collaborator are continuing to address moving forward.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is a great starting point,\u0026rdquo; Yang said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work is presented in the paper \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2010.01672.pdf\u0022\u003E\u003Cem\u003EMulti-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization\u003C\/em\u003E\u003C\/a\u003E. The paper is co-authored by Yang and \u003Cstrong\u003EJiaao Chen\u003C\/strong\u003E, a second-year Ph.D. 
student in the School of Interactive Computing.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A new AI tool that summarizes unstructured conversational language could help future digital assistants ease fears of ever missing a meeting again."}],"uid":"33939","created_gmt":"2020-11-17 17:03:48","changed_gmt":"2020-11-17 17:03:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-11-17T00:00:00-05:00","iso_date":"2020-11-17T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"641380":{"id":"641380","type":"image","title":"Taking Notes","body":null,"created":"1605631344","gmt_created":"2020-11-17 16:42:24","changed":"1605631344","gmt_changed":"2020-11-17 16:42:24","alt":"A stack of notes on a table","file":{"fid":"243729","name":"Note taking photo.jpg","image_path":"\/sites\/default\/files\/images\/Note%20taking%20photo.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Note%20taking%20photo.jpg","mime":"image\/jpeg","size":28007,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Note%20taking%20photo.jpg?itok=9unEOuC7"}}},"media_ids":["641380"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"640793":{"#nid":"640793","#data":{"type":"news","title":"Georgia Tech Researchers Contribute 13 Papers to Premier Visualization Conference","body":[{"value":"\u003Cp\u003EGeorgia Tech contributed to 13 papers and two workshops this week at \u003Ca href=\u0022http:\/\/ieeevis.org\/year\/2020\/welcome\u0022\u003EIEEE VIS 2020\u003C\/a\u003E, the premier forum for advances in theory, methods, and applications of visualization and visual analytics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference highlights research from universities, government, and industry around the world. It is comprised of three separate events: IEEE Visual Analytics Science and Technology (VAST), IEEE Information Visualization (InfoVis), and IEEE Scientific Visualization (SciVis). Like other conferences throughout the Covid-19 pandemic, VIS was held virtually.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s research was highlighted by one Best Paper Honorable Mention titled \u003Cem\u003EMapping Researchers with PeopleMap\u003C\/em\u003E. The paper \u0026ndash; authored by \u003Cstrong\u003EJon Saad-Falcon\u003C\/strong\u003E, \u003Cstrong\u003EOmar Shaikh\u003C\/strong\u003E, \u003Cstrong\u003EZijie J. Wang\u003C\/strong\u003E, \u003Cstrong\u003EAustin P. Wright\u003C\/strong\u003E, \u003Cstrong\u003ESasha Richardson\u003C\/strong\u003E, and \u003Cstrong\u003EPolo Chau\u003C\/strong\u003E \u0026ndash; presents an open-source interactive tool that uses natural language processing to create visual maps for researchers based on their research interests and publications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Discovering research expertise at universities can be a difficult task,\u0026rdquo; the paper contends. 
\u0026ldquo;Directories routinely become outdated, and few help in visually summarizing researchers\u0026rsquo; work or supporting the exploration of shared interests among researchers. This results in lost opportunities for both internal and external entities to discover new connections, nurture research collaboration, and explore the diversity of research.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper also received a VAST Poster Research Award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlso of note, new School of Computational Science \u0026amp; Engineering Chair \u003Cstrong\u003EHaesun Park\u003C\/strong\u003E received recognition for a 2010 IEEE VAST Paper. The paper received a Test of Time Award, recognizing it for continued contributions to the visual analytics and visualization community. The paper is titled \u003Cem\u003EiVisClassifier: An Interactive Visual Analytics System for Classification Based on Supervised Dimension Reduction\u003C\/em\u003E and co-authored by \u003Cstrong\u003EJaegul Choo\u003C\/strong\u003E, \u003Cstrong\u003EHanseung Lee\u003C\/strong\u003E, and \u003Cstrong\u003EJaeyeon Kihm\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing Ph.D. student \u003Cstrong\u003EEmily Wall\u003C\/strong\u003E, who is advised by Associate Professor \u003Cstrong\u003EAlex Endert\u003C\/strong\u003E, was also recognized with the VGTC Outstanding Dissertation Honorable Mention for her work \u003Cem\u003EDetecting and Mitigating Human Bias in Visual Analytics\u003C\/em\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;People are susceptible to a multitude of biases, including perceptual biases and illusions; cognitive biases like confirmation bias or anchoring bias; and social biases like racial or gender bias that are borne of cultural experiences and stereotypes,\u0026rdquo; Wall contends. 
\u0026ldquo;As humans are an integral part of data analysis and decision making in many domains, their biases can be injected into and even amplified by models and algorithms.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHer work aims to develop a better understanding of the role human bias plays in visual data analysis by defining bias, detecting bias, and mitigating bias.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExplore more about Georgia Tech\u0026rsquo;s contributions to IEEE VIS at the links below, or visit the \u003Ca href=\u0022http:\/\/vis.gatech.edu\/\u0022\u003EGeorgia Tech Visualization Lab\u003C\/a\u003E. You can follow the lab on Twitter at \u003Ca href=\u0022https:\/\/twitter.com\/GT_Vis\u0022\u003E@GT_Vis\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGeorgia Tech at IEEE VIS 2020\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EPapers\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2007.15832\u0022\u003ESafetyLens: Visual Data Analysis of Functional Safety of Vehicles (Arpit Narechania, Ahsan Qamar, and Alex Endert)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/nl4dv.github.io\/nl4dv\/\u0022\u003ENL4DV: A Toolkit for Generating Analytic Specifications for Data Visualization from Natural Language Queries (Arpit Narechania, Arjun Srinivasan, and John Stasko)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arjun010.github.io\/individual-projects\/databreeze.html\u0022\u003EInterweaving Multimodal Interaction with Flexible Unit Visualizations for Data Exploration (Arjun Srinivasan, Bongshin Lee, and John Stasko)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/terrancelaw.github.io\/publications\/data_insight_interviews_vis20.pdf\u0022\u003EWhat are Data Insights to Professional Visualization Users? 
(Po-Ming Law, Alex Endert, and John Stasko)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/terrancelaw.github.io\/publications\/auto_insights_vis20.pdf\u0022\u003ECharacterizing Automated Data Insights (Po-Ming Law, Alex Endert, and John Stasko)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2004.15004\u0022\u003ECNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization (Zijie J. Wang, Robert Turko, Omar Shaikh, Haekyu Park, Nilaksh Das, Fred Hohman, Minsuk Kahng, Duen Horng (Polo) Chau)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2009.02608\u0022\u003EBluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks (Nilaksh Das, Haekyu Park, Zijie J. Wang, Fred Hohman, Robert Firstman, Emily Rogers, Duen Horng (Polo) Chau)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/poloclub.github.io\/papers\/20-vis-ganlabeval.pdf\u0022\u003EHow Does Visualization Help People Learn Deep Learning? Evaluating GAN Lab with Observational Study and Log Analysis (Minsuk Kahng, Duen Horng (Polo) Chau)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2009.00091\u0022\u003EMapping Researchers with PeopleMap (Jon Saad-Falcon, Omar Shaikh, Zijie J. Wang, Austin P. 
Wright, Sasha Richardson, Duen Horng (Polo) Chau)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/gtvalab.github.io\/files\/legion.pdf\u0022\u003ELEGION: Visually compare modeling techniques for regression (Subhajit Das, Alex Endert)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/gtvalab.github.io\/files\/cava_dataaug.pdf\u0022\u003ECAVA: A Visual Analytics System for Exploratory Columnar Data Augmentation Using Knowledge Graphs (Dylan Cashman, Shenyu Xu, Subhajit Das, Florian Heimerl, Cong Liu, Shah Rukh Humayoun, Michael Gleicher, Alex Endert, Remco Chang)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003EA Comparative Analysis of Industry Human-AI Interaction Guidelines (Austin P. Wright, Zijie J. Wang, Haekyu Park, Grace Guo, Fabian Sperrle, Mennatallah El-Assady, Alex Endert, Daniel Keim, Duen Horng (Polo) Chau)\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/trexvis.github.io\/Workshop2020\/papers\/Coscia.pdf\u0022\u003EToward A Bias-Aware Future for Mixed Initiative Visual Analytics (Adam Coscia, Duen Horng (Polo) Chau, Alex Endert)\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERecognitions\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~hpark\/papers\/choo_vast10_v1.pdf\u0022\u003EiVisClassifier: an Interactive Visual Analytics System for Classification Based on Supervised Dimension Reduction (Jaegul Choo, Hanseung Lee, Jaeyeon Kihm and Haesun Park)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/smartech.gatech.edu\/handle\/1853\/63597\u0022\u003EDetecting and Mitigating Human Bias in Visual Analytics (Emily Wall (Advisor: Alex Endert))\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWorkshops\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EMoVIS \u0026#39;20 
(Organizers: Clio Andris, Somayeh Dodge, Alan MacEachren)\u003C\/li\u003E\r\n\t\u003Cli\u003EVISxAI \u0026#39;20 (Organizers: Adam Perer, Duen Horng (Polo) Chau, Fred Hohman, Hendrik Strobelt, Mennatallah El-Assady)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"IEEE VIS highlights research from universities, government, and industry around the world."}],"uid":"33939","created_gmt":"2020-10-30 04:41:57","changed_gmt":"2020-10-30 04:41:57","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-10-30T00:00:00-04:00","iso_date":"2020-10-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"640792":{"id":"640792","type":"image","title":"Georgia Tech at IEEE VIS 2020","body":null,"created":"1604032582","gmt_created":"2020-10-30 04:36:22","changed":"1604032582","gmt_changed":"2020-10-30 04:36:22","alt":"Georgia Tech at IEEE VIS 2020","file":{"fid":"243550","name":"Screen Shot 2020-10-30 at 12.34.13 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202020-10-30%20at%2012.34.13%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202020-10-30%20at%2012.34.13%20AM.png","mime":"image\/png","size":244701,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202020-10-30%20at%2012.34.13%20AM.png?itok=xIsvy28M"}}},"media_ids":["640792"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"186124","name":"cc-research; ic-ai-ml; ic-hcc; ic-social-computing; 
ic-visualization"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"640199":{"#nid":"640199","#data":{"type":"news","title":"Ivan Allen College of Liberal Arts and the College of Computing Launch New Ethics Center","body":[{"value":"\u003Cp\u003EBuilding on years of experience in research and education in ethics and technology, the College of Computing and the Ivan Allen College of Liberal Arts have launched the Ethics, Technology, and Human Interaction Center (ETHIC\u003Csup\u003Ex\u003C\/sup\u003E).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new Center \u0026mdash; pronounced \u0026ldquo;ethics\u0026rdquo; \u0026mdash; will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology in collaboration with communities, government, non-governmental organizations, and industry. The office of the Executive Vice President for Research provided significant funds over a three-year period to seed the Center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We must foster Georgia Tech\u0026rsquo;s strengths in ethics, responsible research, and the development of emerging technologies in collaborative ways,\u0026rdquo; said Raheem Beyah, Georgia Tech\u0026rsquo;s vice president for interdisciplinary research. 
\u0026ldquo;ETHIC\u003Csup\u003Ex\u003C\/sup\u003E will provide the necessary environment to support this work and Georgia Tech\u0026rsquo;s mission to advance technology and improve the human condition.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Colleges already have in-depth research and education experience addressing technology-related ethics questions. For instance, the School of Public Policy founded the Center for Ethics and Technology more than 12 years ago to foster a culture of critical inquiry and deliberation about technology-related ethical issues. Faculty in that Center research ethical issues in the design of emerging contact tracing technologies; design ethics, social justice theory, and criticism and their relationship to emerging technologies such as smart cities, self-driving cars, and smart assistants; and platforms for fostering reflection and self-correcting reasoning in teaching and deliberation. The College of Computing also has created thriving research and educational initiatives such as the Ethical AI professional development course and the Law, Policy, and Ethics Initiative for Machine Learning @ GATECH.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new Center will build on those strengths and position the Georgia Institute of Technology to become the leader in framing ethical concerns in technology, including fairness, accountability, transparency, social justice, and technological change.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003EAnticipating New Ethical Challenges\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;ETHIC\u003Csup\u003Ex\u003C\/sup\u003E will be a place for robust, multidisciplinary research and a place to engage in systematic ethical analyses,\u0026rdquo; said Kaye Husbands Fealing, dean of the Ivan Allen College of Liberal Arts and co-director of the new Center. 
\u0026ldquo;It also will be a place for communities, corporations, governments, technologists, educators, and others to discuss and find solutions to complex ethical issues in science and technology.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Center will conduct research in ethics and emerging technologies, frame ethical questions, and develop solutions that advance ethics in technology as well as social justice and equity. Interdisciplinary and community-based research also will be emphasized.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEducational initiatives will include investigating and designing curricula for ethics training that can be woven throughout students\u0026rsquo; educational journeys and for employees at affiliated companies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Responsibility is a core value of everything we do in the College of Computing at Georgia Tech. That means focusing on our communities and examining the impacts, both positive and negative, of our research and curricula,\u0026rdquo; said Charles Isbell, dean and John P. Imlay, Jr. chair of the College of Computing. \u0026ldquo;It means reaching across disciplines to collaborate with experts in other fields\u0026nbsp;who\u0026nbsp;can inform our own technological developments. We find solutions for tomorrow\u0026rsquo;s problems, which means we have to anticipate the new ethical challenges we will face. This Center will help us do that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch2\u003ENew Center Builds on Deep Experience\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EAyanna Howard, chair in the School of Interactive Computing, joins Husbands Fealing as co-director of the new Center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the School of Interactive Computing, we encourage all of our faculty and student researchers to think critically about the new challenges their research presents and offer strategies to mitigate any potential negative impact on society,\u0026rdquo; Howard said. 
\u0026ldquo;Good innovation isn\u0026rsquo;t just about developing new technologies; it\u0026rsquo;s about developing solutions to problems that can make the world a better, more equitable, and more inclusive place.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech launched the School of Interactive Computing in anticipation of the need for interdisciplinary research in computer science, liberal arts, and more. Faculty members examine diverse ethical challenges, including misinformation, content moderation, free speech on social platforms, data privacy and security, virtual reality, wearable computing devices, and robo-ethics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFaculty and students throughout the Ivan Allen College of Liberal Arts engage in interdisciplinary research collaborations on ethics and emerging technologies, including in areas such as engineering, the environment, bioethics, responsible innovation, research ethics, the \u003Cem\u003Eethical\u003C\/em\u003E\u0026nbsp;and political dimensions of design and technology, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the Ivan Allen College, careful consideration of the impacts of technology on people, and of people on technology, is a central part of our curriculum and values,\u0026rdquo; said Justin Biddle, an associate professor in the School of Public Policy, director of the Center for Ethics and Technology, and a member of the new Center\u0026rsquo;s leadership team. \u0026ldquo;With innovation today often outpacing our ability to understand its consequences, and widespread questions regarding the relations between technology, equity, and social justice, this kind of thinking is more important than ever.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFaculty in both Colleges also have initiated discussions on the social and ethical implications of emerging technologies\u0026nbsp;across campus and beyond. 
These include the \u003Ca href=\u0022https:\/\/ethics.gatech.edu\/techdebates\u0022\u003E\u003Cem\u003ETechDebates on Emerging Technologies\u003C\/em\u003E\u003C\/a\u003E\u003Cem\u003E, \u003C\/em\u003Ethe \u003Ca href=\u0022https:\/\/ethics.gatech.edu\/sparks-forum\u0022\u003ESparks Forum on Ethics and Engineering\u003C\/a\u003E, the Machine Learning@GT Seminar Series, and the \u003Ca href=\u0022http:\/\/techfutures.lmc.gatech.edu\/\u0022\u003EEthics and Technological Futures\u003C\/a\u003E series developed by Nassim Parvin and Susana Morris in the \u003Ca href=\u0022https:\/\/lmc.gatech.edu\u0022\u003ESchool of Literature, Media, and Communication\u003C\/a\u003E. Ellen Zegura, a professor in the School of Computer Science, also leads a Mozilla grant aimed at embedding ethics in computer science classes through role play.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u0026#39;Where the Best of Sciences and Humanities Meet\u0026#39;\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EDeven Desai, associate professor and area coordinator for Law and Ethics at Scheller College of Business, also will assume a key leadership role at ETHIC\u003Csup\u003Ex\u003C\/sup\u003E. He said the new Center will \u0026ldquo;build and deepen technology-related ethics scholarship and research across Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Scheller College\u0026rsquo;s focus on law and ethics is part of how we train future business leaders, the people who take innovation and bring it to market,\u0026rdquo; said Desai, who is also associate director for Law, Policy, and Ethics for Machine Learning at GA Tech (ML@GATECH), an interdisciplinary research center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;ETHICx will be a place where the best of science and humanities meet to challenge and push to find the unasked, important questions. 
In that friction and fun, the best questions about the problems we face and the best answer about how to solve them so that everyone can benefit will come out,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther members of the new Center\u0026rsquo;s key leadership team include Jason Borenstein, director of graduate research ethics programs in the School of Public Policy; Betsy DiSalvo, director of the human-centered computing Ph.D. program and associate professor in the School of Interactive Computing; Michael Hoffmann, a professor in the School of Public Policy; and Nassim Parvin, an associate professor in the School of Literature, Media, and Communication.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA launch event is planned for November, during Ethics Awareness Week, with a forum to identify key challenges in technology ethics. The Center will soon announce details.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information about ETHIC\u003Csup\u003Ex\u003C\/sup\u003E, contact Husbands Fealing at \u003Ca href=\u0022mailto:dean@gatech.edu\u0022\u003Edean@gatech.edu\u003C\/a\u003E or Howard at \u003Ca href=\u0022mailto:ah260@gatech.edu\u0022\u003Eah260@gatech.edu\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe new Center will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology in collaboration with communities, government, non-governmental organizations.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"The new Center will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology."}],"uid":"33939","created_gmt":"2020-10-14 15:01:00","changed_gmt":"2020-10-14 17:15:52","author":"David 
Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-10-13T00:00:00-04:00","iso_date":"2020-10-13T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"640176":{"id":"640176","type":"image","title":"ETHICx Center graphic","body":null,"created":"1602623629","gmt_created":"2020-10-13 21:13:49","changed":"1602623629","gmt_changed":"2020-10-13 21:13:49","alt":"","file":{"fid":"243345","name":"ETHICx graphic.jpg","image_path":"\/sites\/default\/files\/images\/ETHICx%20graphic.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ETHICx%20graphic.jpg","mime":"image\/jpeg","size":490407,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ETHICx%20graphic.jpg?itok=rW3XGy3i"}}},"media_ids":["640176"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"186032","name":"ETHICx"},{"id":"186033","name":"Ethics Technology and Human Interaction Center"},{"id":"1616","name":"Ivan Allen College of Liberal Arts"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39511","name":"Public Service, Leadership, and Policy"}],"news_room_topics":[{"id":"71871","name":"Campus and Community"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EMichael Pearson\u003Cbr \/\u003E\r\nmichael.pearson@iac.gatech.edu\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDavid Mitchell\u003Cbr \/\u003E\r\ndavid.mitchell@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["michael.pearson@iac.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"639092":{"#nid":"639092","#data":{"type":"news","title":"Georgia Tech Receives 
Google Grant to Study Impact of Pandemic Information Seeking on Vulnerable Populations","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E will receive $155,000 from \u003Ca href=\u0022https:\/\/ai.google\/social-good\/\u0022\u003EGoogle\u0026rsquo;s Covid-19 AI for Social Good\u003C\/a\u003E program to investigate patterns and impact of pandemic information-seeking amongst vulnerable populations, such as older adults, low-income households, and Black and Hispanic adults. These populations have experienced disproportionately high rates of Covid-19-related death, severe sickness, and life disruptions like job loss.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFactors like higher rates of underlying health problems, reduced access to health care, and structural inequities shape access to critical resources. These same populations, however, also often have less access to the types of online information designed to improve health outcomes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis project, led by principal investigator \u003Cstrong\u003EAndrea Grimes Parker\u003C\/strong\u003E, an associate professor in the \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E\u0026nbsp;and member of the \u003Ca href=\u0022http:\/\/ipat.gatech.edu\u0022\u003EInstitute for People and Technology\u003C\/a\u003E, will investigate how vulnerable and marginalized populations use technology for information seeking during the Covid-19 pandemic, as well as the impact of information exposure on their psychological wellbeing over time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The Covid-19 pandemic has brought further attention to systemic disparities in health that have long existed in the United States,\u0026rdquo; Parker said. 
\u0026ldquo;Within a public health crisis, the information that people are exposed to has huge implications for how attitudes around the pandemic are shaped, how people respond, and thus the course of the pandemic.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our work will provide both qualitative and quantitative evidence of the particular ways in which Covid-19 information exposure is tied to outcomes such as mental health in those most vulnerable to Covid-19 mortality and morbidity.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers will examine this information exposure over time. Their\u0026nbsp;findings will help to shape recommendations for crisis information communication, particularly online, in the future. This work builds upon existing work by Parker and collaborators at Northeastern University.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EParker and colleagues Professors \u003Cstrong\u003EMiso Kim\u003C\/strong\u003E and \u003Cstrong\u003EJacqueline Griffin\u003C\/strong\u003E began their collaboration by investigating how well crisis apps \u0026ndash; mobile apps designed to provide help during emergency situations \u0026ndash; support older adults. This work was published at the 2020 ACM Conference on Human Factors in Computing Systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen the pandemic began, they expanded their focus to additional groups vulnerable to poor health, such as low-income and racial and ethnic minority populations. The team, in collaboration with Professor \u003Cstrong\u003EStacy Marsella\u003C\/strong\u003E, also expanded their focus beyond crisis apps, designing a survey to investigate information-seeking practices in vulnerable populations amid the pandemic.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis survey has been distributed to over 600 individuals in Massachusetts and Georgia to date. 
Parker\u0026rsquo;s new Google funding will enable the team to iterate on and expand the dissemination of this survey, conduct longitudinal analyses, and complement the quantitative analysis with a qualitative component that will help unpack the nuances behind information-seeking practices and resulting Covid-19 attitudes, behaviors, and mental health outcomes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis funding is part of Google.org\u0026rsquo;s $100 million commitment to Covid-19 relief efforts.\u0026nbsp;Organizations receiving funds were selected through a competitive review. Funding focus areas include health equity, disease spread monitoring and forecasting, frontline health worker support, secondary public health effects, and privacy-preserving contact tracing efforts.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Populations including older adults, low-income households, and Black and Hispanic adults have disproportionately high fatality rates, as well as less access to critical pandemic information."}],"uid":"33939","created_gmt":"2020-09-14 19:46:37","changed_gmt":"2020-09-14 19:46:37","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-09-14T00:00:00-04:00","iso_date":"2020-09-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"639090":{"id":"639090","type":"image","title":"Covid-19 Google Grant","body":null,"created":"1600112099","gmt_created":"2020-09-14 19:34:59","changed":"1600112099","gmt_changed":"2020-09-14 19:34:59","alt":"Two women wearing masks during Covid-19 
pandemic","file":{"fid":"242990","name":"coronavirus-4981906_1920.jpg","image_path":"\/sites\/default\/files\/images\/coronavirus-4981906_1920.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/coronavirus-4981906_1920.jpg","mime":"image\/jpeg","size":134084,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/coronavirus-4981906_1920.jpg?itok=UdfavL2O"}}},"media_ids":["639090"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"184821","name":"cc-research; ic-hcc; ic-ai-ml; COVID-19"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"639077":{"#nid":"639077","#data":{"type":"news","title":"Georgia Tech Part of $5 Million Grant to Develop AI Tech Supporting Individuals With Autism Spectrum Disorder in the Workplace","body":[{"value":"\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/nsf.gov\u0022\u003ENational Science Foundation\u003C\/a\u003E has awarded a $5 million grant to a multi-university team of researchers that includes \u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E to create novel artificial intelligence technology that trains and supports individuals with Autism Spectrum Disorder (ASD) in the 
workplace.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe investment follows a successful $1 million, nine-month pilot grant to the same team, which also includes Yale University, Cornell University, Vanderbilt University, and the Vanderbilt University Medical Center. Georgia Tech\u0026rsquo;s portion of the grant is $500,000.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELed by co-principal investigator Professor \u003Cstrong\u003EJim Rehg\u003C\/strong\u003E of the \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, Georgia Tech will develop methods for assessing nonverbal communication behaviors during face-to-face social interactions such as job interviews.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our innovative approach uses an unobtrusive wearable camera to record social behaviors, which are then analyzed using computer vision and deep learning models,\u0026rdquo; Rehg said. \u0026ldquo;Our automated analysis will allow job seekers to get feedback on their communication skills as part of our team\u0026rsquo;s integrated approach to job interview coaching.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project, which is part of the NSF\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.nsf.gov\/od\/oia\/convergence-accelerator\/\u0022\u003EConvergence Accelerator\u003C\/a\u003E program, addresses an underutilized U.S. 
talent pool that poses a \u0026ldquo;critical but overlooked public health and economic challenge: how to include individuals with ASD\u0026rdquo; in the workforce, according to Vanderbilt Professor \u003Cstrong\u003ENilanjan Sarkar\u003C\/strong\u003E, who is leading the project team.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EConsider:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EOne in 54 people in the United States has ASD;\u003C\/li\u003E\r\n\t\u003Cli\u003EEach year 70,000 young adults with ASD leave high school and face grim employment prospects;\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EMore than 8 in 10 adults with ASD are either unemployed or underemployed, a significantly higher rate than adults with other developmental disabilities;\u003C\/li\u003E\r\n\t\u003Cli\u003EThe estimated lifetime cost of supporting an individual with ASD and limited employment prospects is $3.2 million;\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EThe total estimated cost of caring for Americans with ASD was $268 billion in 2015 and is projected to grow to $461 billion in 2025;\u003C\/li\u003E\r\n\t\u003Cli\u003EAn estimated $50,000 per person per year could be contributed back into society when individuals with ASD are employed.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We want to harness the power of AI, stakeholder engagement and convergent research to include neurodiverse individuals in the 21\u003Csup\u003Est\u003C\/sup\u003E century workforce,\u0026rdquo; Sarkar said. \u0026ldquo;We feel that there is a big opportunity to turn great societal cost into great societal value.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor this project, organizational, clinical and implementation experts are integrated with engineering teams to pave the way for real-world impact. 
The multi-university, multi-disciplinary team already has commitments from major employers to license some of the technology and tools developed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers will address three themes:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EIndividualized assessment of unique abilities and appropriate job-matching\u003C\/li\u003E\r\n\t\u003Cli\u003ETailored understanding and ongoing support related to social communication and interaction challenges\u003C\/li\u003E\r\n\t\u003Cli\u003ETools to support job candidates, employees and employers\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EAlready, notable private-sector companies that employ people with ASD have committed to using at least one of the technologies developed under this program: Auticon, The Precisionists, Ernst \u0026amp; Young and SAP among them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETwo other companies, Floreo and Tipping Point Media, will make their existing VR modules available for adaptation to the program. Microsoft, which has a long-standing interest in hiring people with ASD, is involved as well and provided seed funding and access to cloud services for technology integration.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe five technologies can be used separately or as an integrated system, and the work has broader potential beyond ASD to expand employment access. In the U.S. 
alone, an estimated 50 million people have ASD, attention-deficit\/hyperactivity disorder, a learning disability, or other neurodiverse conditions.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech will develop methods for assessing nonverbal communication behaviors during face-to-face social interactions such as job interviews."}],"uid":"33939","created_gmt":"2020-09-14 17:52:01","changed_gmt":"2020-09-14 17:52:01","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-09-14T00:00:00-04:00","iso_date":"2020-09-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"590844":{"id":"590844","type":"image","title":"Child Study Lab Autism Research","body":null,"created":"1493061979","gmt_created":"2017-04-24 19:26:19","changed":"1493061979","gmt_changed":"2017-04-24 19:26:19","alt":"Lab coordinator Audrey Southerland, along with undergraduate assistants, leads data collection at the Child Study Lab.","file":{"fid":"225112","name":"Autism5.jpg","image_path":"\/sites\/default\/files\/images\/Autism5.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Autism5.jpg","mime":"image\/jpeg","size":329499,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Autism5.jpg?itok=a3RDfy3M"}}},"media_ids":["590844"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182941","name":"cc-research; ic-cybersecurity; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"638703":{"#nid":"638703","#data":{"type":"news","title":"Welcome New IC Faculty: Seven Join School from Variety of Research Areas","body":[{"value":"\u003Cp\u003EEach year, the School of Interactive Computing conducts a rigorous search for the brightest minds to carry forward its academic and research initiatives. This year, IC welcomes seven new faculty members to that mission. Take a quick glance at the new research\u0026nbsp;coming to the School in 2020.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESehoon Ha\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. in Computer Science, Georgia Tech 2015\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Robotics, Artificial Intelligence, Character Animation\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHa\u0026rsquo;s research lies at the intersection of computer graphics and robotics, including physics-based animation, deep reinforcement learning, and computational robot design. Specifically, he has published work that addresses the need for more intelligent control software in robotics to improve agility, robustness, efficiency, and safety. In the long term, he aims to develop robotic companions for the home, search-and-rescue robots for disaster recovery scenes, and custom medical surgery robots that are tailored to individual patients.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJennifer Kim\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. 
in Computer Science, University of Illinois, Urbana-Champaign 2019\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Human-Computer Interaction, Interactive Systems, Health Care\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKim\u0026rsquo;s research investigates and develops interactive systems as communication artifacts to address various health-related challenges such as financial burdens of medical costs, difficulties in understanding behaviors of people with neurological disorders, and online health misinformation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EChris Le Dantec\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. in Human-Centered Computing, Georgia Tech 2011\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Digital Media, Science and Technology Studies\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELe Dantec is interested in developing community-based design practices that support new forms of collective action through production and use of civic data. Specifically, his research has direct impact on how policy makers and citizens work together to address issues of community engagement, social justice, urban transportation, and development.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAndrea Grimes Parker\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. in Human-Centered Computing, Georgia Tech 2011\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Human-Computer Interaction, Computer Supported Cooperative Work, Health Informatics\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGrimes Parker designs and evaluates the impact of software tools that help people manage their health and wellness with a particular focus on equity. She studies racial, ethnic and economic health disparities, and the social context of health management. 
Through technology design, her research examines intrapersonal, social, cultural, and environmental factors that influence a person\u0026rsquo;s ability and desire to make healthy decisions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAlan Ritter\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. in Computer Science and Engineering, University of Washington 2013\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Natural Language Processing, Information Extraction, Machine Learning\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERitter\u0026rsquo;s research aims to solve challenging technical problems that can help machines learn to read vast quantities of text with minimal supervision. Past work included a system that reads millions of tweets for mentions of new software vulnerabilities. This tool spotted critical security flaws in software. He is also interested in data-driven dialogue agents that can converse with people more naturally.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESashank Varma\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. in Cognitive Studies, Vanderbilt University 2006\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Abstract Mathematical Thinking, Memory Systems Supporting Language Processing, Computational Models of High-Level Cognition\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVarma\u0026rsquo;s research investigates complex forms of cognition that are uniquely human from multiple disciplinary perspectives. Primarily, this involves mathematical cognition, where he investigates how people use symbol systems to understand abstract mathematical concepts, how they develop intuitions about and insights into mathematics, and the mental mechanisms shared between reasoning and algorithmic thinking.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWei Xu\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. 
in Computer Science, New York University 2014\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch Interests: Natural Language Processing, Machine Learning, Social Media\u003C\/p\u003E\r\n\r\n\u003Cp\u003EXu\u0026rsquo;s recent work focuses on methods to understand the varied expressions in human language and to generate paraphrases for applications, such as reading and writing assistive technology. She has also worked on crowdsourcing, summarization, and information extraction for user-generated data, such as Twitter and StackOverflow.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Take a quick glance at the new research\u00a0coming to the School of Interactive Computing in 2020."}],"uid":"33939","created_gmt":"2020-09-02 17:13:29","changed_gmt":"2020-09-02 17:13:29","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-09-02T00:00:00-04:00","iso_date":"2020-09-02T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"638702":{"id":"638702","type":"image","title":"New IC faculty 2020","body":null,"created":"1599066470","gmt_created":"2020-09-02 17:07:50","changed":"1599066470","gmt_changed":"2020-09-02 17:07:50","alt":"Sashank Varma, Sehoon Ha, Chris Le Dantec, Wei Xu, Alan Ritter, Andrea Grimes Parker, Jennifer Kim","file":{"fid":"242860","name":"New IC Faculty 2020.png","image_path":"\/sites\/default\/files\/images\/New%20IC%20Faculty%202020.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/New%20IC%20Faculty%202020.png","mime":"image\/png","size":1039808,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/New%20IC%20Faculty%202020.png?itok=EBKlSdn8"}}},"media_ids":["638702"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU 
Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181216","name":"cc-research"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"638689":{"#nid":"638689","#data":{"type":"news","title":"IC Student Ceara Byrne Trades Dog Toys for Masks to Chip in on Covid Relief","body":[{"value":"\u003Cp\u003EWhat do dog toys have to do with Covid-19 pandemic relief? Leave it to a Georgia Tech student to find a connection.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing Ph.D. student \u003Cstrong\u003ECeara Byrne\u003C\/strong\u003E, whose primary research focuses on instrumenting dog toys with various sensors to measure canine behavior, found a way to contribute to the cause when she was approached by a fellow Georgia Tech student for assistance in 3D printing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELee Whitcher\u003C\/strong\u003E, a Ph.D. student in the \u003Ca href=\u0022https:\/\/www.ae.gatech.edu\/\u0022\u003EDaniel Guggenheim School of Aerospace Engineering\u003C\/a\u003E, had already joined colleagues from the \u003Ca href=\u0022https:\/\/gtri.gatech.edu\/\u0022\u003EGeorgia Tech Research Institute\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/www.me.gatech.edu\/\u0022\u003EGeorge W. 
Woodruff School of Mechanical Engineering\u003C\/a\u003E to design and manufacture personal protective equipment (PPE) like face shields to supplement the available supplies in the Atlanta area.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work from GTRI and ME assisted hospitals, and Whitcher\u0026rsquo;s effort \u0026ndash; a non-profit called \u003Ca href=\u0022http:\/\/AtlantaBeatsCOVID.com\u0022\u003EAtlanta Beats COVID\u003C\/a\u003E \u0026ndash; aimed to design masks and ventilators that non-engineers could produce wherever they are needed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo do that, Whitcher and his partners needed a 3D printer that could print the negative molds for the masks. Georgia Tech\u0026rsquo;s \u003Ca href=\u0022https:\/\/gvu.gatech.edu\/\u0022\u003EGVU\u003C\/a\u003E Prototyping Lab in the Technology Square Research Building had just what they needed. So did Byrne.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EByrne has been using the Prototyping Lab\u0026rsquo;s printer for a while now to develop negatives of the silicone dog toys she uses in her research. Byrne\u0026rsquo;s work involves studying behavior in canines to understand temperament for service animals.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was inspired by a friend from high school who grew up on a ranch,\u0026rdquo; Byrne said. \u0026ldquo;She and I got involved in 4-H. When I came back for a master\u0026rsquo;s degree, I started working with \u003Cstrong\u003EThad Starner\u003C\/strong\u003E and \u003Cstrong\u003EMelody Jackson\u003C\/strong\u003E on the FIDO project. I started noticing these aspects of the data that were reflective of dog temperament like drive and how they tackle activities. It really interested me.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPart of the research was to find good ways to measure that temperament beyond just visual observation. 
One solution was to place sensors into toys to take measurements as the dog played with them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;ve used the Prototyping Lab to 3D print my negative molds so that I can silicone cast the positives like balls and tug toys,\u0026rdquo; Byrne said. \u0026ldquo;It\u0026rsquo;s a long process of finding the right silicones, materials, hardness. For the toys, I went through three or four different molds to find the right way to actually cast the parts. It was a lot of experimenting.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat experimentation made her uniquely prepared to chip in with Whitcher\u0026rsquo;s project when Covid-19 hit. Looking for a way to develop the right mold for easy do-it-yourself mask production, Whitcher turned to Byrne for assistance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There are a number of aspects to it,\u0026rdquo; Byrne said. \u0026ldquo;How do you de-gas some of the silicone? When you have a mask, you can\u0026rsquo;t have the bubbles in the mold because you need a seal. How do you do it with the vacuum? If there\u0026rsquo;s no vacuum available, what are some easier ways? How do we make these negatives properly, and how many can you cast at once? What are the environmental aspects when you do it from home?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese are all questions Byrne has had to explore when it comes to her dog toys. The experience proved useful in mask production, as well.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EByrne was happy to get involved in pandemic relief assistance. She has brothers and sisters-in-law who are doctors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;They\u0026rsquo;ve been amazing in helping around the community,\u0026rdquo; she said. \u0026ldquo;My brother is making masks, which I think is fascinating. He\u0026rsquo;s a radiation oncologist and has built respiratory masks with the Pancreatic Cancer Foundation. 
So, I wanted to help out in any way that I could, as well.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBeing at Georgia Tech, she said, made the collaboration a natural occurrence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That\u0026rsquo;s what makes Georgia Tech unique, right?\u0026rdquo; she said. \u0026ldquo;We can collaborate across these disciplines that maybe don\u0026rsquo;t connect to each other on the surface.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERead more about the relief effort, how to request PPE, and how to get involved at \u003Ca href=\u0022http:\/\/AtlantaBeatsCOVID.com\u0022\u003EAtlantaBeatsCOVID.com\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Byrne, whose work uses a 3D printer to make dog toys, is using her expertise to help in mask production."}],"uid":"33939","created_gmt":"2020-09-01 22:53:17","changed_gmt":"2020-09-01 22:53:17","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-09-01T00:00:00-04:00","iso_date":"2020-09-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"638688":{"id":"638688","type":"image","title":"Ceara Byrne","body":null,"created":"1598997204","gmt_created":"2020-09-01 21:53:24","changed":"1598997204","gmt_changed":"2020-09-01 21:53:24","alt":"ceara byrne","file":{"fid":"242857","name":"heart-innovation.jpg","image_path":"\/sites\/default\/files\/images\/heart-innovation.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/heart-innovation.jpg","mime":"image\/jpeg","size":20656,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/heart-innovation.jpg?itok=orZbCYDG"}}},"media_ids":["638688"],"related_links":[{"url":"https:\/\/ae.gatech.edu\/news\/2020\/04\/what-engineers-do-crisis","title":"What Engineers Do in a 
Crisis"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"185769","name":"cc-research; ic-hcc; COVID-19"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"637711":{"#nid":"637711","#data":{"type":"news","title":"Two IC Grads Earn Sigma Xi Best Ph.D. Thesis Awards","body":[{"value":"\u003Cp\u003ERecent Georgia Tech Ph.D. graduates \u003Cstrong\u003ECaitlyn Seim\u003C\/strong\u003E and \u003Cstrong\u003EAishwarya Agrawal\u003C\/strong\u003E, both from the School of Interactive Computing, were awarded the 2020 Sigma Xi Best Ph.D. Thesis Award. They were two of just 10 Ph.D. students at Georgia Tech recognized with the honor.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim\u0026rsquo;s thesis, titled \u003Cem\u003EWearable Vibrotactile Stimulation: How Passive Stimulation Can Train and Rehabilitate\u003C\/em\u003E, presents a technique in which a vibrating wearable device is used to retrain motor function following debilitating occurrences of spinal fracture or stroke. 
Now a postdoc at Stanford University and a fellow with the National Institutes of Health, Seim is currently working with stroke survivors to develop accessible and functional wearable devices to reduce physical disability in both the upper and lower limbs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Lately, I have also developed new mechanical tools to assess hand and arm function when there are no quantitative clinical tests available,\u0026rdquo; Seim said. \u0026ldquo;I plan to continue research on wearable and ubiquitous systems for health, accessibility, and training.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn Agrawal\u0026rsquo;s thesis, titled \u003Cem\u003EVisual Question Answering and Beyond\u003C\/em\u003E, she explores a multi-modal artificial intelligence task called visual question answering. In this task, given an image and natural language question about it, a machine is programmed to automatically produce an accurate natural language answer. The applications of VQA include aiding visually impaired users in understanding their surroundings, aiding analysts in examining large quantities of surveillance data, teaching children through interactive demos, interacting with personal AI assistants, and making visual social media content more accessible.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENow at DeepMind and soon to be an assistant professor at the University of Montreal and Mila, an AI research institute, Agrawal intends to equip current VQA systems with better skills to move toward artificial general intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the long term, I am excited about science fiction becoming reality, when we all have smart virtual assistants that can see and talk and serve as an aid to visually impaired users,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe eight other recipients of the Georgia Tech Sigma Xi Best Ph.D. 
Thesis Award were Mingue Kim (ECE), Ming Zhao (Chemistry), Andres Caballero (BME), Ke (Chris) Liu (CEE), Monica McNerney (ChBE), Chris Sugino (ME), Hamid Reza Seyf (ME), and Eric Tervo (ME).\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"They were two of just 10 Ph.D. students at Georgia Tech recognized with the honor."}],"uid":"33939","created_gmt":"2020-08-10 13:46:02","changed_gmt":"2020-08-10 13:46:02","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-08-10T00:00:00-04:00","iso_date":"2020-08-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"637710":{"id":"637710","type":"image","title":"Aishwarya Agrawal and Caitlyn Seim","body":null,"created":"1597067128","gmt_created":"2020-08-10 13:45:28","changed":"1597067128","gmt_changed":"2020-08-10 13:45:28","alt":"Aishwarya Agrawal and Caitlyn Seim","file":{"fid":"242547","name":"Personal Vlog YouTube Thumbnail.png","image_path":"\/sites\/default\/files\/images\/Personal%20Vlog%20YouTube%20Thumbnail.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Personal%20Vlog%20YouTube%20Thumbnail.png","mime":"image\/png","size":1057627,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Personal%20Vlog%20YouTube%20Thumbnail.png?itok=SUD_F-qp"}}},"media_ids":["637710"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"637122":{"#nid":"637122","#data":{"type":"news","title":"Georgia Tech, 6 Collaborators Receive $5.9 Million NIH Grant for a National Center in AI-based mHealth Research","body":[{"value":"\u003Cp\u003EGeorgia Tech researchers will develop more effective and personalized treatment approaches for chronic health conditions under a new grant from the \u003Ca href=\u0022http:\/\/nih.gov\u0022\u003ENational Institutes of Health\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe NIH is issuing $5.9 million in funding for a new national biomedical technology\u0026nbsp;resource center (BTRC), called the mHealth Center for Discovery, Optimization \u0026amp; Translation of Temporally-Precise Interventions (mDOT). Georgia Tech, one of seven collaborators on the project, will receive $500,000, and mDOT\u0026nbsp;will be headquartered at the MD2K Center of Excellence at The University of Memphis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the biggest drivers of the nation\u0026rsquo;s rising healthcare spending is providing care for patients with chronic diseases, many of which are linked to daily behaviors such as dietary choices, sedentary behavior, stress, and addiction. The mDOT Center will be a new national technology resource for improving people\u0026rsquo;s health and wellness. It will conduct cutting-edge AI research to produce easily deployable wearables, apps for wearables and smartphones, and a companion cloud system. 
mDOT\u0026rsquo;s innovative technology will enable patients to initiate and sustain the healthy lifestyle choices necessary to prevent and\/or successfully manage the growing burden of multiple chronic conditions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELed by \u003Cstrong\u003EJim Rehg\u003C\/strong\u003E, a Professor in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, Georgia Tech\u0026rsquo;s project will focus on analyzing streams of biomarker data to enable the development of more effective, personalized treatment approaches for health behaviors, such as smoking and physical activity, that drive chronic conditions. To achieve this, the team will develop machine learning methods that can discover important risk factors from sensor data and identify effective intervention targets.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Consider developing an intervention to help people who are trying to quit smoking by providing personalized strategies for managing risk factors that are known to precipitate relapse,\u0026rdquo; Rehg said. \u0026ldquo;Researchers and practitioners would use our tools to analyze biomarker data and characterize the patterns that lead to relapse and identify potential intervention targets.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe collaboration can then use the tools provided by the other teams to develop and tailor an effective personalized stress intervention and deliver it efficiently on a mobile device. \u003Cstrong\u003EOmer Inan\u003C\/strong\u003E, a faculty member in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ece.gatech.edu\u0022\u003ESchool of Electrical and Computer Engineering\u003C\/a\u003E, will also collaborate with the team, leveraging work on novel non-invasive biosensors that detect cardiovascular changes in heart failure. 
Working alongside the mDOT team will enhance the ability to develop and deploy interventions based on his novel wearable sensors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Researchers and industry innovators can leverage mDOT\u0026rsquo;s technological resources to create the next generation of mHealth technology that is highly personalized to each user, transforming people\u0026rsquo;s health and wellness,\u0026rdquo; said \u003Cstrong\u003ESantosh Kumar\u003C\/strong\u003E, the lead investigator of mDOT, who is the director of MD2K Center of Excellence and Lillian \u0026amp; Morrie Moss Chair of Excellence Professor of Computer Science at the University of Memphis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo ensure mDOT\u0026rsquo;s innovative technology can be used by scientists to solve real-world problems, mDOT will be working closely with over a dozen other federally-funded projects to engage in joint technology development, testing, and large-scale real-life deployment. To ensure that mDOT\u0026rsquo;s technological resources can fuel innovation in the health technology industry, the mDOT Center is establishing a new industry consortium to provide access to mDOT\u0026rsquo;s latest research and seek feedback to inform its ongoing research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe mDOT Center will be administered by the National Institute of Biomedical Imaging and Bioengineering (NIBIB).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The mDOT Center will be the first\u003Ca href=\u0022https:\/\/www.nibib.nih.gov\/research-funding\/biomedical-technology-resource-centers\u0022\u003E\u0026nbsp;BTRC\u003C\/a\u003E\u0026nbsp;focused on developing innovative mHealth technologies. 
It is positioned to empower scientists to discover, personalize, and deliver temporally-precise mHealth interventions and treatments, ensuring that health and wellness tools are delivered at the right moment, via the right personal device, and are optimized to have the most influence,\u0026rdquo; said mDOT\u0026rsquo;s program officer\u0026nbsp;\u003Cstrong\u003ETiffani Lash\u003C\/strong\u003E, director of the NIBIB program in Connected Health.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe multidisciplinary mDOT team consists of leading researchers in artificial intelligence (AI), mobile computing, wearable sensors, privacy, and precision medicine from Harvard University, Georgia Institute of Technology, The Ohio State University, The University of Massachusetts-Amherst, The University of Memphis (lead), The University of California at Los Angeles, and The University of California at San Francisco.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAbout MD2K:\u003C\/strong\u003E\u0026nbsp;The Center of Excellence for Mobile Sensor Data-to-Knowledge (MD2K), headquartered in the FedEx Institute of Technology at The University of Memphis, was established in 2014 by a grant from the National Institutes of Health (NIH) under its Big-Data-To-Knowledge (BD2K) initiative. It has developed mobile sensor big data technologies to improve health and wellness. 
MD2K\u0026rsquo;s open-source software platforms for smartphones and the cloud are used across the nation to conduct scientific studies.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The NIH is issuing $5.9 million in funding for a new national biomedical technology\u00a0resource center (BTRC), called the mHealth Center for Discovery, Optimization \u0026 Translation of Temporally-Precise Interventions (mDOT)."}],"uid":"33939","created_gmt":"2020-07-20 20:36:27","changed_gmt":"2020-07-20 20:36:27","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-07-20T00:00:00-04:00","iso_date":"2020-07-20T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"592632":{"id":"592632","type":"image","title":"Rehg-Jim","body":null,"created":"1497298524","gmt_created":"2017-06-12 20:15:24","changed":"1497298713","gmt_changed":"2017-06-12 20:18:33","alt":"James Rehg","file":{"fid":"225873","name":"Rehg-Jim250.jpg","image_path":"\/sites\/default\/files\/images\/Rehg-Jim250.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Rehg-Jim250.jpg","mime":"image\/jpeg","size":66316,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Rehg-Jim250.jpg?itok=Fzvp-y4u"}}},"media_ids":["592632"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"}],"categories":[],"keywords":[{"id":"182525","name":"cc-research; ic-hcc; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"636549":{"#nid":"636549","#data":{"type":"news","title":"C4G BLIS Update Improves Usability, Could Prove Useful in Fight Against Disease Outbreaks","body":[{"value":"\u003Cp\u003EAn update to a laboratory information system used in countries across Africa is improving usability and could prove critical in response to future disease outbreaks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2010, a group of researchers at \u003Ca href=\u0022http:\/\/gatech.edu\/\u0022\u003EGeorgia Tech\u003C\/a\u003E, the CDC, and Ministries of Health in several African countries launched an open-source laboratory management system as part of the \u003Ca href=\u0022https:\/\/ptc.gatech.edu\/computing-for-good-college-of-computing\u0022\u003ECollege of Computing\u0026rsquo;s Computing-for-Good\u003C\/a\u003E (C4G) initiative. Designed to be ultra-configurable to meet variable needs of labs across developing countries with minimal training for staff, it quickly grew to become one of C4G\u0026rsquo;s biggest success stories.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMore than 10 nations in sub-Saharan Africa adopted the program, called the \u003Ca href=\u0022http:\/\/blis.cc.gatech.edu\/\u0022\u003EBasic Laboratory Information System\u003C\/a\u003E (BLIS), giving areas with little or poor internet connectivity an easy-to-use system for many who had minimal computing experience. 
These countries, which had over 1 million patients at the time, were using paper-based systems to manage information on disease spread, local illnesses, and much more. As information and communications technologies have expanded in the area, however, many labs gained a standardized reporting system that could track prevalence rates of infections, slowing their spread.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut a lot can change in just 10 years. What was once designed for personal computing interfaces is now desired for a wide range of new platforms. Although laptops are still the device of choice for the majority of nurses \u0026ndash; 79.6 percent reported in a study of a Nigerian hospital \u0026ndash; smartphones and tablets have seen a steady increase. The coming years will include many more innovations that render even those obsolete.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs users in the global south aspire to embrace mobile computing in clinical settings, a flexible interface, adaptable to ever-changing applications, is needed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEnter: \u003Cstrong\u003EJung Wook Park\u003C\/strong\u003E and \u003Cstrong\u003EAditi Shah\u003C\/strong\u003E, a Ph.D. student in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) and former master\u0026rsquo;s student in the \u003Ca href=\u0022http:\/\/scs.gatech.edu\/\u0022\u003ESchool of Computer Science\u003C\/a\u003E (SCS), respectively. Along with SCS Professor \u003Cstrong\u003ESantosh Vempala\u003C\/strong\u003E and IC Principal Research Scientist \u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E, Park and Shah published research updating the current interface of C4G BLIS.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETheir updates focused on a handful of key areas, primarily mobile support. 
A responsive user interface framework supporting various screen sizes and resolutions was developed and evaluated by real users at hospitals in Africa currently using BLIS.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey compared the user experience of the current interface with that of the proposed interface on both desktops and smartphones, and found a significant improvement on both platforms.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When you bring in a new system, they may feel uncomfortable with it,\u0026rdquo; Park said. \u0026ldquo;If we didn\u0026rsquo;t do a great job, you might get the same score or lower at the beginning. Over time, we saw improvements of 32 and 34 percent on desktops and smartphones.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShah, now at Microsoft, offered plenty of help in the development of the system, and her experience with a visual impairment allowed her to provide perspective on accessibility, as well.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe implications of this research extend far beyond ease of use for nurses, however. Park identified a growing problem across the globe in health care: communication. As the current pandemic illustrates, viruses and diseases can spread quickly across many different populations. It isn\u0026rsquo;t sufficient to have just local data to mount an appropriate response; teams around the world must be able to rapidly share information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA system like C4G BLIS, with its improved user interface that can be used across multiple platforms depending on the local needs of various communities, can help that communication.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you notice something locally and maybe other areas of the country or continent notice something, how do you know if it is a pandemic?\u0026rdquo; Park posed. \u0026ldquo;You need to be able to share that information to manage the spread. 
By turning these local systems into a standardized cloud-based system, we can improve communication.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlready, Vempala said, he has heard reports from many labs that have adapted the flexible system to keep track of COVID-19 data in their communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper is titled \u003Cem\u003ERedesigning a Basic Laboratory Information System for the Global South\u003C\/em\u003E, and was presented at the International Telecommunication Union Kaleidoscope conference, earning a Best Paper award.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A system that has helped bring digital record keeping to hospitals across Africa has received a needed update for new platforms like smartphones and tablets."}],"uid":"33939","created_gmt":"2020-06-25 20:22:10","changed_gmt":"2020-06-25 20:22:10","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-25T00:00:00-04:00","iso_date":"2020-06-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636548":{"id":"636548","type":"image","title":"Jung Wook Park and Aditi Shah","body":null,"created":"1593116182","gmt_created":"2020-06-25 20:16:22","changed":"1593116182","gmt_changed":"2020-06-25 20:16:22","alt":"Jung Wook Park and Aditi Shah","file":{"fid":"242182","name":"Shah and Park Image.png","image_path":"\/sites\/default\/files\/images\/Shah%20and%20Park%20Image.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Shah%20and%20Park%20Image.png","mime":"image\/png","size":1093782,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Shah%20and%20Park%20Image.png?itok=qfxEI99m"}}},"media_ids":["636548"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU 
Center"},{"id":"431631","name":"OMS"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"184890","name":"cc-research; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"636275":{"#nid":"636275","#data":{"type":"news","title":"Robots Gain Ability to Master Object Manipulation with Context-Aware Technique","body":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers have developed one of the most robust research methods currently available to allow robots to correctly pick up common objects based on how they should be used.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhereas humans might touch a hot pan on a stove once and never forget the lesson, it\u0026rsquo;s more complex to train robots to apply such universal knowledge across different situations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new technique, called CAGE, or Context-Aware Grasping Engine, takes into consideration a range of factors \u0026ndash; such as the task the object will be used for, whether the object is full or empty, what it\u0026rsquo;s made of, and its shape \u0026ndash; so that a robot can learn the right way to grasp various objects in a given context. 
For example, it allows a robot to learn not to hold a hot cup of tea by its opening, or to handle a cooking pot differently based on whether it just left a stovetop or a cabinet.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In order for robots to effectively perform object manipulation, a broad sense of contexts, including object and task constraints, needs to be accounted for,\u0026rdquo; said\u0026nbsp;\u003Cstrong\u003EWeiyu Liu\u003C\/strong\u003E, lead researcher on CAGE and Ph.D. student in robotics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing CAGE, a robot is able to apply what it has learned to objects it\u0026rsquo;s never seen.\u0026nbsp;For example, if trained to grasp a spatula by the handle to make a scooping motion, the robot is able to generalize this knowledge and know to grasp a mug by the handle and use it to scoop \u0026mdash; if that was the programmed task \u0026mdash; even if the robot has never encountered a mug before.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research team, from the Robot Autonomy and Interactive Learning (RAIL) lab at Georgia Tech, validated their approach against three existing methods for teaching robots to handle objects. The team used a novel dataset consisting of 14,000 grasps for 44 objects, 7 tasks, and 6 different object states (e.g., objects contained solids or liquids, or were empty).\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECAGE outperformed the other methods in a simulation by statistically significant margins, according to the researchers, highlighting the model\u0026rsquo;s ability to collectively reason about contextual information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECAGE had an 86 percent success rate when averaged across tests looking at how well it identified context-aware grasps and whether the model could generalize to new objects a robot had not seen previously. 
Among the existing methods, the highest success rate averaged 69 percent.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELiu said that the team\u0026rsquo;s model can rank grasp \u0026ldquo;candidates\u0026rdquo; for various contexts, ensuring that more suitable candidates are ranked higher than less suitable ones given a context. So a robot might, for example, learn to hand a sharp metal knife to a person handle-first, but hand over a plastic knife in any orientation due to its relative safety.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA final experiment evaluated the effectiveness of CAGE using a Fetch robot equipped with a camera, a moving arm, and a parallel-jaw gripper. It performed almost perfectly in making a judgment on how to grasp objects for several distinct tasks, including scooping, pouring, lifting, and handing over an object, among others. In every case where there was no suitable grasp for the given situation, the robot made no attempt.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work, developed by Liu,\u0026nbsp;\u003Cstrong\u003EAngel Daruna\u003C\/strong\u003E, and\u0026nbsp;\u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E, was accepted into the\u0026nbsp;International Conference on Robotics and Automation, taking place virtually this June. The paper is titled\u0026nbsp;\u003Ca href=\u0022http:\/\/rail.gatech.edu\/assets\/files\/Liu_ICRA20.pdf\u0022 rel=\u0022noopener noreferrer\u0022 target=\u0022_blank\u0022\u003ECAGE: Context-Aware Grasping Engine\u003C\/a\u003E\u0026nbsp;and the research data is publicly available at\u0026nbsp;\u003Ca href=\u0022https:\/\/github.com\/wliu88\/rail_semantic_grasping\u0022\u003Ehttps:\/\/github.com\/wliu88\/rail_semantic_grasping\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EThis work is supported in part by NSF IIS 1564080, NSF GRFP DGE-1650044, and ONR N000141612835. 
Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.\u003C\/em\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers have developed one of the most robust research methods currently available to allow robots to correctly pick up common objects based on how they should be used.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Institute of Technology researchers have developed one of the most robust research methods currently available to allow robots to correctly pick up common objects based on how they should be used."}],"uid":"27592","created_gmt":"2020-06-16 21:01:58","changed_gmt":"2020-06-16 21:15:10","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-16T00:00:00-04:00","iso_date":"2020-06-16T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636277":{"id":"636277","type":"image","title":"One Step Closer to Domestic Robots | ICRA 2020","body":null,"created":"1592341649","gmt_created":"2020-06-16 21:07:29","changed":"1592341649","gmt_changed":"2020-06-16 21:07:29","alt":"","file":{"fid":"242101","name":"robot coffee graphic_mercury.png","image_path":"\/sites\/default\/files\/images\/robot%20coffee%20graphic_mercury.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/robot%20coffee%20graphic_mercury.png","mime":"image\/png","size":430675,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/robot%20coffee%20graphic_mercury.png?itok=JhRvbsMO"}},"636276":{"id":"636276","type":"image","title":"Sonia Chernova with robot arm","body":null,"created":"1592341598","gmt_created":"2020-06-16 21:06:38","changed":"1592341598","gmt_changed":"2020-06-16 
21:06:38","alt":"","file":{"fid":"242100","name":"sonia chernova.jpg","image_path":"\/sites\/default\/files\/images\/sonia%20chernova.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/sonia%20chernova.jpg","mime":"image\/jpeg","size":230142,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/sonia%20chernova.jpg?itok=KiFGCHXl"}}},"media_ids":["636277","636276"],"related_links":[{"url":"https:\/\/www.youtube.com\/watch?v=EnHUHQv8hr0\u0026feature=emb_logo","title":"CAGE: Context-Aware Grasping Engine"}],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu?subject=CAGE%20algorithm%3B%20ICRA%202020\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\nGVU Center and College of Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"636196":{"#nid":"636196","#data":{"type":"news","title":"ML@GT Faculty Members Will Discuss Projects Related to Covid-19 Relief During Virtual Panel","body":[{"value":"\u003Cp\u003EThe coronavirus (Covid-19) pandemic has wreaked havoc on the world, spurring researchers across disciplines into action to help humankind. Four researchers affiliated with the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E and one \u003Ca href=\u0022https:\/\/omscs.gatech.edu\/\u0022\u003EOnline Master of Science in Computer Science (OMSCS)\u003C\/a\u003E student examined different aspects of the virus\u0026rsquo; impact. 
From creating forecasting models to studying the psychological impact of the disease, these researchers are helping people understand the virus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOn June 24, ML@GT faculty members \u003Cstrong\u003ESrijan Kumar \u003C\/strong\u003E(School of Computational Science and Engineering), \u003Cstrong\u003EAditya Prakash \u003C\/strong\u003E(School of Computational Science and Engineering), \u003Cstrong\u003EMunmun De Choudhury \u003C\/strong\u003E(School of Interactive Computing), \u003Cstrong\u003ENicoleta Serban\u0026nbsp;\u003C\/strong\u003E(H. Milton Stewart School of Industrial and Systems Engineering), and OMSCS student \u003Cstrong\u003EKenneth Miller\u003C\/strong\u003E will participate in a virtual panel discussing their work. The panel will be moderated by ML@GT executive director \u003Cstrong\u003EIrfan Essa\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPanelists will give individual presentations before participating in a general question-and-answer segment with audience members.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EKumar and De Choudhury will share details of their work regarding the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/hg\/item\/635397\u0022\u003Epsychological impact of Covid-19\u003C\/a\u003E. 
Kumar will also discuss his work examining \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/635858\/predicting-hate-crimes-targeting-asian-americans-amid-covid-19-outbreak\u0022\u003Ehate and counter-hate messages on Twitter against Asian Americans\u003C\/a\u003E during the pandemic.\u003C\/li\u003E\r\n\t\u003Cli\u003EPrakash is a member of the Centers for Disease Control and Prevention\u0026rsquo;s (CDC) forecasting team and will share their \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/635849\/forecasting-covid-19-pandemic-united-states\u0022\u003Enew data-driven approach to disease forecasting\u003C\/a\u003E.\u003C\/li\u003E\r\n\t\u003Cli\u003ESerban\u0026rsquo;s presentation will focus on her work creating an \u003Ca href=\u0022https:\/\/www.georgiahealthnews.com\/2020\/05\/georgia-tech-model-predicts-spike-covid-cases-deaths\/\u0022\u003Eagent-based simulation\u0026nbsp;forecasting model\u003C\/a\u003E. This model captures the progression of the disease in an individual and in households, schools, communities, and workplaces.\u003C\/li\u003E\r\n\t\u003Cli\u003EA lawyer by day and OMSCS student by night, Miller participated in a Kaggle challenge using natural language processing and machine learning to \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/635081\/omscs-student-uses-machine-learning-help-understand-covid-19\u0022\u003Ehelp doctors and scientists read the most important studies\u003C\/a\u003E related to Covid-19.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EThe panel will take place virtually via a Bluejeans Event at 11 a.m. on June 24 and is open to the public. 
\u003Ca href=\u0022https:\/\/primetime.bluejeans.com\/a2m\/register\/sfpbpsgg\u0022\u003ERegistration is required\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020."}],"uid":"34773","created_gmt":"2020-06-12 13:40:53","changed_gmt":"2020-06-15 19:52:10","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-12T00:00:00-04:00","iso_date":"2020-06-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636195":{"id":"636195","type":"image","title":"Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.","body":null,"created":"1591969094","gmt_created":"2020-06-12 13:38:14","changed":"1591969094","gmt_changed":"2020-06-12 13:38:14","alt":"Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.","file":{"fid":"242073","name":"Using Machine Learning to Respond to Covid-19.png","image_path":"\/sites\/default\/files\/images\/Using%20Machine%20Learning%20to%20Respond%20to%20Covid-19.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Using%20Machine%20Learning%20to%20Respond%20to%20Covid-19.png","mime":"image\/png","size":504783,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Using%20Machine%20Learning%20to%20Respond%20to%20Covid-19.png?itok=HSZ2sXoG"}}},"media_ids":["636195"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive 
Computing"},{"id":"431631","name":"OMS"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"636173":{"#nid":"636173","#data":{"type":"news","title":"Research Conference Shows Social Challenges are Manifested, Magnified, and Mitigated Online at Pivotal Time for Nation","body":[{"value":"\u003Cp\u003EThe value of online mental health communities, how crisis events are described differently over time on social media, and refining how cyberbullying is detected and classified are major topics of research by Georgia Institute of Technology researchers at this week\u0026rsquo;s International Conference on Web and Social Media (ICWSM), taking place virtually. It was originally scheduled to be held in Atlanta near the Georgia Tech campus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOver 220 academics at the 14\u003Csup\u003Eth\u003C\/sup\u003E annual event are convening and discussing work that is especially relevant during a time of an ongoing global health crisis and social unrest that has taken root across the United States.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch in the conference proceedings include many topics directly addressing social ills and injustices that are magnified online as well as potential ways to better understand and mitigate them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeveral College of Computing faculty, current and former students, and postdoctoral researchers are part of the organizing committee. 
\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E (Interactive Computing) is serving as the general chair of the conference this year. Former Human-Centered Computing PhD student \u003Cstrong\u003EStevie Chancellor\u003C\/strong\u003E is workshop chair, former Computer Science PhD student \u003Cstrong\u003ETanushree Mitra\u003C\/strong\u003E is tutorials chair, current CS PhD student \u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E is web chair, and current postdoc \u003Cstrong\u003ETalayeh Aledavood\u003C\/strong\u003E is local\/social chair. CoC faculty \u003Cstrong\u003EDiyi Yang\u003C\/strong\u003E (Interactive Computing) and \u003Cstrong\u003ESrijan Kumar\u003C\/strong\u003E (Computational Science and Engineering) are data challenge chairs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the two keynotes at the conference is by IC faculty \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech has three papers in this year\u0026rsquo;s program:\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EA study in causal inference by CS PhD student \u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E that tests what leads to favorable psychosocial outcomes in mental health forums.\u003Cbr \/\u003E\r\n\t\u003Cem\u003ELink: \u003C\/em\u003E\u003Ca href=\u0022https:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7326\u0022\u003E\u003Cem\u003Ehttps:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7326\u003C\/em\u003E\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003EA paper by HCC PhD student \u003Cstrong\u003EIan Stewart\u003C\/strong\u003E, with advisors \u003Cstrong\u003EDiyi Yang\u003C\/strong\u003E and \u003Cstrong\u003EJacob Eisenstein\u003C\/strong\u003E, that intends to gather a sharper view of \u0026ldquo;collective attention\u0026rdquo; on social media. 
Looking at descriptive details for a crisis event, researchers find that the information needed to describe that event changes as time goes on.\u003Cbr \/\u003E\r\n\t\u003Cem\u003ELink: \u003C\/em\u003E\u003Ca href=\u0022https:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7331\u0022\u003E\u003Cem\u003Ehttps:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7331\u003C\/em\u003E\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003EA socially-inspired approach to detect cyberbullying online, by incoming PhD student \u003Cstrong\u003ECaleb Ziems\u003C\/strong\u003E. The paper proposes new criteria for cyberbullying (e.g. harmful intent) and finds that both text and social features help prediction. This paper has been recognized with an Honorable Mention Award, given to a total of eight papers this year.\u003Cbr \/\u003E\r\n\t\u003Cem\u003ELink: \u003C\/em\u003E\u003Ca href=\u0022https:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7345\u0022\u003E\u003Cem\u003Ehttps:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7345\u003C\/em\u003E\u003C\/a\u003E\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EFor details about more research and to read the organizing committee\u0026rsquo;s full statement on the commitment to Black Lives Matter, fighting structural racism, and promoting inclusion and equity, go to \u003Ca href=\u0022https:\/\/www.icwsm.org\/2020\/index.html\u0022\u003Ehttps:\/\/www.icwsm.org\/2020\/index.html\u003C\/a\u003E. 
In the wake of current events in the United States, the conference made 20 registration fee waivers available for Black scholars and individuals from other marginalized groups throughout the world, and provided scheduling flexibility to speakers and attendees participating in the Shutdown STEM walkout on June 10.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference is sponsored by the Association for the Advancement of Artificial Intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe value of online mental health communities, how crisis events are described differently over time on social media, and refining how cyberbullying is detected and classified are major topics of research by Georgia Institute of Technology researchers at this week\u0026rsquo;s International Conference on Web and Social Media (ICWSM 2020).\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"The value of online mental health communities, how crisis events are described differently over time on social media, and refining how cyberbullying is detected and classified are major topics of research from Georgia Tech at ICWSM 2020."}],"uid":"27592","created_gmt":"2020-06-11 15:20:55","changed_gmt":"2020-06-11 15:25:41","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-10T00:00:00-04:00","iso_date":"2020-06-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636174":{"id":"636174","type":"image","title":"International Conference on Web and Social Media (ICWSM 2020)","body":null,"created":"1591888971","gmt_created":"2020-06-11 15:22:51","changed":"1591888971","gmt_changed":"2020-06-11 15:22:51","alt":"","file":{"fid":"242065","name":"ICWSM 
2020.png","image_path":"\/sites\/default\/files\/images\/ICWSM%202020.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ICWSM%202020.png","mime":"image\/png","size":4680771,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ICWSM%202020.png?itok=_IyYw1Qt"}}},"media_ids":["636174"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu?subject=ICWSM%202020\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\nGVU Center and College of Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"636110":{"#nid":"636110","#data":{"type":"news","title":"Robotics Research Includes Advances in Systems Design, Applications, and other Key Areas","body":[{"value":"\u003Cp\u003ERoboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA), a two-week live online virtual affair\u0026nbsp;the first part of June, with research activities continuing through August.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/icra.cc.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech is a leading contributor\u003C\/a\u003E with 42 papers that include research covering more than 60 subfields within robotics. 
Deep learning in robotics and automation is one of the top areas of research among authors from the College of Computing and College of Engineering, the two largest contributors to Georgia Tech\u0026rsquo;s research at ICRA.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBased on the number of authors in each area, deep learning is computing\u0026rsquo;s strongest subfield, while mechanism design is where engineering excels.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETop 3 subfields for computing authors:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EDeep Learning\u003C\/li\u003E\r\n\t\u003Cli\u003ESemantic Scene Understanding\u003C\/li\u003E\r\n\t\u003Cli\u003EPhysically Assistive Devices and Network Devices (tie)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETop 3 subfields for engineering authors:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EMechanism Design\u003C\/li\u003E\r\n\t\u003Cli\u003ELearning and Adaptive Systems\u003C\/li\u003E\r\n\t\u003Cli\u003EMulti-Robot Systems\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EThe diversity of Georgia Tech robotics research at ICRA is also represented in the number of academic units and disciplines involved in advancing the field. Multidisciplinary teams often come together to tackle challenges that require a unique combination of skill sets.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are nearly 100 Georgia Tech authors with work at ICRA. 
Explore the people, research, and trends from our community at \u003Ca href=\u0022https:\/\/icra.cc.gatech.edu\/\u0022\u003Ehttps:\/\/icra.cc.gatech.edu\/\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ERoboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA), a two-week live online virtual affair\u0026nbsp;the first part of June, with research activities continuing through August.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Roboticists from around the world, including researchers from the Georgia Institute of Technology, are publishing their latest work at the 2020 IEEE International Conference on Robotics and Automation (ICRA)."}],"uid":"27592","created_gmt":"2020-06-09 20:02:45","changed_gmt":"2020-06-09 20:06:23","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-09T00:00:00-04:00","iso_date":"2020-06-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636111":{"id":"636111","type":"image","title":"ICRA 2020","body":null,"created":"1591733133","gmt_created":"2020-06-09 20:05:33","changed":"1591733133","gmt_changed":"2020-06-09 20:05:33","alt":"","file":{"fid":"242036","name":"viz-icra_authors-by-kw.png","image_path":"\/sites\/default\/files\/images\/viz-icra_authors-by-kw.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/viz-icra_authors-by-kw.png","mime":"image\/png","size":148295,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/viz-icra_authors-by-kw.png?itok=GDoMSXSY"}}},"media_ids":["636111"],"groups":[{"id":"1299","name":"GVU 
Center"},{"id":"576481","name":"ML@GT"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu?subject=ICRA\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\nCollege of Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"636082":{"#nid":"636082","#data":{"type":"news","title":"Dellaert Awarded IEEE ICRA Milestone Award","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EFrank Dellaert\u003C\/strong\u003E, a professor in the\u0026nbsp;\u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, and affiliated with the\u0026nbsp;\u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E\u0026nbsp;and\u0026nbsp;\u003Ca href=\u0022https:\/\/gvu.gatech.edu\/\u0022\u003EGVU Center\u003C\/a\u003E, has been honored with the IEEE ICRA Milestone Award at the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.icra2020.org\/\u0022\u003E2020 IEEE International Conference on Robotics and Automation (ICRA.)\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award recognizes the most influential ICRA paper published between 1998-2002 and selected\u0026nbsp;\u003Ca href=\u0022https:\/\/www.ri.cmu.edu\/pub_files\/pub1\/dellaert_frank_1999_2\/dellaert_frank_1999_2.pdf\u0022\u003E\u003Cem\u003EMonte Carlo Localization for Mobile Robots\u003C\/em\u003E\u003C\/a\u003E\u0026nbsp;as this year\u0026rsquo;s recipient. 
Dellaert conducted this work during his Ph.D. studies at Carnegie Mellon University with\u0026nbsp;\u003Cstrong\u003EDieter Fox, Wolfram Burgard\u003C\/strong\u003E, and\u0026nbsp;\u003Cstrong\u003ESebastian Thrun\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It is a great honor to be recognized, but receiving a \u0026lsquo;20 years on\u0026rsquo; milestone award also makes you feel old!\u0026rdquo; said Dellaert.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper was accepted to ICRA in 1999 and introduced the Monte Carlo Localization (MCL) method, also known as particle filter localization, which represents the probability density of the robot\u0026rsquo;s location by maintaining a set of samples randomly drawn from it. This method is faster, more accurate, and less memory-intensive than earlier grid-based methods and allows a robot to be localized without knowledge of its starting location.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMCL is simple to apply to the robotics domain, leading to its popularity. It is now taught in robotics 101 classes around the world. Many mobile robots, including commercial efforts, rely on MCL for localization.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Simplicity is key for acceptance and you cannot predict which of your research will have the most impact. This paper was a result of me procrastinating on my Ph.D. thesis which is a paper almost nobody read. It is an enormous honor that MCL has made a lasting impact on our field,\u0026rdquo; said Dellaert.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The award recognizes the most influential ICRA paper published between 1998-2002 and selected\u00a0Monte Carlo Localization for Mobile Robots\u00a0as this year\u2019s recipient. 
"}],"uid":"34773","created_gmt":"2020-06-09 15:09:11","changed_gmt":"2020-06-09 15:09:11","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-09T00:00:00-04:00","iso_date":"2020-06-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636081":{"id":"636081","type":"image","title":"Frank Dellaert, a professor in the School of Interactive Computing, and affiliated with the Machine Learning Center at Georgia Tech (ML@GT) and GVU Center, has been honored with the IEEE ICRA Milestone Award at the 2020 IEEE International Conference on Ro","body":null,"created":"1591715211","gmt_created":"2020-06-09 15:06:51","changed":"1591715211","gmt_changed":"2020-06-09 15:06:51","alt":"","file":{"fid":"242027","name":"frank-dellaert2.jpeg","image_path":"\/sites\/default\/files\/images\/frank-dellaert2.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/frank-dellaert2.jpeg","mime":"image\/jpeg","size":126074,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/frank-dellaert2.jpeg?itok=Ks7F6Fyh"}}},"media_ids":["636081"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"135","name":"Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"635397":{"#nid":"635397","#data":{"type":"news","title":"NSF Grant to Fund Georgia Tech Research into Psychological Impact of COVID-19","body":[{"value":"\u003Cp\u003EArguably the most visible of all prescriptions to the COVID-19 pandemic this year have been guidelines or imposed restrictions commonly referred to as \u0026ldquo;social distancing.\u0026rdquo; Less physical contact, the thinking goes, means a lowered risk of viral transmission.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELike the virus itself, however, stress and anxiety stemming from overconsumption of news or other media can spread through social networks. As the mental health fallout becomes clearer, are some similar social media distancing recommendations needed to stem the flow through the online world?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA multidisciplinary team of researchers at Georgia Tech, Washington University-St. Louis, and the University of Wisconsin-Madison argue that these mental health implications of the pandemic are equally important, and \u003Ca href=\u0022https:\/\/www.nsf.gov\/awardsearch\/showAward?AWD_ID=2027689\u0022\u003Ea new grant from the National Science Foundation (NSF) has recently funded new research to that effect\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s not just the fear and anxiety that I might get infected or I might infect or know someone who is infected,\u0026rdquo; said \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E, an associate professor in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and the co-principal investigator on the project. \u0026ldquo;It\u0026rsquo;s all of these things around it that are furthering the psychological impact. 
It\u0026rsquo;s very different from other kinds of illnesses or pandemics because of the uncertainty of the crisis. We simply don\u0026rsquo;t know how long we are into it.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe grant is funded by the NSF\u0026rsquo;s Rapid Response Project program, which is intended for research that addresses an immediate need within society. It has provided $200,000 toward the yearlong project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research will combine investigations in two separate environments: the online world, where news, personal posts, videos, and other media are shared rampantly across social networks, and the offline real world, where the epidemiological data about the spread of the virus or economic data about the financial fallout can be measured.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor the former, they will use social media data from various popular social platforms \u0026ndash; Twitter, Reddit, and YouTube \u0026ndash; to measure the spread of information and how consumers of it express themselves in terms of anxiety or fear, or what they are saying about their own psychological wellbeing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;How often are people expressing anger or fear or blaming someone through their posts?\u0026rdquo; said \u003Cstrong\u003ESrijan Kumar\u003C\/strong\u003E, an assistant professor in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/cse.gatech.edu\/\u0022\u003ESchool of Computational Science and Engineering\u003C\/a\u003E and the other co-principal investigator. \u0026ldquo;We\u0026rsquo;ll develop new classifiers using natural language processing that will help us classify social posts into two categories: either anxiety-inducing or anxiety itself.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis is new territory, according to De Choudhury. 
Although there have been other pandemics such as the 1918 influenza epidemic, none of this magnitude have taken place during the digital\/social age. And while social media provides an important mechanism for staying informed and remaining in contact with friends and loved ones during the difficult social distancing measures, overexposure could result in negative mental health consequences.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There is probably a sweet spot,\u0026rdquo; De Choudhury said. \u0026ldquo;Just like we need physical distancing in the real world, we probably need to practice distancing from social media or online information to an extent to avoid consuming too much anxiety-inducing media, while also staying informed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If I say something, it doesn\u0026rsquo;t just affect me. It affects all the people who read my posts. If they share it or if they post something, then it affects all of their social neighbors. It can be an outward ripple that affects people. We want to measure that, how they spread through social networks.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey\u0026rsquo;ll compare that data with the other element: the offline world. Currently, people in New York City are likely more stressed and anxious in a different way than people in Georgia. New York has been the epicenter of the viral outbreak in the United States, meaning that much of the anxiety locally stems from the virus itself.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EWill I contract the virus? Will someone I know contract the virus? Can I go to the store for groceries? How much disinfecting is required when I return home?\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd then, you can tease out that geographical data. How are higher-income individuals stressed in comparison to lower-income? What about differences along racial lines? 
Data has shown higher mortality rates in African-Americans, for example, which leads to different fears than those in other communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn U.S. cities where there is also sufficient social media data, they will examine this offline data to see rates of infection, fatalities, when shelter-in-place was imposed, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe final piece will be what they will do with this information. The goal is to create tools for social platforms to provide coping techniques or guidelines for use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Maybe that might include encouraging you to limit the amount of time you spend on social media,\u0026rdquo; Kumar said. \u0026ldquo;Or, maybe you step out and do something with family members. Some kind of physical activity. Then we can begin to examine how people react to these messages. Do we see that their anxiety levels are coming down, or not?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In this time, we have a very unique lens to study this pandemic in a whole new light as opposed to other events of a global scale,\u0026rdquo; De Choudhury said. \u0026ldquo;There is no guarantee this won\u0026rsquo;t come back. And even if it doesn\u0026rsquo;t, something else will. 
Being able to have these tools built and available will better prepare us for the future.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more coverage of Georgia Tech\u0026rsquo;s response to the coronavirus pandemic, please visit our \u003Ca href=\u0022https:\/\/helpingstories.gatech.edu\/\u0022\u003EResponding to COVID-19 page\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A multidisciplinary team of researchers has received a grant from the NSF to study the mental health outcomes of COVID-19 through examination of social media activity and geographic epidemiological data."}],"uid":"33939","created_gmt":"2020-05-15 16:40:10","changed_gmt":"2020-06-04 13:09:47","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-05-15T00:00:00-04:00","iso_date":"2020-05-15T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"635396":{"id":"635396","type":"image","title":"Munmun De Choudhury and Srijan Kumar","body":null,"created":"1589560736","gmt_created":"2020-05-15 16:38:56","changed":"1589560736","gmt_changed":"2020-05-15 16:38:56","alt":"Munmun De Choudhury and Srijan Kumar","file":{"fid":"241787","name":"NSF RAPID GRANT - Munmun and Srijan.png","image_path":"\/sites\/default\/files\/images\/NSF%20RAPID%20GRANT%20-%20Munmun%20and%20Srijan.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/NSF%20RAPID%20GRANT%20-%20Munmun%20and%20Srijan.png","mime":"image\/png","size":1487509,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/NSF%20RAPID%20GRANT%20-%20Munmun%20and%20Srijan.png?itok=8E8NSuCB"}}},"media_ids":["635396"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science 
and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"184821","name":"cc-research; ic-hcc; ic-ai-ml; COVID-19"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"635593":{"#nid":"635593","#data":{"type":"news","title":"IC Students Support Innovation in India through \u0027MakerGhat\u0027","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EAzra Ismail\u003C\/strong\u003E was working with health workers in Delhi, India, when she had a realization. What she saw from locals in the community was that there was an intense desire for societal impact from many workers \u0026ndash; and the ideas to go with it \u0026ndash; but an absence of resources necessary to fully realize the innovation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The experience that these health workers had in these communities provided unique perspectives and ideas that produced the kinds of ideas that could be relevant,\u0026rdquo; said Ismail, now a Ph.D. 
student in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;But because they were the lowest rung on the health infrastructure and were low income or low social class, those ideas weren\u0026rsquo;t recognized and represented.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAround the same time, \u003Ca href=\u0022http:\/\/cc.gatech.edu\u0022\u003ECollege of Computing\u003C\/a\u003E alumnus \u003Cstrong\u003EAditya Vishwanath\u003C\/strong\u003E, now a doctoral student at Stanford University, had a similar realization. He was working with Asha Mumbai, a non-profit in a low-resourced slum in India\u0026rsquo;s biggest city, using virtual reality to see how students appropriated and made sense of it. Like Ismail, he recognized a group of students who had unique viewpoints and drive, but too few resources to realize them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKnowing how important it is to support innovation from those who understand the specific needs of a community, the two of them founded \u003Ca href=\u0022https:\/\/makerghat.org\/space\u0022\u003EMakerGhat,\u003C\/a\u003E a non-profit with the mission to take ideas from concept to creation and application where they are needed most: the communities they serve.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESituated in an impoverished neighborhood in Mumbai, MakerGhat is a community lab in which local students, young and old, can join to receive education and resources to put their ideas into practice. Makers join through subscription or scholarships if they are unable to afford membership. In exchange, they receive access to support ranging from an electronics room, a 3D printing and PC workstation, a science lab, a woodworking shop, and a design and workshop studio.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe space is intentionally unsophisticated. 
Enter the space, and you may find a mish-mash of supplies and painting on the walls, a far cry from the labs of the nearby Indian Institute of Technology-Bombay, one of the top technological universities in India.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We want people to be encouraged to try things and not afraid to break it,\u0026rdquo; Ismail said. \u0026ldquo;We don\u0026rsquo;t want something that people are afraid to use.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf a maker can\u0026rsquo;t find what they are looking for, they can turn to connections within the community to meet the need. Heavier equipment, for example, might require a trip to the local smith for welding.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Students coming in have family members in these other industries, so it sets up an informal infrastructure where the students know where to go for a specific need,\u0026rdquo; Ismail said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe model has resulted in a number of tangible outputs. In Summer 2019, a handful of interns from Georgia Tech, Stanford, and Smith College were able to take advantage of the Denning Global Engagement Seed Fund to fund their travel to India. Interns were there not just to teach or run the lab, but to co-learn with locals. Collaborations between the technical expertise of the interns and the locally-significant knowledge of the makers resulted in a handful of innovations that addressed local needs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne collaboration resulted in a system that could compact plastic bottles to assist in a waste management challenge in Mumbai. Workers who collect waste locally and transport to recycling plants to sell to companies or government institutions face challenges transporting plastic bottles, the most common waste item, which take up a lot of space.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother created a community mapping platform to help identify local resources. 
Makers and interns went into the community and conducted surveys to find needs specific to different geographies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A big part of this is engaging with the community to identify needs, current status quos, and how to approach the challenge,\u0026rdquo; Ismail said. \u0026ldquo;This happens in the schools too. What are the gaps that need to be addressed, and how can we help address them?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMakerGhat serves about 300 students weekly, ranging from young to old \u0026ndash; it is open to any age or background. Many come from STEM fields, but others may be interested in math or art or fashion design.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s a melting pot,\u0026rdquo; Ismail said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe goal is to turn MakerGhat into an incubator. As the first class of students graduates from the program, they will move on to other sources of education or work. Ismail said that she and her collaborators \u0026ndash; which includes Vishwanath, a team programmer, local leaders in finances and project resources, and a group of 10 or so volunteers \u0026ndash; want to help build companies from the ideas and innovations that formed at MakerGhat.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The mission is to actually transform these students and community members into entrepreneurs,\u0026rdquo; Ismail said. \u0026ldquo;We want to take these creations to the next level and help them scale beyond their own community.\u0026quot;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat might mean launching new MakerGhat centers elsewhere. The goal is to make the model of the original open-source so that other communities can replicate \u0026ndash; in India and beyond. 
While it may play out differently in each location depending on the community\u0026rsquo;s needs, the organizational structure would be the same.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There\u0026rsquo;s a misconception that great innovation only comes from these big tech companies or big universities,\u0026rdquo; Ismail said. \u0026ldquo;But we want to challenge that narrative. Many of the great ideas that can make significant impacts on society come from the people in these communities of need themselves.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther members of the Georgia Tech community have contributed to the project. \u003Cstrong\u003ENeha Kumar\u003C\/strong\u003E, a joint assistant professor in the School of Interactive Computing and the Sam Nunn School of International Affairs, is an advisor. Students involved in a Makers-in-Residence program last summer were \u003Cstrong\u003ERitesh Bhatt\u003C\/strong\u003E, \u003Cstrong\u003ESolum Onwuchekwa\u003C\/strong\u003E, and \u003Cstrong\u003EJosiah Mangiameli\u003C\/strong\u003E. \u003Cstrong\u003EVishal Sharma\u003C\/strong\u003E, an incoming IC Ph.D. 
student, was also involved.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"MakerGhat is a local makerspace in India designed to cater specifically to low-resourced innovators."}],"uid":"33939","created_gmt":"2020-05-22 19:17:15","changed_gmt":"2020-05-22 19:17:15","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-05-22T00:00:00-04:00","iso_date":"2020-05-22T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"635592":{"id":"635592","type":"image","title":"MakerGhat","body":null,"created":"1590174252","gmt_created":"2020-05-22 19:04:12","changed":"1590174252","gmt_changed":"2020-05-22 19:04:12","alt":"Makers paint walls at MakerGhat in India.","file":{"fid":"241868","name":"MakerGhat.jpeg","image_path":"\/sites\/default\/files\/images\/MakerGhat.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/MakerGhat.jpeg","mime":"image\/jpeg","size":188915,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/MakerGhat.jpeg?itok=V5YPfbtN"}}},"media_ids":["635592"],"related_links":[{"url":"https:\/\/www.cc.gatech.edu\/content\/researchers-work-kids-mumbai-examine-classroom-potential-virtual-reality","title":"Researchers Work with Kids in Mumbai to Examine Classroom Potential of Virtual Reality"},{"url":"https:\/\/www.cc.gatech.edu\/news\/605000\/vr-taking-students-where-once-only-ms-frizzle-and-magic-school-bus-could","title":"VR Taking Students Where Once Only Ms. 
Frizzle and the Magic School Bus Could"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"184890","name":"cc-research; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"634312":{"#nid":"634312","#data":{"type":"news","title":"Machine Learning Technique Helps Wearable Devices Get Better at Diagnosing Sleep Disorders and Quality","body":[{"value":"\u003Cp\u003EGetting diagnosed with a sleep disorder or assessing quality of sleep is an often expensive and tricky proposition, involving sleep clinics where patients are hooked up to sensors and wires for monitoring.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWearable devices, such as the Fitbit and Apple Watch, offer less intrusive and more cost-effective sleeping monitoring, but the tradeoff can be inaccurate or imprecise sleep data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers at the Georgia Institute of Technology are working to combine the accuracy of sleep clinics with the convenience of wearable computing by developing machine learning models, or smart algorithms, that provide better sleep measurement data as well as considerably faster, more energy-efficient software.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team is focusing on electrical ambient noise\u0026nbsp;that is 
emitted by devices but that is often not audible and can interfere with sleep sensors on a wearable gadget. Leave the TV on at night, and the electrical signal - not the infomercial in the background - might mess with your sleep tracker.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/cse.gatech.edu\/news\/616715\/new-deep-learning-approach-improves-access-sleep-diagnostic-testing\u0022\u003E[Related News:\u0026nbsp;New Deep Learning Approach Improves Access to Sleep Diagnostic Testing]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese additional electrical signals are problematic for wearable devices that typically have only one sensor to measure a single biometric data point, normally heart rate. A device picking up signals from ambient electrical noise skews the data and leads to potentially misleading results.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We are building a new process to help train [machine learning] models to be used for the home environment and help address this and other issues around sleep,\u0026rdquo; said\u0026nbsp;\u003Cstrong\u003EScott Freitas\u003C\/strong\u003E, a second-year machine learning Ph.D. student and co-lead author of a newly published\u0026nbsp;\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2001.11363.pdf\u0022 target=\u0022_blank\u0022\u003Epaper\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team employed adversarial training in tandem with spectral regularization, a technique that makes neural networks more robust to electrical signals in the input data. 
This means that the system can accurately assess sleep stages even when an EEG signal is corrupted by additional signals from appliances like a TV or washing machine.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing machine-learning methods such as sparsity regularization, the new model can also compress the amount of time it takes to gather and analyze data, as well as increase the energy efficiency of the wearable device.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers are testing with a product worn on the head but hope to also integrate it into smartwatches and bracelets. Results would then be transmitted to a person\u0026rsquo;s doctor to analyze and provide a diagnosis. This could result in fewer visits to the doctor, reducing the cost, time, and stress involved with receiving a sleep disorder diagnosis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother issue that the researchers are looking at is reducing the number of sensors needed to accurately track sleep.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When someone visits a sleep clinic, they are hooked up to all kinds of monitors and wires to gather data ranging from brain activity on EEG\u0026rsquo;s, heart rate, and more. Wearable tech only monitors heart rate with one sensor. The one sensor is more ideal and comfortable, so we are looking for a way to get more data without adding more wires or sensors,\u0026rdquo; said\u0026nbsp;\u003Cstrong\u003ERahul Duggal\u003C\/strong\u003E, a second-year computer science Ph.D. 
student and co-lead author.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team\u0026rsquo;s work is published in the paper\u0026nbsp;\u003Cem\u003EREST: Robust and Efficient Neural Networks for Sleep Monitoring in the Wild\u003C\/em\u003E,\u0026nbsp;accepted to the\u0026nbsp;\u003Ca href=\u0022https:\/\/www2020.thewebconf.org\/\u0022 target=\u0022_blank\u0022\u003EInternational World Wide Web Conference\u0026nbsp;(WWW)\u003C\/a\u003E, scheduled to take place April 20 through 24 in Taipei, Taiwan.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"ML@GT researchers are improving the accuracy and efficiency of devices used to track sleeping using machine learning techniques."}],"uid":"34773","created_gmt":"2020-04-13 17:47:08","changed_gmt":"2020-05-21 13:30:25","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-04-15T00:00:00-04:00","iso_date":"2020-04-15T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"634311":{"id":"634311","type":"image","title":"ML@GT researchers are improving the accuracy and efficiency of devices used to track sleeping using machine learning techniques.","body":null,"created":"1586799743","gmt_created":"2020-04-13 17:42:23","changed":"1586799743","gmt_changed":"2020-04-13 17:42:23","alt":"woman sleeping","file":{"fid":"241370","name":"kinga-cichewicz-5NzOfwXoH88-unsplash.jpg","image_path":"\/sites\/default\/files\/images\/kinga-cichewicz-5NzOfwXoH88-unsplash.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/kinga-cichewicz-5NzOfwXoH88-unsplash.jpg","mime":"image\/jpeg","size":770009,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/kinga-cichewicz-5NzOfwXoH88-unsplash.jpg?itok=hU7kSCg6"}}},"media_ids":["634311"],"groups":[{"id":"47223","name":"College of 
Computing"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[{"id":"184463","name":"sleep tracking"},{"id":"9167","name":"machine learning"},{"id":"2556","name":"artificial intelligence"},{"id":"365","name":"Research"}],"core_research_areas":[{"id":"39451","name":"Electronics and Nanotechnology"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"634175":{"#nid":"634175","#data":{"type":"news","title":"Four Machine Learning Faculty Members Earn Promotions and Tenure","body":[{"value":"\u003Cp\u003EFour faculty members at the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech\u003C\/a\u003E have received promotions or been granted tenure.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJake Abernethy\u003C\/strong\u003E has been promoted to associate professor in the \u003Ca href=\u0022https:\/\/scs.gatech.edu\/\u0022\u003ESchool of Computer Science\u003C\/a\u003E and granted tenure. Abernethy\u0026rsquo;s research focus is machine learning, where he enjoys discovering connections between optimization, statistics, and economics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2011, he completed his Ph.D. at the University of California, Berkeley before becoming a Simons postdoctoral fellow for the following two years. 
After the water crisis in Flint, Mich., Abernethy worked on \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~jabernethy9\/flint\/\u0022\u003Edetecting lead contamination and infrastructure remediation\u003C\/a\u003E. Prior to studying and teaching machine learning, Abernethy performed comedy and juggling shows, opening for Sinbad and Dave Chappelle.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E has been promoted to associate professor in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and granted tenure. De Choudhury is also affiliated with the \u003Ca href=\u0022http:\/\/gvu.gatech.edu\/\u0022\u003EGVU\u003C\/a\u003E Center and \u003Ca href=\u0022http:\/\/ipat.gatech.edu\/\u0022\u003EInstitute for People and Technology (IPaT)\u003C\/a\u003E and leads the \u003Ca href=\u0022http:\/\/socweb.cc.gatech.edu\/\u0022\u003ESocial Dynamics and Wellbeing Lab (SocWeb Lab)\u003C\/a\u003E. De Choudhury studies problems at the intersection of computer science and social media, building computational methods and artefacts to help understand human behaviors and psychological states and how they manifest online.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrior to joining Georgia Tech in 2014, De Choudhury was a postdoctoral researcher in the nexus group at Microsoft Research, Redmond. In 2011, she received her Ph.D. from Arizona State University, Tempe. After graduate school, De Choudhury spent time at Rutgers University and was a faculty associate with the Berkman Center for Internet and Society at Harvard University.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EYajun Mei\u003C\/strong\u003E has been promoted to professor in the \u003Ca href=\u0022https:\/\/www.isye.gatech.edu\/\u0022\u003EH. Milton Stewart School of Industrial and Systems Engineering\u003C\/a\u003E. 
Mei\u0026#39;s research interests include change-point problems and sequential analysis in mathematical statistics and sensor networks and information theory in engineering. Mei also examines longitudinal data analysis, random effects models, and clinical trials in biostatistics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMei received his Ph.D. in mathematics from the California Institute of Technology in 2003. He has also worked as a postdoc in biostatistics at the Fred Hutchinson Cancer Research Center. In 2010, Mei was awarded the National Science Foundation (NSF) CAREER Award and in 2008 was awarded Best Paper at FUSION. Mei was awarded the prestigious Abraham Wald Prize in Sequential Analysis in 2009.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAlex Endert \u003C\/strong\u003Ehas been promoted to associate professor and granted tenure in the School of Interactive Computing. Endert directs the \u003Ca href=\u0022https:\/\/gtvalab.github.io\/\u0022\u003EVisual Analytics Lab\u003C\/a\u003E where he and his students apply fundamental research to\u0026nbsp;domains including text analysis, intelligence analysis, cybersecurity, and decision-making, and explore novel user interaction techniques for visual analytics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEndert earned his Ph.D. from Virginia Tech in 2012, and in 2013 his work on Semantic Interaction was awarded the IEEE VGTC VPG Pioneers Group Doctoral Dissertation Award, and the Virginia Tech Computer Science Best Dissertation Award. In 2018, Endert received the NSF CAREER Award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEditors Note:\u0026nbsp;\u003Cstrong\u003EMolei Tao\u003C\/strong\u003E\u0026nbsp;has been promoted to associate professor with tenure in the School of Math. 
Tao is an applied and computational mathematician, designing algorithms for faster and more accurate computations and developing mathematical tools to analyze and design engineering systems or answer scientific questions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe earned his Ph.D. in control and dynamical systems with a minor in physics from the California Institute of Technology, where he also worked as a postdoctoral researcher. He is the recipient of the 2011 W.P. Carey Ph.D. Prize in Applied Mathematics and a 2019 NSF CAREER Award.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Four faculty members at the Machine Learning Center at Georgia Tech have received promotions or been granted tenure."}],"uid":"34773","created_gmt":"2020-04-08 17:37:17","changed_gmt":"2020-05-11 18:57:34","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-04-08T00:00:00-04:00","iso_date":"2020-04-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"634173":{"id":"634173","type":"image","title":"Four ML@GT faculty members earn promotions and tenure","body":null,"created":"1586367276","gmt_created":"2020-04-08 17:34:36","changed":"1586367276","gmt_changed":"2020-04-08 17:34:36","alt":"Congratulations Alex, Jake, Munmun, and Yajun","file":{"fid":"241321","name":"Spring 2020 ML Promotions.png","image_path":"\/sites\/default\/files\/images\/Spring%202020%20ML%20Promotions.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Spring%202020%20ML%20Promotions.png","mime":"image\/png","size":440694,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Spring%202020%20ML%20Promotions.png?itok=frxxuWzs"}}},"media_ids":["634173"],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50875","name":"School of Computer 
Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"635208":{"#nid":"635208","#data":{"type":"news","title":"Social Media and Wellbeing: Does Bias in Self-Reported Data Impact Research?","body":[{"value":"\u003Cp\u003EAlong with the development of each new technological platform comes a series of questions designed to understand its ultimate impact on users\u0026rsquo; wellbeing or performance. It\u0026rsquo;s like clockwork.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EDoes watching too much television rot your child\u0026rsquo;s brain? How much is too much when it comes to video games? Is our time spent on social media impacting our mental health?\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese are all important questions, but how they are asked matters to the ultimate conclusions we can draw. It is well-established that the most commonly used methods in this area of research \u0026ndash; user self-reports and survey questions \u0026ndash; are prone to error. Now, new research from collaborators at Georgia Tech, Facebook, and the University of Michigan has shed light on the nature of that error \u0026ndash; that is, whether users over- or underestimate their data, who and which questions are more prone to error, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EError in the data, said \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Ph.D. 
student\u0026nbsp;\u003Cstrong\u003ESindhu Ernala\u003C\/strong\u003E, can impact the inferences drawn from the data itself.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We know survey questions have several well-documented biases,\u0026rdquo; Ernala said. \u0026ldquo;People may not remember correctly. They can\u0026rsquo;t keep up with their time. They remember recent things more accurately than those further in the past. All of this matters because error in measurement might impact the downstream inferences we make. Accurate assessments of social media use are critical because of the everyday impact it has on people\u0026rsquo;s lives.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIndeed, Ernala and her collaborators found that these biases held up in many surveys. In a paper accepted to the \u003Ca href=\u0022http:\/\/chi.gatech.edu\u0022\u003E2020 ACM Conference on Human Factors in Computing Systems\u003C\/a\u003E (CHI), they picked 10 of the most common survey questions in prior literature that investigate time spent on Facebook. The questions were asked in a variety of ways: open ended or multiple choice, the frequency of visits or the total time spent. They asked these 10 questions in a survey to 50,000 random users in 15 countries around the world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith self-reported data in hand, they compared it to the actual server logs at Facebook to see how it stacked up. Interestingly, people most often overestimated the time they spent on the platform and underestimated the number of times they visited. 
Specifically, in the 18-24 demographic, a common age range for research done at universities, there was even more error in self-reports.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is important, because a lot of our research is done with these age samples,\u0026rdquo; Ernala said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith this information in mind, the researchers made a handful of recommendations to improve the data and, thus, the research around the data itself:\u003C\/p\u003E\r\n\r\n\u003Col\u003E\r\n\t\u003Cli\u003EAs a researcher, if you are investigating time spent, consider using time tracking applications as an alternative to self-report time spent measures. These applications include things like Apple\u0026rsquo;s screen time feature or Facebook\u0026rsquo;s \u0026ldquo;Your Time on Facebook.\u0026rdquo;\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EIf researchers want to use surveys, which often makes sense, consider using the phrasing with the lowest error or multiple-choice questions.\u003C\/li\u003E\r\n\u003C\/ol\u003E\r\n\r\n\u003Cp\u003EThe researchers caution against using time-spent self-reports directly, recommending instead that they be interpreted as noisy estimates of where someone falls on a distribution. More important when determining wellbeing outcomes is\u0026nbsp;\u003Cem\u003Ehow\u003C\/em\u003E\u0026nbsp;users actually spend their time on the platform.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Social platforms change and user habits change over time,\u0026rdquo; Ernala said. \u0026ldquo;The questions now might not be the best questions five or 10 years from now. 
This is fluid, and we need to continue to look at this to make sure our past and future research is well-informed.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe and her collaborators hope to contribute positively to this ongoing process by providing some validated measures that can be used across studies, while understanding that these methods may change over time as user habits transform.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Error in the data, said School of Interactive Computing Ph.D. student\u00a0Sindhu Ernala, can impact the inferences drawn from the data itself."}],"uid":"33939","created_gmt":"2020-05-08 08:36:27","changed_gmt":"2020-05-08 08:36:27","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-05-08T00:00:00-04:00","iso_date":"2020-05-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624519":{"id":"624519","type":"image","title":"Social Media Logos","body":null,"created":"1565805908","gmt_created":"2019-08-14 18:05:08","changed":"1565805908","gmt_changed":"2019-08-14 18:05:08","alt":"A keyboard featuring different social media logos","file":{"fid":"237806","name":"Social Media logos.jpg","image_path":"\/sites\/default\/files\/images\/Social%20Media%20logos.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Social%20Media%20logos.jpg","mime":"image\/jpeg","size":215846,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Social%20Media%20logos.jpg?itok=G7qWkSGs"}}},"media_ids":["624519"],"related_links":[{"url":"http:\/\/chi.gatech.edu","title":"CHI 2020 at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[{"id":"182508","name":"cc-research; ic-hcc; ic-social-computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"634469":{"#nid":"634469","#data":{"type":"news","title":"IC Ph.D. Students Named 2020 Members of NSF Graduate Research Fellowship Program","body":[{"value":"\u003Cp\u003EA pair of \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E students was selected as 2020 members of the \u003Ca href=\u0022https:\/\/www.nsfgrfp.org\/\u0022\u003ENational Science Foundation Graduate Research Fellowship Program\u003C\/a\u003E (NSF GRFP).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFirst-year Ph.D. 
students \u003Cstrong\u003EDaniel Bolya\u003C\/strong\u003E (advised by \u003Cstrong\u003EJudy Hoffman\u003C\/strong\u003E) and \u003Cstrong\u003EJoanne Truong\u003C\/strong\u003E (advised by \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E and \u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E) were recognized by the program, which supports graduate students pursuing research-based Master\u0026rsquo;s and doctoral degrees at United States institutions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe NSF GRFP provides financial support for three years, consisting of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBolya\u0026rsquo;s work is in machine learning and computer vision. Recent work at Georgia Tech has focused on error profiling in instance segmentation and object detection models. His method, building upon previous work at MIT, is unique in that it captures all possible sources of error in a model, while properly weighing the importance of each. He plans to continue pursuing faster methods of instance segmentation that he can make accessible. Current methods are not practical for many applications due to limits in speed, accuracy, and data efficiency. His research addresses this challenge.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is not just about computer vision,\u0026rdquo; he said in his research statement. \u0026ldquo;Improving instance segmentation would impact the tech we use every day.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs with his work at MIT, called YOLACT, he plans to fully release the project as open source once it is ready.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETruong\u0026rsquo;s long-term research goal is to develop robots that can see, talk, reason, and act in complex human environments. 
Specifically, she will focus on a method called \u0026ldquo;sim2robot transfer,\u0026rdquo; which develops efficient domain adaptation techniques to enable pre-training of AI agents in simulators while ensuring that the learned skills generalize to a real robotic platform.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The overall goals of my research plan are to, one, break down the possible errors in simulation-to-reality transfer that result in a reality gap, and, two, close the loop between simulation and reality by using data collected on a real robot to finetune and optimize parameters in simulation,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe worked on the first goal last fall, optimizing simulator settings for sim2real predictivity. Currently, she is working on the second goal, developing domain adaptation techniques to enable low-shot adaptation between simulation and reality.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding."}],"uid":"33939","created_gmt":"2020-04-16 20:02:46","changed_gmt":"2020-04-16 20:02:46","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-04-15T00:00:00-04:00","iso_date":"2020-04-15T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"634467":{"id":"634467","type":"image","title":"Daniel Bolya and Joanne Truong","body":null,"created":"1587067147","gmt_created":"2020-04-16 19:59:07","changed":"1587067147","gmt_changed":"2020-04-16 19:59:07","alt":"Daniel Bolya and Joanne Truong","file":{"fid":"241446","name":"Joanne and 
Daniel.png","image_path":"\/sites\/default\/files\/images\/Joanne%20and%20Daniel.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Joanne%20and%20Daniel.png","mime":"image\/png","size":1176810,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Joanne%20and%20Daniel.png?itok=SNMuNKXd"}}},"media_ids":["634467"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"634055":{"#nid":"634055","#data":{"type":"news","title":"Looking for Activities at Home? Try These Interactive Tools from IC Researchers","body":[{"value":"\u003Cp\u003EThe world is on lockdown right now, and we\u0026rsquo;re all searching for new ways to occupy our time inside. With only so many times you can re-watch The Office (oh, who are we kidding \u0026ndash; maybe just one more time through\u0026hellip;), we thought it would be fun to share some of the interactive tools from our own researchers\u0026rsquo; workshops.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBelow, you\u0026rsquo;ll find just a couple of the tools you can interact with online, giving you opportunities from learning how to code to creating art. 
But this is only just a start \u0026ndash; we\u0026rsquo;d love to hear from you.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf you\u0026rsquo;re a Georgia Tech student or faculty member, submit your interactive tools to communications officer David Mitchell at \u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E. We\u0026rsquo;ll add to the list, share with our audience, and help everyone find some enjoyment during a difficult time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECreate Your Own Generative Art Pieces \u0026ndash; \u003C\/strong\u003Esubmitted by Devi Parikh\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELooking for a new piece of art for your wall? With this tool, you can flex your creative muscles. Choose a style, adjust the values, colors, and properties, and generate a piece that would fit in nicely in your home.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work demonstrates a broader area of research into machine learning and creativity. The first piece of AI-generated art to go to auction sold for $432,500 in 2018.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELINK: \u003C\/strong\u003E\u003Ca href=\u0022https:\/\/cc.gatech.edu\/~parikh\/art.html\u0022\u003Ehttps:\/\/cc.gatech.edu\/~parikh\/art.html\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EInteract with Visual Chatbot \u003C\/strong\u003E\u0026ndash; submitted by Devi Parikh\u003C\/p\u003E\r\n\r\n\u003Cp\u003EParikh\u0026rsquo;s lab is doing research in an area called visual question answering. Developed in 2017, this demo allows you to upload an image and have a conversation with a chatbot about it. Pick out an image you\u0026rsquo;ve taken or just grab one from the web and ask questions to see just how quickly and accurately this AI can perform the task. 
This research is key to developing agents that can reason about specific tasks in the real world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELINK: \u003C\/strong\u003E\u003Ca href=\u0022http:\/\/demo-visualdialog.cloudcv.org\/\u0022\u003Ehttp:\/\/demo-visualdialog.cloudcv.org\/\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELearn to Code Using EarSketch and TunePad\u003C\/strong\u003E \u0026ndash; submitted by Brian Magerko\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHave you been dying to learn how to code? There\u0026rsquo;s no time like the present. Without the benefit of a classroom setting to learn all the ins and outs, you might find a usable tool like EarSketch beneficial. EarSketch uses music to guide the learner. With sounds from the EarSketch library or your own uploads, along with Python or JavaScript to code, you can produce quality music online.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELike EarSketch, TunePad \u0026ndash; developed in collaboration with Northwestern University \u0026ndash; is a tool for creating music using the Python programming language. No knowledge in music or coding is required to get started. Get those musical juices flowing, and start creating.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELINK: \u003C\/strong\u003E\u003Ca href=\u0022http:\/\/earsketch.gatech.edu\/\u0022\u003Eearsketch.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELearn About Grasping Tasks Using this Online Tool \u003C\/strong\u003E\u0026ndash; submitted by Samarth Brahmbhatt\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis tool allows people to interactively explore how we grasp household objects. So, why is this important? Grasping is a key capability in the development of household robotics. In order to train robots how to grab and use items in the house, we need to identify the most efficient approach. 
Explore this tool, which includes items from an apple to a doorknob to a video game controller.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELINK: \u003C\/strong\u003E\u003Ca href=\u0022https:\/\/contactdb.cc.gatech.edu\/contactdb_explorer.html\u0022\u003Ehttps:\/\/contactdb.cc.gatech.edu\/contactdb_explorer.html\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"These are just a couple of the tools you can interact with online, giving you opportunities from learning how to code to creating art."}],"uid":"33939","created_gmt":"2020-04-04 00:00:47","changed_gmt":"2020-04-04 00:00:47","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-04-03T00:00:00-04:00","iso_date":"2020-04-03T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"444971":{"id":"444971","type":"image","title":"EarSketch","body":null,"created":"1449256205","gmt_created":"2015-12-04 19:10:05","changed":"1475895184","gmt_changed":"2016-10-08 02:53:04","alt":"EarSketch","file":{"fid":"203156","name":"static1.squarespace.png","image_path":"\/sites\/default\/files\/images\/static1.squarespace_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/static1.squarespace_0.png","mime":"image\/png","size":411122,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/static1.squarespace_0.png?itok=lWrzDShH"}}},"media_ids":["444971"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182940","name":"cc-research; ic-ai-ml; ic-robotics; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"633985":{"#nid":"633985","#data":{"type":"news","title":"Pitch Perfect: GT Computing Undergrads Provide Automated Training Upgrade for Softball Team","body":[{"value":"\u003Cp\u003EThere\u0026rsquo;s a classic story that former Atlanta Braves pitching coach Leo Mazzone used to share about Hall-of-Famer Greg Maddux, one of the smartest hurlers of all time. Although the exact details have changed in retelling over time, it goes something like this:\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMaddux, a meticulous documenter of pitch sequences and batter results throughout his career, once explained to Mazzone in between innings that the leadoff batter in the following frame would pop out to third base on the fourth pitch of the at-bat. He\u0026rsquo;d start him with a fastball, change speeds for strike two, waste a pitch outside, and then induce the popup on a one-ball, two-strike count. Sure enough, a few minutes later, Maddux did exactly as he\u0026rsquo;d said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are a couple of lessons here: One, Maddux was a wizard. 
Many pitchers over time have tried to replicate his impeccable approach to the game, but few have ever succeeded at that level; two, pitch sequence matters \u0026ndash; perhaps more than how overpowering your fastball is or how sharp the break is on your curve.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECapitalizing on this intuition, a group of undergraduate students at \u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E are working with the softball team to provide an automated upgrade to players\u0026rsquo; training. Using the wealth of statistics kept by the team \u0026ndash; pitch-by-pitch data for balls, strikes, types of pitches thrown, and results \u0026ndash; they have trained an algorithm that can select the best pitch to throw in any given situation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe tool is used by the coaches and pitchers for game planning purposes, generating daily reports after every game and practice to help inform coaches of trends in sequences and results.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In baseball and softball nowadays, data analytics has become such an incredibly important part of the game,\u0026rdquo; said \u003Cstrong\u003EJack Bennett\u003C\/strong\u003E, a third-year \u003Ca href=\u0022http:\/\/isye.gatech.edu\u0022\u003EIndustrial Engineering\u003C\/a\u003E student. \u0026ldquo;Anything that can get them data to go into games more prepared. Technology is at the forefront of this.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey began using the approach during the 2019 season. 
Bennett and partners \u003Cstrong\u003EZach Panzarino\u003C\/strong\u003E (third-year \u003Ca href=\u0022http:\/\/cc.gatech.edu\u0022\u003EComputer Science\u003C\/a\u003E) and Ron Kushkuley (third-year \u003Ca href=\u0022http:\/\/coe.gatech.edu\u0022\u003EComputer Engineering\u003C\/a\u003E) had demonstrated a similar capability at last year\u0026rsquo;s Sports Innovation Hackathon using data for Atlanta Braves pitcher Mike Foltynewicz, finishing in third place. \u003Cstrong\u003EDoug Allvine\u003C\/strong\u003E, assistant athletics director for innovation at Georgia Tech, put the team in touch with softball coach \u003Cstrong\u003EAileen Morales\u003C\/strong\u003E. Morales was interested, and the students were able to begin testing the approach.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt works like this: The softball team keeps track of its own data \u0026ndash; not just player statistics, but pitch selections and results for every pitcher in every game throughout the season. That\u0026rsquo;s a lot of data and can offer a lot of information. What happened when Pitcher X threw a 3-2 changeup to a lefthanded batter?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut it goes a little deeper than that. Panzarino, Bennett, and Kushkuley found that the pitch sequence is what matters most. That follows the standard strategic thinking \u0026ndash; a slider away can be more effective if set up by an inside fastball on the previous pitch, for example. What the algorithm does, however, is consider the order each pitch is thrown in the at-bat and provide a score for which pitch will be most effective based on past data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We leverage sequences, the count, outs, everything,\u0026rdquo; Panzarino said. \u0026ldquo;Looking at the current state and the previous pitches, it will score all the potential future routes a pitcher can choose. 
We give them reports before each game so that they can prepare, and then we look at success or failure after the game.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESince the test run a year ago, the students have honed the technology and are working with the team again this year. Qualitatively speaking, they said they noticed results throughout the year.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When we first gave them our analysis, it would recommend certain stuff in certain situations,\u0026rdquo; Bennett said. \u0026ldquo;Maybe it would say a changeup should be thrown more in this situation. Then, when we\u0026rsquo;d get postgame data later, we\u0026rsquo;d see that more changeups were being thrown and were continuing to be effective.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When I first saw what they were developing, I was beyond impressed,\u0026rdquo;\u0026nbsp;Morales said. \u0026ldquo;We are very meticulous with collecting data in our program and trying to find ways to learn more about what is and what is not working for our athletes. It\u0026rsquo;s remarkable to see how they can take the data we had and leverage it in a way that allowed us to fine-tune our training.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERecently, at the 2020 Sports Innovation Hackathon, the group developed a similar solution for baseball. 
They received runner-up in the competition, and hope to connect further with the Georgia Tech baseball team in the future.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Tons of theory has been written on how pitchers should approach sequencing in games, but this is a model that can show you the data about how well that works,\u0026rdquo; Panzarino said.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A group of undergraduate students at Georgia Tech are working with the softball team to provide an automated upgrade to players\u2019 training."}],"uid":"33939","created_gmt":"2020-04-01 17:57:21","changed_gmt":"2020-04-01 17:57:21","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-04-01T00:00:00-04:00","iso_date":"2020-04-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"520851":{"id":"520851","type":"image","title":"Softball","body":null,"created":"1459789200","gmt_created":"2016-04-04 17:00:00","changed":"1475895289","gmt_changed":"2016-10-08 02:54:49","alt":"Softball","file":{"fid":"206045","name":"softball.png","image_path":"\/sites\/default\/files\/images\/softball_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/softball_0.png","mime":"image\/png","size":301482,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/softball_0.png?itok=plm8gn5h"}}},"media_ids":["520851"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"633834":{"#nid":"633834","#data":{"type":"news","title":"Passing the Torch: Georgia Tech Roboticists Lead Future Generation of Women in the Field","body":[{"value":"\u003Cp\u003EThere\u0026rsquo;s a piece of advice \u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E Ph.D. student \u003Cstrong\u003EDe\u0026rsquo;Aira Bryant\u003C\/strong\u003E recalls most often when it comes to her adviser, \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Chair \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;You\u0026rsquo;ve got to start somewhere,\u0026rdquo; said Bryant, a robotics student in the school. \u0026ldquo;I feel like whenever I\u0026rsquo;m going through my research, the way I approach it is I have these grand ideas, and I have to break it down to this and this and that. I\u0026rsquo;m the type who\u0026rsquo;s normally working on four or five things at the same time. Dr. Howard always tells me: \u0026lsquo;Okay, slow down. We have to start somewhere. We have to start somewhere so we have something to move toward.\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s an appropriate metaphor for Bryant, who began her career in computer science with no previous experience as an undergraduate student at the University of South Carolina. 
It also applies to all the other women in the field who, like Bryant, rely heavily on those who come before them and pass the torch to those who come after.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUnlike many robotics students, who have stories about being introduced to the field through Lego Mindstorms kits that let them build and program their own robots, Bryant had never given computer science or robotics a second thought during middle school and high school. The concepts were completely foreign to her.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That wasn\u0026rsquo;t me,\u0026rdquo; Bryant said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInstead, she happened to take an Intro to Java class in her first year. There, she met her first collegiate mentor, \u003Cstrong\u003EKarina Liles\u003C\/strong\u003E. Liles was a graduate student who worked in a robotics lab and, after her first semester, invited Bryant to come work with her as an undergraduate assistant. Bryant saw it as a part-time job and a place where she could have her own desk. She wasn\u0026rsquo;t thinking about it as much more than that.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I had no idea what her research was,\u0026rdquo; Bryant said. \u0026ldquo;I knew it was a robotics lab, so that was cool. And she was in education for low-resource communities. I came from a school that didn\u0026rsquo;t offer computer science at all, so I found that appealing.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOnce Bryant was introduced to the research process \u0026ndash; asking and answering new questions, collecting data, programming and testing robots, then seeing children interact with them face-to-face \u0026ndash; she was hooked.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It made all the difference,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt was a start, but it was still a new world. 
Neither of her parents had earned four-year degrees, and her dad had passed away when she was in middle school. When she told her mom and grandmother that she was interested in computer science, she was met with some hesitancy. But, while neither had experience in technology, they had raised her to be inquisitive and to seek out mentorship. That\u0026rsquo;s exactly what Bryant did.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrown into the deep end, she relied on Liles and a handful of other women she came across at South Carolina or at conferences like \u003Ca href=\u0022https:\/\/humanrobotinteraction.org\/\u0022\u003EHuman Robot Interaction\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/ghc.anitab.org\/\u0022\u003EGrace Hopper\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was drawn to women in the field, because the nurturing and the support from people who are also in an underrepresented group \u0026ndash; whether it\u0026rsquo;s gender or race or whatever \u0026ndash; they can talk to you about those specific challenges that you might come across,\u0026rdquo; Bryant said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEventually, that led her to Howard.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter her junior year in Columbia, Bryant applied to a program called \u003Ca href=\u0022https:\/\/cra.org\/cra-wp\/dreu\/\u0022\u003EDistributed Research Experiences for Undergrads\u003C\/a\u003E (DREU). The program matches minority undergraduate students with mentors who have signed up to host undergrads in their labs over the summer. Although students can be matched with anyone in the United States, Bryant\u0026rsquo;s mentor happened to be Howard.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was so excited,\u0026rdquo; said Bryant, who knew of Howard\u0026rsquo;s research through her own work at South Carolina. 
The work she was doing with social robots for kids with autism aligned with Howard\u0026rsquo;s, and it wasn\u0026rsquo;t uncommon for the Georgia Tech professor\u0026rsquo;s name to be cited in one of their papers. \u0026ldquo;There was a student matched with a mentor in Hawaii, and everyone thought that was the luckiest one. I was like, \u0026lsquo;No, I\u0026rsquo;m pretty sure I got the best deal.\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBryant worked on a project in Howard\u0026rsquo;s lab that summer with three other undergrads. Howard was immediately impressed with Bryant because of her unique programming ability.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I remember needing someone to program the robot, and she was just like, \u0026lsquo;Oh, I can do it,\u0026rsquo;\u0026rdquo; Howard said. \u0026ldquo;She impressed me right away, and when it was time for her to choose a graduate program I knew she\u0026rsquo;d fit perfectly in our lab.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETheir work together now is impacting individuals with disabilities, making technology work for everybody, including those with motor, visual, or hearing impairments. They are investigating robot gendering and its impact on human trust, and working toward inclusivity with programs like \u003Ca href=\u0022http:\/\/ai-4-all.org\/\u0022\u003EAI4All\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBryant draws on the inspiration Howard has provided and feels a responsibility to continue that for the next generation of women roboticists. She is humbled by people who now look up to her the way she looked up to Howard \u0026ndash; she was left speechless by a young student who featured her in a Black History Month school project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen she goes to Grace Hopper, Bryant loves meeting the undergrads and passing on her advice about academics and the challenges women face in the field. 
She also watches to make sure they are asking questions or calls on those who look like they have a question, but are afraid to ask.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I remember being that person in the room,\u0026rdquo; she said. \u0026ldquo;Women don\u0026rsquo;t just need representation, they need a voice. I want to be their champion, connect them to the right people.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd her biggest advice to them might sound familiar.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Just start,\u0026rdquo; she said. \u0026ldquo;You\u0026rsquo;ve got to start somewhere.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Ph.D. Student De\u0027Aira Bryant uses the leadership of adviser Ayanna Howard to help guide her and future generations of women in robotics."}],"uid":"33939","created_gmt":"2020-03-25 19:19:03","changed_gmt":"2020-03-25 19:19:03","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-03-25T00:00:00-04:00","iso_date":"2020-03-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622962":{"id":"622962","type":"image","title":"De\u0027Aira Bryant","body":null,"created":"1562099179","gmt_created":"2019-07-02 20:26:19","changed":"1562099179","gmt_changed":"2019-07-02 
20:26:19","alt":"","file":{"fid":"237242","name":"unadjustednonraw_thumb_29ba.jpg","image_path":"\/sites\/default\/files\/images\/unadjustednonraw_thumb_29ba.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/unadjustednonraw_thumb_29ba.jpg","mime":"image\/jpeg","size":250014,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/unadjustednonraw_thumb_29ba.jpg?itok=umnmWiYV"}}},"media_ids":["622962"],"related_links":[{"url":"https:\/\/www.cc.gatech.edu\/news\/628437\/startup-zyrobotics-creates-more-opportunities-impact","title":"Startup Zyrobotics Creates More Opportunities for Impact"},{"url":"https:\/\/www.youtube.com\/watch?v=JBg7nZXb1Vo","title":"Ph.D. Student Seeks to Help Children Through Robotics"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182940","name":"cc-research; ic-ai-ml; ic-robotics; ic-hcc"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"633041":{"#nid":"633041","#data":{"type":"news","title":"Georgia Tech Kicks Off Atlanta\u0027s Biggest STEM Party March 6-7","body":[{"value":"\u003Cp\u003EThe 2020 launch of the Atlanta Science Festival, \u0026ldquo;\u003Ca href=\u0022https:\/\/arts.gatech.edu\/content\/2100-climate-odyssey\u0022 
target=\u0022_blank\u0022\u003E2100: A Climate Odyssey\u003C\/a\u003E\u0026rdquo; at Georgia Tech\u0026rsquo;s Ferst Center for the Arts, takes place March 6 and\u0026nbsp;is designed as an\u0026nbsp;\u0026quot;immersive theatrical experience that transports audience members to a possible future that looks at life after a century of climate change.\u0026quot;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ASF opening weekend at Georgia Tech also includes one of the most distinctive experiences in music performance on March 7. The\u0026nbsp;\u003Ca href=\u0022https:\/\/guthman.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EGuthman Musical Instrument Competition Concert\u003C\/a\u003E\u0026nbsp;showcases nine\u0026nbsp;global finalists playing unique instruments created for the competition. It\u0026#39;s free and open to the public.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nNew this year as part of\u0026nbsp;the Guthman event is the\u0026nbsp;\u003Ca href=\u0022https:\/\/guthman.gatech.edu\/fair?mc_cid=5630f343d4\u0026amp;mc_eid=%5bUNIQID%5d\u0022 target=\u0022_blank\u0022\u003EMusic, Art, and Technology Fair\u003C\/a\u003E, hosted by the\u0026nbsp;\u003Ca href=\u0022http:\/\/www.music.gatech.edu\/?mc_cid=5630f343d4\u0026amp;mc_eid=%5BUNIQID%5D\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech School of Music\u003C\/a\u003E\u0026nbsp;and\u0026nbsp;\u003Ca href=\u0022http:\/\/www.cycling74.com\/?mc_cid=5630f343d4\u0026amp;mc_eid=%5BUNIQID%5D\u0022 target=\u0022_blank\u0022\u003ECycling \u0026rsquo;74\u003C\/a\u003E. 
It\u0026#39;s\u0026nbsp;a unique opportunity to share projects at the intersection of art and technology in a hands-on, interactive, science-fair format.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe GVU Center at Georgia Tech created interactive graphics to \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/AtlantaScienceFestival2020\/Dashboard1?%3Adisplay_count=y\u0026amp;%3Aorigin=viz_share_link\u0026amp;%3AshowVizHome=no\u0022\u003Eexplore the two-week festival\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/AtlantaScienceFestival2020-eventsatGeorgiaTech\/Dashboard1?:display_count=y\u0026amp;:origin=viz_share_link:showVizHome=no\u0022\u003Efind specific events connected to Georgia Tech\u003C\/a\u003E. The institute is one of the founding members of ASF.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EASF events presented by or taking place at Georgia Tech:\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003E\u003Cem\u003EMarch 6\u003C\/em\u003E\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/launch\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003E2100: A Climate Odyssey\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partners: Science ATL, The Weather Channel, the National Weather Service, Peachtree City, Out of Hand Theater.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 8 p.m., Ferst Center for the Arts at Georgia Tech\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003E\u003Cem\u003EMarch 7\u003C\/em\u003E\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/12-steam-at-tech-day\/\u0022 target=\u0022_blank\u0022\u003ESTEAM at Tech 
Day\u003C\/a\u003E\u0026nbsp;\u0026nbsp;\u0026nbsp; \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partner: Georgia Tech CEISMC\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 12 p.m., Clough Undergraduate Learning Commons at Georgia Tech\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/14-pitch-your-future\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EPitch Your Future\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partners:\u0026nbsp;Institute for Electronics and Nanotechnology at Georgia Tech, Jimmy Carter Presidential Library\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 1 p.m.,\u0026nbsp;Carter Presidential Library \u0026amp; Museum\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/154-music-art-tech-fair\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EGuthman Music, Art \u0026amp; Technology Fair\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partners: Georgia Tech\u0026rsquo;s School of Music, Cycling \u0026lsquo;74\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 4 p.m., Ferst Center for the Arts at Georgia Tech\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/21-guthman-musical-instrument-competition\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EGuthman Musical Instrument Competition\u0026nbsp; \u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s School of Music\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 7 p.m., Ferst Center for the Arts at Georgia 
Tech\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003E\u003Cem\u003EMarch 10\u003C\/em\u003E\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/40-project-change\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EProject Change: STEM Teachers @ Tech Day\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partner: Georgia Tech CEISMC\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 9 a.m., Georgia Tech Student Center\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/46-playing-mother-nature\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EPlaying Mother Nature: A Night of Simulating Earth Science Phenomena!\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partner: Georgia Tech\u0026rsquo;s School of Earth and Atmospheric Science\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 7 p.m., Manuel\u0026#39;s Tavern\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003E\u003Cem\u003EMarch 12\u003C\/em\u003E\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/69-sober-science-speakeasy\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ESober Science Speakeasy\u003C\/strong\u003E\u003C\/a\u003E\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partner: Georgia Tech\u0026rsquo;s STEM Comm VIP team\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 7:30 p.m., Coda Building, Midtown\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/68-science-riot\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EScience Riot\u003C\/strong\u003E\u003C\/a\u003E\u0026nbsp;\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partners: Georgia Tech, Science Riot\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 7:30 p.m., Highland Inn Ballroom\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003E\u003Cem\u003EMarch 14\u003C\/em\u003E\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/83-latino-college-stem-fair\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003E8th Annual Latino College \u0026amp; STEM Fair\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partners: Georgia Tech CEISMC Go-STEM\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 9 a.m., Georgia Tech Student Center\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/85-science-of-star-wars\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EScience of Star Wars\u003C\/strong\u003E\u003C\/a\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partner: Institute for Electronics and Nanotechnology at Georgia Tech\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 10 a.m., Marcus Nanotechnology Building Atrium\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/91-investigating-the-nanoscale\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EInvestigating the Nanoscale\u003C\/strong\u003E\u003C\/a\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partner: Institute 
for Electronics and Nanotechnology at Georgia Tech\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 11 a.m., Marcus Nanotechnology Building Atrium\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003E\u003Cem\u003EMarch 18\u003C\/em\u003E\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/atlantasciencefestival.org\/events-2020\/134-science-improv\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EScience Improv\u003C\/strong\u003E\u003C\/a\u003E\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPresenting Partner: Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETime and Location: 7:30 p.m., Whole World Improv Theater\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The two-week Atlanta Science Festival will launch at Georgia Tech and bring diverse STEM programming to campus and metro area."}],"uid":"27592","created_gmt":"2020-02-27 15:48:01","changed_gmt":"2020-03-05 18:03:10","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-02-27T00:00:00-05:00","iso_date":"2020-02-27T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"633045":{"id":"633045","type":"image","title":"ASF @ GT 2020","body":null,"created":"1582819019","gmt_created":"2020-02-27 15:56:59","changed":"1582819087","gmt_changed":"2020-02-27 15:58:07","alt":"","file":{"fid":"240880","name":"ASF_at_GT 
2020_2.png","image_path":"\/sites\/default\/files\/images\/ASF_at_GT%202020_2.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ASF_at_GT%202020_2.png","mime":"image\/png","size":482203,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ASF_at_GT%202020_2.png?itok=tfvnQcL_"}},"633242":{"id":"633242","type":"image","title":"Guthman Finalists Map 2020","body":null,"created":"1583263209","gmt_created":"2020-03-03 19:20:09","changed":"1583263209","gmt_changed":"2020-03-03 19:20:09","alt":"World map showing where Guthman Competition finalists came from in 2015 through 2020.","file":{"fid":"240941","name":"2020.guthman.contestants.png","image_path":"\/sites\/default\/files\/images\/2020.guthman.contestants.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/2020.guthman.contestants.png","mime":"image\/png","size":678997,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/2020.guthman.contestants.png?itok=0KhCAwK5"}}},"media_ids":["633045","633242"],"related_links":[{"url":"https:\/\/atlantasciencefestival.org\/","title":"Atlanta Science Festival"},{"url":"https:\/\/public.tableau.com\/views\/AtlantaScienceFestival2020\/Dashboard1?%3Adisplay_count=y\u0026%3Aorigin=viz_share_link\u0026%3AshowVizHome=no","title":"INTERACTIVE GRAPHIC: Atlanta Science Festival"},{"url":"https:\/\/public.tableau.com\/views\/Guthman2020\/Dashboarddemo?:showVizHome=no","title":"INTERACTIVE GRAPHIC: Guthman Musical Map"}],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39451","name":"Electronics and Nanotechnology"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca 
href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Cem\u003EGVU Center and College of Computing\u003C\/em\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"632082":{"#nid":"632082","#data":{"type":"news","title":"Changing the Conversation: Georgia Tech Researchers Provide New Approach to Automated Story Generation","body":[{"value":"\u003Cp\u003EIt\u0026rsquo;s a situation familiar to anyone who\u0026rsquo;s ever communicated with a voice assistant on a smart device. You pose a request: \u0026ldquo;Hey Voice Assistant, tell me a story about Georgia Tech.\u0026rdquo; More often than not, you get a related response \u0026ndash; \u0026ldquo;Georgia Tech is located in Atlanta, Georgia. Would you like me to provide you with directions?\u0026rdquo; \u0026ndash; but one with slightly unnatural language and only limited information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite the enormous strides made in artificial intelligence to develop systems that can answer simple questions and requests, the kinds of natural conversational language humans have with each other when giving more complex directions or telling stories has thus far been out of reach.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch from \u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, however, provides a novel approach that improves the combination of automated story generation with natural language. 
The development is an important step toward giving AI assistants the capability to converse more naturally with humans.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Let\u0026rsquo;s think of a future version of Siri or Alexa, where you have a complex task that\u0026rsquo;s not just \u0026lsquo;Look this thing up on the internet,\u0026rsquo; or \u0026lsquo;Tell me what the weather is outside,\u0026rsquo;\u0026rdquo; said Mark Riedl, an associate professor at Georgia Tech and the faculty lead on the research. \u0026ldquo;Maybe you want to plan your day or a birthday party. Think of the response like a little story, a narrative that conveys the requested information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s a missing capability in AI \u0026ndash; they just don\u0026rsquo;t understand us or communicate with us in the same ways that we understand each other.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERiedl and his team approached the challenge by viewing the exchange of information as stories \u0026ndash; a series of events, one after the other, that leads to some conclusion. Past research on the topic analyzed patterns in language to identify how stories are constructed \u0026ndash; namely, that a verb generally changes the action and conveys a new event in a story.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;By boiling down these stories drawn from the internet to essential verbs and actions, we can extract patterns from stories better,\u0026rdquo; Riedl said. \u0026ldquo;There are a lot of ways to talk about marriage, but at the end of the day someone is marrying someone else.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis paper, the third in the series, took the next step: If you take away all the words to identify the patterns in a story, you need to be able to put them back in naturally and intelligently, in a way that humans are accustomed to. 
Put simply, it\u0026rsquo;s like building an outline and then filling in the details.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe system works by building the outline through a neural network trained on sequencing events. With the help of story examples drawn from the internet, it applies machine learning to produce a series of events, each leading to the most likely next outcome. That outline guides a second neural network that applies natural language \u0026ndash; grammar, syntax, spelling, everything else you need to make the story intelligible \u0026ndash; to produce more elaborate sentences.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you\u0026rsquo;re asking for directions for how a birthday party should go, you don\u0026rsquo;t want just \u0026lsquo;Jill eats cake; Jill opens presents,\u0026rsquo;\u0026rdquo; Riedl said. \u0026ldquo;You want something more akin to the stories we share as humans. It\u0026rsquo;s actually more difficult for us to process information when it\u0026rsquo;s delivered in a way we\u0026rsquo;re not accustomed to.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers found that an ensemble approach works best: a series of five algorithms, each with different strengths in accuracy and natural language generation. Because no single algorithm is uniformly better at every aspect of the task, each sentence is run through all five, and the output with the highest confidence is selected.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One technique might provide bland sentences, but is accurate with the actual content,\u0026rdquo; Riedl said. \u0026ldquo;Another might be very good at putting in a narrative flourish, but they fail more often. You want that nicer sentence, but you also want it to be able to catch mistakes in the content.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ensemble approach scored significantly higher in human studies than the individual algorithms alone. 
Human trust in AI and robot assistants, Riedl said, will be key to their adoption in the future.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The key is that you want to place that trust in your machine counterpart, but it has to earn that trust on correctness and accuracy,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper is titled \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1909.03480\u0022\u003E\u003Cem\u003EStory Realization: Expanding Plot Events into Sentences\u003C\/em\u003E\u003C\/a\u003E and will be presented at the \u003Ca href=\u0022https:\/\/aaai.org\/Conferences\/AAAI-20\/\u0022\u003E34\u003Csup\u003Eth\u003C\/sup\u003E AAAI Conference on Artificial Intelligence\u003C\/a\u003E on Feb. 7-12 in New York City. The research is funded under a grant from \u003Ca href=\u0022https:\/\/www.darpa.mil\/\u0022\u003EDARPA\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Research from Georgia Tech\u2019s School of Interactive Computing provides a novel approach that improves the combination of automated story generation with natural language."}],"uid":"33939","created_gmt":"2020-02-04 15:56:44","changed_gmt":"2020-02-07 18:44:39","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-02-04T00:00:00-05:00","iso_date":"2020-02-04T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"632081":{"id":"632081","type":"image","title":"Amazon Alexa","body":null,"created":"1580831771","gmt_created":"2020-02-04 15:56:11","changed":"1580831771","gmt_changed":"2020-02-04 
15:56:11","alt":"","file":{"fid":"240496","name":"alexa-alexa-talking-amazon-cortana-717235.jpg","image_path":"\/sites\/default\/files\/images\/alexa-alexa-talking-amazon-cortana-717235.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/alexa-alexa-talking-amazon-cortana-717235.jpg","mime":"image\/jpeg","size":52083,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/alexa-alexa-talking-amazon-cortana-717235.jpg?itok=CD6Eyss8"}}},"media_ids":["632081"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1317","name":"News Briefs"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"631901":{"#nid":"631901","#data":{"type":"news","title":"Jill Watson Team Reaches Semifinals in IBM AI XPrize Competition","body":[{"value":"\u003Cp\u003EAlgorithms that help answer the stream of questions college students have each semester might be welcome by any instructor who can offload FAQs to such an artificially intelligent teaching assistant (TA).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJill Watson \u0026ndash; Georgia Tech\u0026rsquo;s AI designed explicitly for 
this purpose \u0026ndash; turned four years old this January, with the AI\u0026rsquo;s birthday coinciding with the announcement of the \u003Ca href=\u0022https:\/\/ai.xprize.org\/prizes\/artificial-intelligence\/teams\u0022 target=\u0022_blank\u0022\u003E10 semifinalists for IBM\u0026rsquo;s AI XPrize competition\u003C\/a\u003E. Georgia Tech\u0026rsquo;s \u003Ca href=\u0022https:\/\/ai.xprize.org\/prizes\/artificial-intelligence\/teams\/emprize\u0022 target=\u0022_blank\u0022\u003EemPrize team\u003C\/a\u003E, led by Professor of Interactive Computing\u0026nbsp;\u003Cstrong\u003EAshok Goel \u003C\/strong\u003Eand utilizing Jill Watson as the key technology, was named as one of the semifinalists.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe competition started in 2016, the year of Jill\u0026rsquo;s arrival in a graduate computer science\u0026nbsp;course at Georgia Tech, and has \u0026ldquo;sought to accelerate the adoption of AI technologies and spark creative, innovative, and audacious demonstrations of the technology that are truly scalable to solve societal grand challenges.\u0026rdquo; After nearly four calendar years, XPrize will name a winner in April.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs part of the GT emPrize team\u0026rsquo;s work, the Jill Watson TA not only answers student questions about course requirements but can answer questions about another AI named \u003Ca href=\u0022http:\/\/vera.cc.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EVERA\u003C\/a\u003E, or the Virtual Ecological Research Assistant.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJill helps users learn how to use VERA, a system which enables students in GT\u0026rsquo;s Intro to Biology course (and online science seekers) to create their own ecological models from\u0026nbsp;a\u0026nbsp;web browser. 
Unlike the Jill Watson TA, which is currently used only by GT students, VERA is open to anyone with an internet connection.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother part of emPrize\u0026nbsp;is the Jill Social Agent, whose lead designer, \u003Cstrong\u003EIda Camacho\u003C\/strong\u003E, is a recent\u0026nbsp;alumna of Georgia Tech\u0026rsquo;s Online Master of Science in Computer Science program (OMSCS) and understands the pressures and uncertainties of online learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Jill Social Agent in essence gives students just starting online courses a chance at \u0026ldquo;speed friending\u0026rdquo;. If online students feel they have more peer support and connections from the start, this might\u0026nbsp;translate into success in the course. Hear from Camacho on the \u003Ca href=\u0022https:\/\/www.spreaker.com\/user\/10751784\/tu-ep10-jill-social-ai-online-learninG\u0022 target=\u0022_blank\u0022\u003ETech Unbound podcast with GVU Center\u003C\/a\u003E\u0026nbsp;as she reveals some of her AI\u0026rsquo;s design and the educational experience that informed her work on emPrize.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELearn more at\u0026nbsp;\u003Ca href=\u0022http:\/\/emprize.gatech.edu\/\u0022\u003Ehttp:\/\/emprize.gatech.edu\/\u003C\/a\u003E\u0026nbsp;or explore a \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/JillWatsonTurns4\/Dashboard?:display_count=y\u0026amp;:origin=viz_share_link:showVizHome=no\u0022 target=\u0022_blank\u0022\u003Etimeline of Jill\u0026#39;s evolution\u003C\/a\u003E.\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EJill Watson \u0026ndash; Georgia Tech\u0026rsquo;s AI designed explicitly for answering student questions about specific courses\u0026nbsp;\u0026ndash; turned four years old this January, with the AI\u0026rsquo;s birthday coinciding with the announcement of the 10 semifinalists for IBM\u0026rsquo;s AI XPrize 
competition. Georgia Tech\u0026rsquo;s emPrize team, led by Professor of Interactive Computing\u0026nbsp;\u003Cstrong\u003EAshok Goel \u003C\/strong\u003Eand utilizing Jill Watson as the key technology, was named as one of the semifinalists.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Jill Watson \u2013 Georgia Tech\u2019s AI designed explicitly for answering student questions about specific courses \u2013 was named as one of 10 semifinalists in IBM\u2019s AI XPrize competition."}],"uid":"27592","created_gmt":"2020-01-30 17:43:15","changed_gmt":"2020-01-30 18:04:26","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-01-30T00:00:00-05:00","iso_date":"2020-01-30T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"631547":{"id":"631547","type":"image","title":"Timeline: Jill Watson AI at 4","body":null,"created":"1579883925","gmt_created":"2020-01-24 16:38:45","changed":"1580406385","gmt_changed":"2020-01-30 17:46:25","alt":"Timeline: Jill Watson AI at 4yo","file":{"fid":"240330","name":"Jill Timeline at 4yo.png","image_path":"\/sites\/default\/files\/images\/Jill%20Timeline%20at%204yo.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Jill%20Timeline%20at%204yo.png","mime":"image\/png","size":743294,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Jill%20Timeline%20at%204yo.png?itok=omFER7Lg"}}},"media_ids":["631547"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr 
\/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Cem\u003EGVU Center and College of Computing\u003C\/em\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"631545":{"#nid":"631545","#data":{"type":"news","title":"Jill Watson, an AI Pioneer in Education, Turns 4","body":[{"value":"\u003Cp\u003EGeorgia Tech\u0026rsquo;s most well-known artificially intelligent teaching assistant, Jill Watson, turns four years old this January. The brainchild of \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E, professor in Interactive Computing, and launched at the start of 2016, the virtual TA was introduced into one of the courses for the then-fledgling Online Master of Science in Computer Science (OMSCS) program, now one of Georgia Tech\u0026rsquo;s largest graduate degree programs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudents and faculty would be forgiven in thinking Jill Watson is a single teaching assistant. Each course that utilizes the Jill TA has its own custom \u0026ldquo;knowledge base\u0026rdquo; that the AI leverages to answer basic student questions 24\/7.\u003C\/p\u003E\r\n\r\n\u003Ch5\u003E\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/JillWatsonTurns4\/Dashboard?:display_count=y\u0026amp;:origin=viz_share_link:showVizHome=no\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EExplore the Timeline of Jill\u0026rsquo;s Growth\u003C\/strong\u003E\u003C\/a\u003E\u003C\/h5\u003E\r\n\r\n\u003Cp\u003EIn addition, a new AI, the \u003Cstrong\u003EJill Social Agent\u003C\/strong\u003E, was designed and launched in 2019 to explicitly connect students quickly and get them working together. 
The agent was developed in part as a response to high attrition rates that plague online learning in general.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe lead architect for the Jill Social Agent, \u003Cstrong\u003EIda Camacho\u003C\/strong\u003E, OMSCS \u0026rsquo;19, discusses\u0026nbsp;the\u0026nbsp;AI\u0026nbsp;on an episode of the\u0026nbsp;\u003Ca href=\u0022https:\/\/gvu.gatech.edu\/tech-unbound-podcast\u0022\u003ETech Unbound Podcast\u003C\/a\u003E from the GVU Center. It\u0026rsquo;s a fascinating inside look at Camacho\u0026rsquo;s approach to building social structures for online education and her own journey as an OMSCS student.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther major milestones from the Jill TA in 2019:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EIntroduced in a residential classroom for the first time.\u003C\/li\u003E\r\n\t\u003Cli\u003EDeployed in its first non-CS course (Intro to Biology).\u003C\/li\u003E\r\n\t\u003Cli\u003ECustomized to train users on the \u003Ca href=\u0022http:\/\/vera.cc.gatech.edu\/\u0022\u003EVERA AI\u003C\/a\u003E, an ecology modeling system.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EThe new decade promises more educational advances made possible by the Jill Watson AI framework. 
Learn more at \u003Ca href=\u0022http:\/\/emprize.gatech.edu\/\u0022\u003Eemprize.gatech.edu\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech\u0026rsquo;s most well-known artificially intelligent teaching assistant, Jill Watson, turns four years old this January.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech\u2019s most well-known artificially intelligent teaching assistant, Jill Watson, turns four years old this January."}],"uid":"27592","created_gmt":"2020-01-24 16:30:24","changed_gmt":"2020-01-24 17:23:58","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-01-24T00:00:00-05:00","iso_date":"2020-01-24T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"631547":{"id":"631547","type":"image","title":"Timeline: Jill Watson AI at 4","body":null,"created":"1579883925","gmt_created":"2020-01-24 16:38:45","changed":"1580406385","gmt_changed":"2020-01-30 17:46:25","alt":"Timeline: Jill Watson AI at 4yo","file":{"fid":"240330","name":"Jill Timeline at 4yo.png","image_path":"\/sites\/default\/files\/images\/Jill%20Timeline%20at%204yo.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Jill%20Timeline%20at%204yo.png","mime":"image\/png","size":743294,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Jill%20Timeline%20at%204yo.png?itok=omFER7Lg"}}},"media_ids":["631547"],"related_links":[{"url":"http:\/\/gvu.gatech.edu\/news\/ai-agent-breaks-down-social-barriers-online-education","title":"A Closer Look at the Jill Social Agent"},{"url":"http:\/\/emprize.gatech.edu\/","title":"Georgia Tech Finalist in IBM AI XPrize Competition"},{"url":"https:\/\/www.spreaker.com\/user\/10751784\/tu-ep10-jill-social-ai-online-learninG","title":"Tech Unbound EP10: Online Education Gets a 
Social Boost with Artificial Intelligence"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Cem\u003EGVU Center and College of Computing\u003C\/em\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"630479":{"#nid":"630479","#data":{"type":"news","title":"ML@GT Adds Six New Associate Directors to Leadership Team","body":[{"value":"\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E continues to diversify and expand its leadership team. Starting in January the leadership team will add \u003Cstrong\u003EDeven Desai, Polo Chau, Mark Davenport, Yao Xie, Mark Riedl, \u003C\/strong\u003Eand \u003Cstrong\u003EGeorge Lan\u003C\/strong\u003E as associate directors\u003Cstrong\u003E.\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDesai, an associate professor in the \u003Ca href=\u0022https:\/\/www.scheller.gatech.edu\/directory\/faculty\/desai\/index.html\u0022\u003EScheller College of Business\u003C\/a\u003E, will be the center\u0026rsquo;s first associate director for Legal, Policy, Ethics, and Machine Learning. 
Not a technologist by training, Desai will draw from his experience working at Princeton\u0026#39;s Center for Information Technology Policy and Google as Academic Research Counsel to help policymakers, legal scholars, and technologists work better together. This includes helping each party understand how a given technology works and what issues it might raise.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am excited to be part of ML@GT because of the opportunity to be part of a world-class group of thinkers and to connect our work to the world. I believe there is a need to bridge the worlds of technology and law, policy, and ethics,\u0026rdquo; said Desai. \u0026ldquo;ML@GT is poised to increase not only machine learning insights and breakthroughs but also the way in which machine learning is built and used to serve society. I am honored and thrilled to be part of building that future.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EXie, an associate professor in the \u003Ca href=\u0022https:\/\/www.isye.gatech.edu\/\u0022\u003EH. Milton Stewart School of Industrial and Systems Engineering (ISyE),\u003C\/a\u003E is the first woman to join the leadership team. She will serve as the associate director for machine learning and data science, where she will create better synergy between the ongoing research and education efforts in data science and machine learning as Georgia Tech builds a leading program in these areas.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am particularly excited to work with the broader community of students and faculty on campus who are interested or involved with machine learning and data science and foster their participation,\u0026rdquo; said Xie.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELan, also an associate professor in ISyE, has been appointed as the associate director for machine learning and statistics. 
In this role, Lan will promote research at the intersection of optimization, statistics, and machine learning, as well as its applications in engineering. He will also help facilitate communication among students coming from different home colleges and schools across campus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am excited to be joining the team with active and dynamic academic leaders. I look forward to working with them to address a diverse set of challenges that ML@GT faces, e.g., being adaptive to the priorities and criteria for our affiliated faculty members and students across different academic units,\u0026rdquo; said Lan.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs the associate director for machine learning and artificial intelligence, Riedl, an associate professor in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, will coordinate ML@GT\u0026rsquo;s strategy with respect to the broader field of artificial intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Artificial intelligence and machine learning have the potential to radically change virtually every aspect of our lives. With thought and care, these technologies can be a force for good. Georgia Tech is well-positioned to be a major voice in how technology and policy shape the future,\u0026rdquo; said Riedl.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith more corporations integrating machine learning and artificial intelligence into their businesses, the center\u0026rsquo;s need for managing those relationships has increased significantly. 
Chau, an associate professor in the \u003Ca href=\u0022https:\/\/cse.gatech.edu\/\u0022\u003ESchool of Computational Science and Engineering\u003C\/a\u003E, will lead those relationships as the associate director for corporate relations for machine learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I enjoy bringing people together, connecting industry with Georgia Tech researchers, bridging disciplines and innovating at their intersections. I\u0026rsquo;m excited to begin my new role as it will be a great way to help Georgia Tech further expand its national and global footprint,\u0026rdquo; said Chau.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs the associate director for community and students, Davenport is charged with creating a tight-knit community among faculty and students. Davenport, an associate professor in the \u003Ca href=\u0022https:\/\/www.ece.gatech.edu\/\u0022\u003ESchool of Electrical and Computer Engineering\u003C\/a\u003E, will work closely with the center staff to coordinate events and other opportunities to increase discussion and collaboration between research units.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe six new members will join \u003Ca href=\u0022http:\/\/ml.gatech.edu\/leadership\u0022\u003Eexisting leadership members\u003C\/a\u003E \u003Cstrong\u003EIrfan Essa, Justin Romberg, Zsolt Kira, \u003C\/strong\u003Eand \u003Cstrong\u003ELe Song. \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Ch4\u003EAbout the Machine Learning Center at Georgia Tech\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EThe Machine Learning Center at Georgia Tech is an interdisciplinary research center bringing together more than 190 faculty members and 60 machine learning Ph.D. students from across the institute for meaningful collaboration and innovation in machine learning and artificial intelligence. 
Students and faculty are experts in areas including, but not limited to, computer vision, natural language processing, robotics, deep learning, ethics and fairness, computational finance, information security, and logistics and manufacturing. For more information, visit \u003Ca href=\u0022http:\/\/www.ml.gatech.edu\u0022\u003Ewww.ml.gatech.edu\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Machine Learning Center at Georgia Tech enters the new year with an expanded leadership team. "}],"uid":"34773","created_gmt":"2020-01-03 21:55:17","changed_gmt":"2020-01-06 13:00:55","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-01-06T00:00:00-05:00","iso_date":"2020-01-06T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"630495":{"id":"630495","type":"image","title":"ML@GT adds six new associate directors to the leadership team from across the institute.","body":null,"created":"1578314978","gmt_created":"2020-01-06 12:49:38","changed":"1578315834","gmt_changed":"2020-01-06 13:03:54","alt":"ML@GT adds six new associate directors to the leadership team","file":{"fid":"240039","name":"ML_AssociateDirectors.png","image_path":"\/sites\/default\/files\/images\/ML_AssociateDirectors.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ML_AssociateDirectors.png","mime":"image\/png","size":804221,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ML_AssociateDirectors.png?itok=iMIhhF4U"}},"630498":{"id":"630498","type":"image","title":"Deven Desai, Associate Director for Legal, Policy, Ethics, and Machine Learning","body":null,"created":"1578315260","gmt_created":"2020-01-06 12:54:20","changed":"1578315260","gmt_changed":"2020-01-06 12:54:20","alt":"Deven 
Desai","file":{"fid":"240042","name":"desai_deven_profile.jpg","image_path":"\/sites\/default\/files\/images\/desai_deven_profile_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/desai_deven_profile_0.jpg","mime":"image\/jpeg","size":73508,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/desai_deven_profile_0.jpg?itok=LyBZKrKM"}},"630501":{"id":"630501","type":"image","title":"Yao Xie, Associate Director for Machine Learning and Data Science ","body":null,"created":"1578315482","gmt_created":"2020-01-06 12:58:02","changed":"1578315482","gmt_changed":"2020-01-06 12:58:02","alt":"Yao Xie","file":{"fid":"240045","name":"yao_xie_2013_3.jpg","image_path":"\/sites\/default\/files\/images\/yao_xie_2013_3.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/yao_xie_2013_3.jpg","mime":"image\/jpeg","size":112071,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/yao_xie_2013_3.jpg?itok=YF1suppd"}},"630499":{"id":"630499","type":"image","title":"George Lan, Associate Director for Machine Learning and Statistics","body":null,"created":"1578315328","gmt_created":"2020-01-06 12:55:28","changed":"1578315328","gmt_changed":"2020-01-06 12:55:28","alt":"George Lan","file":{"fid":"240043","name":"gl_2.jpg","image_path":"\/sites\/default\/files\/images\/gl_2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/gl_2.jpg","mime":"image\/jpeg","size":63569,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/gl_2.jpg?itok=H7Kg9FBb"}},"630496":{"id":"630496","type":"image","title":"Mark Riedl, Associate Director for Machine Learning and Artificial Intelligence","body":null,"created":"1578315077","gmt_created":"2020-01-06 12:51:17","changed":"1578315077","gmt_changed":"2020-01-06 12:51:17","alt":"Mark 
Riedl","file":{"fid":"240040","name":"mark_riedl_007.jpg","image_path":"\/sites\/default\/files\/images\/mark_riedl_007.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/mark_riedl_007.jpg","mime":"image\/jpeg","size":213042,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/mark_riedl_007.jpg?itok=cvjhVAME"}},"630500":{"id":"630500","type":"image","title":"Polo Chau, Associate Director for Corporate Relations for Machine Learning","body":null,"created":"1578315397","gmt_created":"2020-01-06 12:56:37","changed":"1578315397","gmt_changed":"2020-01-06 12:56:37","alt":"Polo Chau","file":{"fid":"240044","name":"polo_chau_550x688_01_2.jpg","image_path":"\/sites\/default\/files\/images\/polo_chau_550x688_01_2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/polo_chau_550x688_01_2.jpg","mime":"image\/jpeg","size":222467,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/polo_chau_550x688_01_2.jpg?itok=A5nX3-qd"}},"630497":{"id":"630497","type":"image","title":"Mark Davenport, Associate Director for Community and Students","body":null,"created":"1578315143","gmt_created":"2020-01-06 12:52:23","changed":"1578315143","gmt_changed":"2020-01-06 12:52:23","alt":"Mark Davenport","file":{"fid":"240041","name":"davenport-square.jpg","image_path":"\/sites\/default\/files\/images\/davenport-square.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/davenport-square.jpg","mime":"image\/jpeg","size":31569,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/davenport-square.jpg?itok=z5EYby4U"}}},"media_ids":["630495","630498","630501","630499","630496","630500","630497"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"47223","name":"College of Computing"},{"id":"37041","name":"Computational Science and Engineering"},{"id":"1299","name":"GVU 
Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"129","name":"Institute and Campus"},{"id":"134","name":"Student and Faculty"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"629748":{"#nid":"629748","#data":{"type":"news","title":"Amazon and Georgia Tech Team Up with Ciara to Inspire Students to Code through Competition to Remix the Singer\/Songwriter\u2019s Song \u201cSET\u201d","body":[{"value":"\u003Cp\u003EAmazon has\u0026nbsp;announced a new addition to its Amazon Future Engineer program \u0026ndash; a\u0026nbsp;\u003Ca href=\u0022https:\/\/cts.businesswire.com\/ct\/CT?id=smartlink\u0026amp;url=https%3A%2F%2Fwww.amazonfutureengineer.com%2Fearsketch\u0026amp;esheet=52132432\u0026amp;newsitemid=20191120005225\u0026amp;lan=en-US\u0026amp;anchor=music+remix+competition\u0026amp;index=1\u0026amp;md5=55e2731a08604d2df275f4575b568720\u0022\u003Emusic remix competition\u003C\/a\u003E\u0026nbsp;that teaches students how to write code that makes music. 
Alongside\u0026nbsp;Georgia Tech\u0026nbsp;and their learn-to-code-through music platform, EarSketch, participating high school students have the opportunity to win prizes by composing an original remix featuring original music stems from Grammy Award winning singer\/songwriter\u0026nbsp;\u003Ca href=\u0022https:\/\/cts.businesswire.com\/ct\/CT?id=smartlink\u0026amp;url=https%3A%2F%2Ftwitter.com%2Fciara%2Fstatus%2F1196820429748334592\u0026amp;esheet=52132432\u0026amp;newsitemid=20191120005225\u0026amp;lan=en-US\u0026amp;anchor=Ciara\u0026amp;index=2\u0026amp;md5=82ff2e20a316ce9fe5bab3182da6ea6b\u0022\u003ECiara\u003C\/a\u003E\u0026nbsp;and her song, \u0026ldquo;SET\u0026rdquo; from her latest album\u0026nbsp;\u003Cem\u003EBeauty Marks\u003C\/em\u003E. The competition is intended to uniquely activate young people to try computer science and coding. All high school students across the country are encouraged to\u0026nbsp;\u003Ca href=\u0022https:\/\/cts.businesswire.com\/ct\/CT?id=smartlink\u0026amp;url=https%3A%2F%2Fwww.amazonfutureengineer.com%2Fearsketch\u0026amp;esheet=52132432\u0026amp;newsitemid=20191120005225\u0026amp;lan=en-US\u0026amp;anchor=enter\u0026amp;index=3\u0026amp;md5=caa3175e0a1c7088703a02ee88feed24\u0022\u003Eenter\u003C\/a\u003E\u0026nbsp;the competition now through\u0026nbsp;January 20\u003Csup\u003Eth\u003C\/sup\u003E. Teaching guides are available for teachers to bring the competition to their classroom, or as part of their introductory computer science curriculum.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudents will use computer science and coding to build their remix using musical samples from Ciara\u0026rsquo;s song \u0026ldquo;SET,\u0026rdquo; as well as other sounds from the EarSketch library. 
Students will learn looping (repeating) to extend the length of their song, use strings to create new beats, create custom functions representing different song sections, and learn to upload their own sounds to the EarSketch library.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We are excited to support the innovative and unique work\u0026nbsp;Georgia Tech\u0026nbsp;and EarSketch are pioneering to give students across the country more access to computer science, coding, and music,\u0026rdquo; said\u0026nbsp;Jeff Wilke, CEO Worldwide Consumer,\u0026nbsp;Amazon. \u0026ldquo;This competition will give thousands of students from underserved and underrepresented communities the opportunity to try something new and fun. It will build their confidence and, most importantly, encourage them to think creatively.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudents and teachers can learn more about the competition details at\u0026nbsp;\u003Ca href=\u0022https:\/\/cts.businesswire.com\/ct\/CT?id=smartlink\u0026amp;url=http%3A%2F%2Fwww.amazonfutureengineer.com%2Fearsketch\u0026amp;esheet=52132432\u0026amp;newsitemid=20191120005225\u0026amp;lan=en-US\u0026amp;anchor=www.amazonfutureengineer.com%2Fearsketch\u0026amp;index=4\u0026amp;md5=8ec92013daecf047f0df18a11b07d6d6\u0022\u003Ewww.amazonfutureengineer.com\/earsketch\u003C\/a\u003E\u0026nbsp;- all high school students are encouraged to participate, either in class or on their own. The top three student winners will each receive an all-expenses-paid trip to Amazon\u0026rsquo;s headquarters in\u0026nbsp;Seattle, Washington\u0026nbsp;to be an \u0026ldquo;Amazon Future Engineer\u0026rdquo; for the day. Additional winners will receive a PreSonus Audiobox 96 Studio and Amazon.com Gift Cards. 
The competition opens today and will close on\u0026nbsp;Monday, January 20\u0026nbsp;at\u0026nbsp;11:59PM EST.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Bureau of Labor Statistics\u0026nbsp;projects that by 2020 there will be 1.4 million computer-science-related jobs available and only 400,000 computer science graduates with the skills to apply for those jobs. Computer science is the fastest-growing profession within the Science, Technology, Engineering and Math (STEM) field, but only 8% of STEM graduates earn a computer science degree, with a small percentage from underserved backgrounds. In multiple research studies, students using the EarSketch platform significantly increased their positive attitudes towards computing and their intentions to persist in computing, with particularly significant impacts on students from groups historically underrepresented in the field. Female students expressed even greater gains in computing confidence, motivation, and identity as compared to their male counterparts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Music and coding are both highly prevalent today, but coding is less apparent\u0026mdash;EarSketch provides an amazing way to experience these creative disciplines simultaneously,\u0026rdquo; said Dr.\u0026nbsp;Roxanne Moore, Senior Research Engineer at\u0026nbsp;Georgia Tech\u0026nbsp;and project lead for the remix competition.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We know that creativity is the key to success in both music and computer science,\u0026rdquo; said Professor\u0026nbsp;Jason Freeman, Chair of the\u0026nbsp;School of Music\u0026nbsp;at\u0026nbsp;Georgia Tech\u0026nbsp;and co-creator of the EarSketch platform. 
\u0026ldquo;We\u0026rsquo;re thrilled to partner with\u0026nbsp;Amazon\u0026nbsp;to support more students as they unlock their creative potential through EarSketch.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo date, more than 375,000 students in all 50 states and over 100 countries have used EarSketch. Programs like EarSketch serve Georgia Tech\u0026rsquo;s mission to meet the demand for STEM (science, technology, engineering, and mathematics) professionals by opening the eyes of more students to these engaging and important subjects.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELaunched in\u0026nbsp;November 2018, Amazon Future Engineer is a four-part childhood-to-career program intended to inspire, educate, and prepare children and young adults from underrepresented and underserved communities to pursue careers in the fast-growing field of computer science. Each year,\u0026nbsp;\u003Ca href=\u0022https:\/\/cts.businesswire.com\/ct\/CT?id=smartlink\u0026amp;url=https%3A%2F%2Fwww.amazonfutureengineer.com%2F\u0026amp;esheet=52132432\u0026amp;newsitemid=20191120005225\u0026amp;lan=en-US\u0026amp;anchor=Amazon+Future+Engineer\u0026amp;index=5\u0026amp;md5=704ffeac8db4d857b848f0c0044dfe7b\u0022\u003EAmazon Future Engineer\u003C\/a\u003E\u0026nbsp;aims to inspire millions of kids to explore computer science; provides over 100,000 young people in over 2,000 high schools access to Intro or AP Computer Science courses; awards 100 students with four-year\u0026nbsp;$10,000\u0026nbsp;scholarships, as well as offers guaranteed and paid\u0026nbsp;Amazon\u0026nbsp;internships to gain work experience. Amazon Future Engineer is part of Amazon\u0026rsquo;s\u0026nbsp;$50 million\u0026nbsp;investment in computer science\/STEM education. 
In addition, Amazon Future Engineer has donated more than\u0026nbsp;$10 million\u0026nbsp;to organizations that promote computer science\/STEM education across the country.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAMAZON CONTACT:\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAmazon.com, Inc.\u003Cbr \/\u003E\r\nMedia Hotline\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:Amazon-pr@amazon.com\u0022\u003EAmazon-pr@amazon.com\u003C\/a\u003E\u003Cbr \/\u003E\r\n\u003Ca href=\u0022https:\/\/cts.businesswire.com\/ct\/CT?id=smartlink\u0026amp;url=http%3A%2F%2Fwww.amazon.com%2Fpr\u0026amp;esheet=52132432\u0026amp;newsitemid=20191120005225\u0026amp;lan=en-US\u0026amp;anchor=www.amazon.com%2Fpr\u0026amp;index=8\u0026amp;md5=eadd66973ffd13c490f7098aa8727897\u0022\u003Ewww.amazon.com\/pr\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Amazon and Georgia Tech have\u00a0announced a\u00a0music remix competition using coding."}],"uid":"27592","created_gmt":"2019-12-06 14:51:08","changed_gmt":"2019-12-09 21:23:57","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-11-20T00:00:00-05:00","iso_date":"2019-11-20T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"629749":{"id":"629749","type":"image","title":"Ciara Remix Competition","body":null,"created":"1575644056","gmt_created":"2019-12-06 14:54:16","changed":"1575644056","gmt_changed":"2019-12-06 14:54:16","alt":"","file":{"fid":"239811","name":"Ciara Remix 
Competition.png","image_path":"\/sites\/default\/files\/images\/Ciara%20Remix%20Competition.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Ciara%20Remix%20Competition.png","mime":"image\/png","size":642258,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Ciara%20Remix%20Competition.png?itok=RmOFtLS1"}}},"media_ids":["629749"],"related_links":[{"url":"https:\/\/tabsoft.co\/2PipJDr","title":"Data Viz: Explore and listen to Ciara\u0027s Billboard hits"}],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Cem\u003EGVU Center and College of Computing\u003C\/em\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"628671":{"#nid":"628671","#data":{"type":"news","title":"FairVis is Helping Data Scientists Discover Societal Biases in their Machine Learning Models ","body":[{"value":"\u003Cp\u003EResearchers at Georgia Tech, Carnegie Mellon University, and University of Washington have developed a data visualization system that can help data scientists discover bias in machine learning algorithms.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1904.05419.pdf\u0022\u003EFairVis\u003C\/a\u003E, presented at\u0026nbsp;\u003Ca href=\u0022http:\/\/ieeevis.org\/year\/2019\/welcome\u0022\u003EIEEE Vis 2019\u003C\/a\u003E\u0026nbsp;in Vancouver, is the first system to integrate a novel technique that allows users to audit the fairness of machine 
learning models by identifying and comparing different populations in their data sets.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to School of Computational Science and Engineering (CSE) Professor and co-investigator\u0026nbsp;\u003Ca href=\u0022https:\/\/poloclub.github.io\/polochau\/\u0022\u003E\u003Cstrong\u003EPolo Chau\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E,\u0026nbsp;\u003C\/strong\u003Ethis feat has never been accomplished by any platform before, and is a major contribution of FairVis to the data science and machine learning communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Computers are never going to be perfect. So, the question is how to help people prioritize where to look in their data, and then, in a scalable way, enable them to compare these areas to other similar or dissimilar groups in the data. By enabling comparison of groups in a data set,\u0026nbsp;FairVis allows data to become very scannable,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo accomplish this, FairVis uses two novel techniques to find subgroups that are statistically similar.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe first technique groups similar items together in the training data set, calculates various performance metrics like accuracy, and then shows users which groups of people the algorithm may be biased against. 
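The first technique amounts to grouping predictions by intersectional subgroup and computing a performance metric per group. A toy sketch (illustrative only, not FairVis's actual code; the data and names are hypothetical):

```python
# Each row of a hypothetical validation set: (sex, race, prediction_correct).
rows = [
    ("F", "A", 1), ("F", "A", 0), ("F", "B", 0), ("F", "B", 0),
    ("M", "A", 1), ("M", "A", 1), ("M", "B", 1), ("M", "B", 0),
]

# Group rows by intersectional subgroup and compute accuracy per group.
def subgroup_accuracy(rows):
    acc = {}
    for key in sorted({(s, r) for s, r, _ in rows}):
        hits = [c for s, r, c in rows if (s, r) == key]
        acc[key] = sum(hits) / len(hits)
    return acc

acc = subgroup_accuracy(rows)

# Surface the subgroups the model may be biased against (lowest accuracy first).
flagged = [g for g, a in sorted(acc.items(), key=lambda kv: kv[1]) if a < 0.5]
print(acc, flagged)
```

Here the (F, B) subgroup is flagged with 0.0 accuracy even though overall accuracy looks reasonable, which is exactly the kind of intersectional gap per-group metrics expose.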
The second technique uses statistical divergence to measure the distance between subgroups to allow users to compare similar groups and find larger patterns of bias.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese outputs are then viewed and analyzed through FairVis\u0026rsquo; visual analytics system, which is designed to specifically discover and show intersectional bias.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIntersectional bias, or bias that is found when looking at populations defined by multiple features, is a mounting challenge for scientists to tackle in an increasingly diverse world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;While a machine learning algorithm may work very well in general, there may be certain groups for which it fails. For example, various face detection algorithms were found to be 30 percent less accurate for darker skinned women than for lighter skinned men. When you look at more specific groups of sex, race, nationality, and more, there can be hundreds or thousands of groups to audit,\u0026rdquo; said\u0026nbsp;Carnegie Mellon University\u0026nbsp;Ph.D. student\u0026nbsp;\u003Ca href=\u0022https:\/\/cabreraalex.com\/\u0022\u003E\u003Cstrong\u003EAlex Cabrera\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECabrera is the primary investigator of FairVis and has been pursuing this problem since he was an undergraduate student at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;During the summer of my junior year I had been researching various topics in machine learning, and discovered some recent work showing how machine learning models can encode and worsen societal biases. 
I quickly realized that not only was this a significant issue, with examples of biased algorithms in everything from hiring systems to self-driving cars, but that my own work during my internship had the possibility to be biased against lower socioeconomic groups.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis is when Cabrera reached out to Chau, who then recruited the help of CSE alumnus\u0026nbsp;\u003Ca href=\u0022https:\/\/minsuk.com\/\u0022\u003E\u003Cstrong\u003EMinsuk Kahng\u003C\/strong\u003E\u003C\/a\u003E, CSE Ph.D. student\u0026nbsp;\u003Ca href=\u0022https:\/\/fredhohman.com\/\u0022\u003E\u003Cstrong\u003EFred Hohman\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E,\u0026nbsp;\u003C\/strong\u003ECollege of Computing undergraduate student\u0026nbsp;\u003Ca href=\u0022http:\/\/www.willepperson.com\/\u0022\u003E\u003Cstrong\u003EWill Epperson\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E,\u0026nbsp;\u003C\/strong\u003Eand University of Washington Assistant Professor\u0026nbsp;\u003Ca href=\u0022http:\/\/jamiemorgenstern.com\/\u0022\u003E\u003Cstrong\u003EJamie Morgenstern\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E.\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMorgenstern is the lead researcher for a number of projects related to fairness in machine learning, including the study Cabrera mentioned about self-driving cars. 
This particular study shows the potentially\u0026nbsp;\u003Ca href=\u0022https:\/\/www.scs.gatech.edu\/news\/620309\/research-reveals-possibly-fatal-consequences-algorithmic-bias\u0022\u003Efatal consequences of algorithmic bias\u003C\/a\u003E, which highlights the severity of software created without fairness embedded into its core.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFairVis is one of the first systems to take a meaningful step toward understanding and addressing fairness in machine learning, helping to prevent similar headlines from becoming reality.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHowever, Cabrera stressed that the solution does not simply end with better data practices.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Fairness is an extremely difficult problem, a so-called \u0026lsquo;wicked problem\u0026rsquo;, that will not be solved by technology alone,\u0026rdquo; he said.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Social scientists, policy makers, and engineers need to work together to make inroads and ensure that our algorithms are equitable for all people. 
We hope FairVis is a step in this direction and helps people start the conversation about how to tackle and address these issues.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Researchers present FairVis -  a visual analytics system that enables discovery of user subgroups to discover bias in machine learning models."}],"uid":"34540","created_gmt":"2019-11-06 18:04:14","changed_gmt":"2019-12-06 14:44:50","author":"Kristen Perez","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-11-06T00:00:00-05:00","iso_date":"2019-11-06T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"628667":{"id":"628667","type":"image","title":"FairVis","body":null,"created":"1573063180","gmt_created":"2019-11-06 17:59:40","changed":"1573063180","gmt_changed":"2019-11-06 17:59:40","alt":"A screenshot of a\u00a0visual analytics system that enables discovery of user subgroups to discover bias in machine learning models","file":{"fid":"239426","name":"FairVis.jpg","image_path":"\/sites\/default\/files\/images\/FairVis.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/FairVis.jpg","mime":"image\/jpeg","size":29572,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/FairVis.jpg?itok=wsbnkU4a"}}},"media_ids":["628667"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[{"id":"4305","name":"cse"},{"id":"83261","name":"Polo Chau"},{"id":"181315","name":"cse-dse"},{"id":"181220","name":"cse-ml"},{"id":"182995","name":"FairVis"},{"id":"1496","name":"Ethics"},{"id":"9167","name":"machine learning"},{"id":"307","name":"fairness"},{"id":"182996","name":"Alex 
Cabrera"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:kristen.perez@cc.gatech.edu\u0022\u003EKristen Perez\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["kristen.perez@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"629259":{"#nid":"629259","#data":{"type":"news","title":"Georgia Tech Researchers Explore New Ways to Give Navigation Directions to Robots","body":[{"value":"\u003Cp\u003ERobots can navigate buildings, but how do they know where to go? While some robots can follow pre-programmed routes, or be controlled by setting waypoints on a map, these methods are inflexible and can be unnatural to use. Researchers at Georgia Tech believe the best way to give robots navigation instructions is by talking to them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Giving natural language instructions to a robot is a fundamental research problem on the critical path to developing more flexible domestic robots that can work with people,\u0026rdquo; said \u003Cstrong\u003EPeter Anderson\u003C\/strong\u003E, a research scientist at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn a \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1907.02022.pdf\u0022\u003Erecent paper\u003C\/a\u003E, Georgia Tech has introduced a new way for robots to reason about navigation instructions in an unknown environment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team created a semantic map representation that updates each time the robot moves or sees something new. To reason about navigation instructions using this map, the lab found a way to leverage an algorithm used in classical robotics and apply it to artificial intelligence. 
The algorithm, called Bayesian state estimation, usually tracks the location of a robot from sensor measurements like lidar and wheel odometry. By manipulating the algorithm, Georgia Tech says the robots can use it to model language instruction inputs instead.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper got its title \u0026quot;Chasing Ghosts: Instruction Following as Bayesian State Tracking\u0026quot; because rather than tracking a robot from sensor measurements, the team is tracking the likely trajectory taken by an ideal agent or human demonstrator in response to the instructions. In this approach, the sensor measurements are the instructions themselves. This algorithm allows the agent to \u0026ldquo;reason\u0026rdquo; about all the different trajectories it could take and the probability of each trajectory when completing a task. By using an explicit map, researchers are easily able to inspect the model to see where the agent thinks the goal is and where it is likely to move next.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurrently, the robots move in simulated reconstructions of buildings, and communication is through written text, though some applications and off-the-shelf speech-to-text systems could work in conjunction with the existing system, according to researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Spoken language would definitely be more natural in many situations, so we might in the future investigate models that go directly from speech to robot actions,\u0026rdquo; said Anderson.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnderson particularly likes to think about this work with regard to telepresence robots, though it could be applied to any robot.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Telepresence robots are a great idea, but they are not as popular as they could be. 
Maybe we need smarter, more natural robots that just go where you tell them to go and look at what you ask them to look at,\u0026rdquo; said Anderson.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThink about all of the time that is lost commuting to work and walking to meetings. Imagine how climate change might be positively impacted if people needed to travel less for business. Anderson hopes that this work will allow people to focus more on their meetings, conversations with people, and, perhaps, help with climate change, rather than micromanaging a robot or jetting off around the world.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work will be presented in December at the \u003Ca href=\u0022https:\/\/neurips.cc\/\u0022\u003EThirty-third Conference on Neural Information Processing Systems (NeurIPS)\u003C\/a\u003E 2019 in Vancouver, British Columbia.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The latest work from Georgia Tech researchers finds a way to give better directions to robots."}],"uid":"34773","created_gmt":"2019-11-22 16:00:25","changed_gmt":"2019-12-06 14:41:00","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-11-22T00:00:00-05:00","iso_date":"2019-11-22T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"629258":{"id":"629258","type":"image","title":"Anderson and his co-authors will present this work at at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS) 2019 in Vancouver, British Columbia.","body":null,"created":"1574438198","gmt_created":"2019-11-22 15:56:38","changed":"1574438198","gmt_changed":"2019-11-22 15:56:38","alt":"Map of robot moving through building","file":{"fid":"239649","name":"Screen Shot 2019-11-08 at 10.53.41 
AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-11-08%20at%2010.53.41%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-11-08%20at%2010.53.41%20AM.png","mime":"image\/png","size":2435837,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-11-08%20at%2010.53.41%20AM.png?itok=X1EqTTVJ"}}},"media_ids":["629258"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"629306":{"#nid":"629306","#data":{"type":"news","title":"ML@GT Displays Diverse Research Interests at NeurIPS","body":[{"value":"\u003Cp\u003EWith 30\u0026nbsp;papers to present, the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E will make a strong showing at this year\u0026rsquo;s Neural Information Processing Systems (NeurIPS) conference, Dec. 8-14 in Vancouver, British Columbia.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference fosters the exchange of research on the theoretical, technological, biological, and mathematical aspects of neural information processing systems. 
ML@GT research spans all of the categories, including work on \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1908.07896.pdf\u0022\u003Eneural data\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/b.gatech.edu\/2NS3Bz9\u0022\u003Efairness in machine learning algorithms\u003C\/a\u003E, and \u003Ca href=\u0022http:\/\/bit.ly\/2NEH1Lr\u0022\u003Eteaching artificial intelligence to work in changing environments\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;NeurIPS continues to be an exciting conference to attend because of the diverse research that is being presented each year. It is one of the most sought-after and anticipated conferences every year, and it\u0026rsquo;s good to see ML@GT have a good variety of papers being accepted,\u0026rdquo; said \u003Cstrong\u003ETuo Zhao\u003C\/strong\u003E, an assistant professor in the \u003Ca href=\u0022https:\/\/www.isye.gatech.edu\/\u0022\u003EH. Milton Stewart School of Industrial and Systems Engineering (ISyE)\u003C\/a\u003E. Zhao has three accepted papers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENeurIPS also continues to be a hotspot for major technology companies like Google, Microsoft, and Facebook to recruit new talent.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo see a full list and recaps of ML@GT\u0026rsquo;s accepted papers, \u003Ca href=\u0022http:\/\/bit.ly\/2WTlnGo\u0022\u003Eclick here\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech will present 30 papers at one of the hottest conferences in artificial intelligence."}],"uid":"34773","created_gmt":"2019-11-25 13:50:19","changed_gmt":"2019-11-25 13:50:19","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-11-25T00:00:00-05:00","iso_date":"2019-11-25T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"628944":{"id":"628944","type":"image","title":"Georgia Tech will 
present 30 papers at the Thirty-third Conference on Neural Information Processing Systems","body":null,"created":"1573672076","gmt_created":"2019-11-13 19:07:56","changed":"1573672217","gmt_changed":"2019-11-13 19:10:17","alt":"NeurIPS 2019","file":{"fid":"239533","name":"NeurIPS 2019_Twitter.png","image_path":"\/sites\/default\/files\/images\/NeurIPS%202019_Twitter_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/NeurIPS%202019_Twitter_0.png","mime":"image\/png","size":764596,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/NeurIPS%202019_Twitter_0.png?itok=fHpwKoXh"}}},"media_ids":["628944"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"628444":{"#nid":"628444","#data":{"type":"news","title":"Keep Forgetting Your Password? Try This Novel Virtual Authentication Technique","body":[{"value":"\u003Ch3\u003E\u003Cem\u003EFirst-person Virtual Maze Offers More Memorable, Harder-to Break Passwords for Infrequent Authentication\u003C\/em\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EWe\u0026rsquo;ve all been there. 
For the first time in months, you\u0026rsquo;ve been logged out of your social media account and need to log back in. The problem is it\u0026rsquo;s been so long since your last login that you don\u0026rsquo;t remember your password. You try every combination of baby and pet name, sister\u0026rsquo;s birthday, childhood street address \u0026ndash; nothing works, and now you\u0026rsquo;re locked out.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf only there were a better way to remember these passwords after extended periods of disuse.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELuckily, researchers at \u003Ca href=\u0022http:\/\/gatech.edu\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech\u003C\/a\u003E have come up with a novel solution to this longstanding problem, applying an old memory technique to new technology to offer users a more effective authentication method. Known as \u0026lsquo;the Memory Palace,\u0026rsquo; the new tool is a three-dimensional virtual labyrinth navigated in the first-person perspective.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn cases of infrequent authentication, the Memory Palace works in place of an account\u0026rsquo;s password. Users create their own personal path with multiple left or right turns through a maze that must then be recreated to log in to their account. If the user makes it through the maze, similar to the one found in the old Windows three-dimensional labyrinth screensaver, they gain access.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudies evaluating the technique showed that visual-spatial secrets were most memorable if navigated in the three-dimensional first-person perspective. 
They also showed that, in comparison to Android\u0026rsquo;s 9-dot pattern lock, the Memory Palace was significantly more memorable after one week, was harder to break through shoulder surfing (capturing passwords by looking over someone\u0026rsquo;s shoulders), and was not significantly slower to enter.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=I02XDR7Mg0\u0022\u003EVIDEO: Explore \u0026#39;The Memory Palace\u0026#39;\u003C\/a\u003E\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Humans have evolved with remarkably persistent and fast-imprinting spatial memories, owing in no small part to our nomadic history,\u0026rdquo; said \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Assistant Professor \u003Cstrong\u003ESauvik Das\u003C\/strong\u003E, the lead researcher on the project. \u0026ldquo;Many people can, for example, clearly visualize and mentally walk through their childhood homes, even if they haven\u0026rsquo;t stepped foot in it for decades. They may only need to be shown once or twice how to drive to a new part of a familiar city.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our key insight was simple: Why not co-opt this incredibly strong spatial memory system for infrequent authentication?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis visual-spatial authentication is based upon an old memory technique of the same name, also called the \u0026ldquo;method of loci.\u0026rdquo; That approach pairs visualizations with spatial memory \u0026ndash; familiar information about one\u0026rsquo;s environment \u0026ndash; to quickly and efficiently recall information. World Memory champions have applied this technique in competition for years, associating vivid images along a specific path with digits, letters, or playing cards they are required to memorize. 
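Mechanically, the log-in check described above reduces to comparing a recreated sequence of left/right turns against the enrolled one. A hypothetical sketch (the turn encoding and function names are illustrative, not the paper's implementation), using a constant-time comparison so the check itself leaks no timing information:

```python
import hmac

def encode(turns):
    # serialize a maze path, e.g. ["L", "R"] -> b"LR"
    return "".join(turns).encode()

# the secret enrolled when the user first walks their path (hypothetical path)
enrolled = encode(["L", "L", "R", "L", "R", "R"])

def authenticate(attempt):
    # compare_digest runs in constant time, resisting timing attacks
    return hmac.compare_digest(encode(attempt), enrolled)

print(authenticate(["L", "L", "R", "L", "R", "R"]))  # correct path
print(authenticate(["L", "R", "R", "L", "R", "R"]))  # one wrong turn
```

In practice a deployment would store only a salted hash of the encoded path rather than the path itself, just as with text passwords.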
In fact, the technique dates all the way back to ancient Greeks and Romans.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen developing their program, researchers focused on a few keys to their method. In addition to security against common attacks like random guessing or shoulder surfing, they needed the authentication secret to be memorable without much practice or reinforcement and they needed it to be deployable to the public.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Users are unlikely to accept a solution that requires significant upfront training or effort,\u0026rdquo; said Das, an expert in a field dubbed social cybersecurity that examines social norms that impact the adoption or rejection of security techniques. \u0026ldquo;Also, the solution should be cost-effective and not require specialized hardware. Many authentication solutions have been proposed, but most fail to be widely adopted for these reasons.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExisting solutions fall short in these requirements. Biometrics, like a thumb print or facial recognition, require specialized hardware that can be expensive for infrequent use cases. PINs and graphical passwords have problems in long-term memorability without frequent reinforcement, or are otherwise vulnerable to shoulder surfing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The Memory Palace addresses each of these concerns with a proven memory technique that can hold up over time but is not easily stolen,\u0026rdquo; Das said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDas provided a handful of potential instances of infrequent authentication. Perhaps a session persists for a long period of time, like social media accounts, or a user must log in on a different device than normal, like a Netflix account on a web browser versus a smart TV. 
Other situations include occasionally accessed resources, like a conference room secured with a smart lock, or as a fallback authentication method where a secondary secret is needed to recover access to an account where the primary secret has been compromised.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo deploy to the public, an app could implement the Memory Palace as a means of authenticating users. Alternatively, an operating system like Android could implement it as a means of authenticating into a device and automatically handle authenticating into any existing apps on the device.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work was presented in a paper titled \u003Cem\u003E\u003Ca href=\u0022https:\/\/sauvikdas.com\/uploads\/paper\/pdf\/22\/file.pdf\u0022 target=\u0022_blank\u0022\u003EThe Memory Palace: Exploring Visual-Spatial Paths for Strong, Memorable, Infrequent Authentication\u003C\/a\u003E\u003C\/em\u003E (Sauvik Das, David Lu, Taehoon Lee, Joanne Lo, Jason I. Hong), at the \u003Ca href=\u0022https:\/\/uist.acm.org\/uist2019\/\u0022 target=\u0022_blank\u0022\u003EACM Symposium on User Interface Software and Technology\u003C\/a\u003E (UIST 2019), which was held\u0026nbsp;Oct. 
20-23 in New Orleans.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"This first-person virtual maze offers more memorable, harder-to-break passwords for infrequent authentication."}],"uid":"33939","created_gmt":"2019-10-31 18:40:16","changed_gmt":"2019-10-31 18:40:16","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-31T00:00:00-04:00","iso_date":"2019-10-31T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"628443":{"id":"628443","type":"image","title":"The Memory Palace","body":null,"created":"1572547175","gmt_created":"2019-10-31 18:39:35","changed":"1572547175","gmt_changed":"2019-10-31 18:39:35","alt":"The Memory Palace - A person navigates a virtual maze on a smartphone","file":{"fid":"239338","name":"Screen Shot 2019-10-31 at 2.38.07 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-10-31%20at%202.38.07%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-10-31%20at%202.38.07%20PM.png","mime":"image\/png","size":434428,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-10-31%20at%202.38.07%20PM.png?itok=utXQSFpn"}}},"media_ids":["628443"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182941","name":"cc-research; ic-cybersecurity; ic-hcc"}],"core_research_areas":[{"id":"145171","name":"Cybersecurity"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid 
Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"628437":{"#nid":"628437","#data":{"type":"news","title":"Opportunities for Impact: Startup Zyrobotics Helped Ayanna Howard Reach More People","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E was not thinking about starting a business. Working as a professor in \u003Ca href=\u0022http:\/\/gatech.edu\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech\u003C\/a\u003E\u0026rsquo;s \u003Ca href=\u0022http:\/\/ece.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003ESchool of Electrical and Computer Engineering\u003C\/a\u003E (ECE) in 2013, her focus was on her research into assistive robotics and therapy gaming applications for children, not launching a startup outside of her lab.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDoing research in an environment like Georgia Tech\u0026rsquo;s, however, where entrepreneurship and risk-taking is not only encouraged but required, has a way of making even the clearest of plans veer off in varying unforeseen directions. Thus, out of her lab came \u003Ca href=\u0022http:\/\/zyrobotics.com\/\u0022 target=\u0022_blank\u0022\u003EZyrobotics\u003C\/a\u003E, a technology company that develops educational technologies for children with differing abilities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor the past six years, Zyrobotics has developed personalized technologies that stimulate social, cognitive, and motor skill development using fun and educational applications. Now, there are five products, three hardware and two software. The software comprises about 15 different programs in math, robot, and coding education. 
There have been over 600,000 downloads and about 80 distributors using or distributing the products in clinics and school systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As researchers, we\u0026rsquo;re not only concerned with development,\u0026rdquo; said Howard, now the Chair of Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC). \u0026ldquo;We want to know the impact. What Zyrobotics has done is allowed the research we were doing in the lab to touch so many more people than we otherwise would have done.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003EA proof of concept\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EIt started as the work of one of her graduate students in ECE. \u003Cstrong\u003EHae Won Park\u003C\/strong\u003E was finishing up her Ph.D. when she came to a bit of a crossroads. Trying to decide whether to pursue a career in academia or to, perhaps, go into industry, she looked to Howard for some guidance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There was an opportunity with the \u003Ca href=\u0022https:\/\/www.nsf.gov\/\u0022 target=\u0022_blank\u0022\u003ENational Science Foundation\u003C\/a\u003E \u003Ca href=\u0022https:\/\/www.nsf.gov\/news\/special_reports\/i-corps\/\u0022 target=\u0022_blank\u0022\u003EI-Corps grant\u003C\/a\u003E where you have to write a proposal, put your ideas down, defend to a program manager, et cetera,\u0026rdquo; Howard said. \u0026ldquo;It seemed like a good program that would allow her to experience all of these aspects in a low-risk way. If it didn\u0026rsquo;t work out, oh well.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPark\u0026rsquo;s research examined methods for utilizing touchscreen interfaces for accessible human-robot interaction. 
It was a project called \u003Ca href=\u0022http:\/\/tabaccess.com\/\u0022 target=\u0022_blank\u0022\u003ETabAccess\u003C\/a\u003E, an assistive technology that provides alternative switch inputs to control smartphones and tablets for users with motor impairments.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=z3q5C2yTxU8\u0022 target=\u0022_blank\u0022\u003EVIDEO: How does TabAccess work?\u003C\/a\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EThroughout the course of customer discovery, where Park and Howard spoke with varying professionals and potential users, Howard said she realized just how big of a difference the technology could make outside of the lab.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A year later, it was enough of a concept,\u0026rdquo; Howard said. \u0026ldquo;It looked like we could design something that made sense. The company was founded, and then it went off and did its own thing.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003EA broader impact\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EIt was the impact that led Howard to push forward on the project as a startup. At the time, she had been doing robotics educational STEM camps focused on children with special needs. Students, who had primarily visual and motor impairments, were taught how to code robots.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe camps were successful, but the touch points, as Howard called them, were relatively few.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The touch points were just the kids I was able to recruit along with my clinical collaborator,\u0026rdquo; she said. \u0026ldquo;My touch point was: If I show up, I touched. If I didn\u0026rsquo;t, there was nothing going on. 
Whereas, in customer discovery, you weren\u0026rsquo;t necessarily speaking with the people you were impacting \u0026ndash; the kids \u0026ndash; but you were speaking with the teachers who interact with kids.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESuddenly, the impact in her mind shifted from the 1-to-1 relationship of STEM camps to 1-to-100, 1-to-1,000, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;My workshop on a good day had maybe 10 kids,\u0026rdquo; she said. \u0026ldquo;I did these in a good year maybe twice. So, maybe like 20 kids in a year. You can\u0026rsquo;t possibly do what we\u0026rsquo;re doing now without Zyrobotics.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u0026#39;For students, (entrepreneurship) is a no-brainer\u0026#39;\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EHoward said it\u0026rsquo;s this mindset that sets Georgia Tech apart.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Students have always thought about the impact of what they\u0026rsquo;re doing,\u0026rdquo; she said. \u0026ldquo;Socially-responsible engineering. That\u0026rsquo;s always been the core mission. Being an entrepreneur has this aspect of knowing the exact problems you want to attack, versus maybe going into industry and working on someone else\u0026rsquo;s.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s important, she said, that academics continue to have a place in the technological market. If the market is left to major tech conglomerates, we end up with groupthink and a reluctance to take necessary risks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;For students, it\u0026rsquo;s a no-brainer to engage in some entrepreneurial pursuit,\u0026rdquo; she said. \u0026ldquo;That mindset of thinking about problems and impacts allows you to view it differently.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We need to be willing to make mistakes. The probability is that your startup will fail. But students understand that and still do it. 
We need to get rid of that fear of failure, or else we\u0026rsquo;ll never make significant change.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"For the past six years, Zyrobotics has developed personalized technologies that stimulate social, cognitive, and motor skill development using fun and educational applications."}],"uid":"33939","created_gmt":"2019-10-31 18:18:45","changed_gmt":"2019-10-31 18:18:45","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-31T00:00:00-04:00","iso_date":"2019-10-31T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"628356":{"id":"628356","type":"image","title":"Ayanna Howard\u0027s Zyrobotics","body":null,"created":"1572456199","gmt_created":"2019-10-30 17:23:19","changed":"1572456199","gmt_changed":"2019-10-30 17:23:19","alt":"Ayanna Howard\u0027s Zyrobotics","file":{"fid":"239303","name":"ayanna_zyrobotics.png","image_path":"\/sites\/default\/files\/images\/ayanna_zyrobotics.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ayanna_zyrobotics.png","mime":"image\/png","size":150863,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ayanna_zyrobotics.png?itok=BOCKp8ID"}}},"media_ids":["628356"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182940","name":"cc-research; ic-ai-ml; ic-robotics; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"627626":{"#nid":"627626","#data":{"type":"news","title":"AI Agent Breaks Down Social Barriers in Online Education","body":[{"value":"\u003Ch5\u003E\u003Cstrong\u003EOn the internet, students are able to take courses on the couch (or anywhere) and set their own pace for learning. These and other factors have contributed to an explosion in web enrollments.\u003C\/strong\u003E\u003C\/h5\u003E\r\n\r\n\u003Cp\u003EBut online convenience can come at a cost to building social connections. The lack of human interaction and support can be a direct cause for high dropout rates in web courses.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo directly address social barriers in virtual classes, an artificially intelligent system from Georgia Tech has been designed to connect online students quickly to their peers. 
It is being deployed in the institute\u0026rsquo;s Online Master of Science in Computer Science program (OMSCS) as well as two campus classes in fall semester 2019.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing the \u003Ca href=\u0022http:\/\/emprize.gatech.edu\/\u0022\u003E\u003Cstrong\u003EJill Watson AI framework\u003C\/strong\u003E\u003C\/a\u003E, the social agent is ultimately intended to help students from different walks of life adapt more quickly to rigorous course requirements and to foster a community where students can build their own support structures.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In previous semesters, we had what we called an introduction agent that responded to student introductions and greeted students. Now we have a more fully realized social AI agent that can help students connect virtually and in real life,\u0026rdquo; said \u003Cstrong\u003EIda Camacho\u003C\/strong\u003E, the lead engineer for the redesigned AI.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEncouraging social interactions among students using the Jill social agent required Camacho to rethink the AI\u0026rsquo;s construct. Questions of privacy came up early on, and researchers found through testing that if the social agent was too personal, students might get distracted playing with it.\u003C\/p\u003E\r\n\r\n\u003Cblockquote\u003E\r\n\u003Ch5\u003E\u003Cem\u003E\u003Cstrong\u003EWe wanted students to feel like they are part of the community without giving up their anonymity.\u003C\/strong\u003E - Ida Camacho, Lead AI Designer\u003C\/em\u003E\u003C\/h5\u003E\r\n\u003C\/blockquote\u003E\r\n\r\n\u003Cp\u003EUsing student introductions in the online forum, researchers prompted students to share personal details in order to help build a model for the agent. 
Using this unstructured data presented its own challenges: when the system encountered a word such as Paris, it had to parse out whether it was a location or a reference to a certain blonde celebrity.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of Camacho\u0026rsquo;s insightful designs centered on creating summaries of student information that are viewable by those enrolled.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen students enter the forum and introduce themselves now, the Jill social agent can immediately share the top results, by percentage of classmates, for location, time zone, other courses being taken, and primary interests.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We wanted students to feel like they are part of the community without giving up their anonymity,\u0026rdquo; said Camacho. \u0026ldquo;Increasing student engagement and creating micro-communities are two of our primary goals.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudents can also choose to join conversations based on any area of interest (location, hobbies, etc.) using the hashtag #ConnectMe, which allows them to see and click on individual student names for those who opt in.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBased on the responses, students have already taken to the new and improved Jill.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One thing that surprised me was that students started trying to connect IRL, or in real life,\u0026rdquo; said Camacho. \u0026ldquo;They wanted to set up study groups and meet each other. This was happening all over the place, like New York City, Austin, and Tokyo.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECamacho suspected there might be a hunger for more social interactions in online courses, and she is fully committed to delivering the best student experience on this front.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I envision the social agent being used more than just at the start of classes,\u0026rdquo; she said. 
\u0026ldquo;It\u0026rsquo;s already creating some social glue, getting students to talk right away so they don\u0026rsquo;t feel like they\u0026rsquo;re in this all alone.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Jill can keep the conversation going, and I\u0026rsquo;m planning for the AI at the end of the semester to share recommendations by students on courses they\u0026rsquo;ve taken.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECamacho in one sense is the ideal person to head the Design \u0026amp; Intelligence Lab\u0026rsquo;s new Jill social agent initiative. The Fresno, Calif., resident is a recent alumna of the OMSCS program and knows how important it was to stay engaged with her peers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I feel like if I hadn\u0026rsquo;t met anyone I might not have been as successful. My community-building started when others invited me into their study groups and I became a TA.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe jokes that when she was a student, she enrolled in the program\u0026rsquo;s Knowledge-Based AI course \u0026ndash; where the Jill Watson virtual TA was deployed using a pseudonym \u0026ndash; to see if she could pick out the AI amongst the human TAs helping students in the online forums.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I guessed wrong,\u0026rdquo; she said laughing. \u0026ldquo;I ended up thinking that it was the head TA. He was online all the time answering student questions. 
How can someone be online that much?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJill Watson being indistinguishable from its human counterparts might be taken as a good sign by the lab\u0026rsquo;s researchers as they continue to build the future of AI and help students from around the world succeed in pursuing online learning.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ETo directly address social barriers in virtual classes, an artificially intelligent system from Georgia Tech has been designed to connect online students quickly to their peers. It is being deployed in the institute\u0026rsquo;s Online Master of Science in Computer Science program (OMSCS) as well as two campus classes fall semester 2019.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"To directly address social barriers in virtual classes, an artificially intelligent system from Georgia Tech has been designed to connect online students quickly to their peers. 
"}],"uid":"27592","created_gmt":"2019-10-16 13:14:25","changed_gmt":"2019-10-16 20:40:17","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-16T00:00:00-04:00","iso_date":"2019-10-16T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"627694":{"id":"627694","type":"image","title":"Ida Camacho, Lead Architect for Jill Watson AI Social Agent","body":null,"created":"1571258375","gmt_created":"2019-10-16 20:39:35","changed":"1571258851","gmt_changed":"2019-10-16 20:47:31","alt":"","file":{"fid":"239009","name":"ida_camacho_web.jpg","image_path":"\/sites\/default\/files\/images\/ida_camacho_web.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ida_camacho_web.jpg","mime":"image\/jpeg","size":190315,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ida_camacho_web.jpg?itok=7IVkc0lb"}},"627627":{"id":"627627","type":"image","title":"Jill Watson AI Social Agent","body":null,"created":"1571232643","gmt_created":"2019-10-16 13:30:43","changed":"1571232643","gmt_changed":"2019-10-16 13:30:43","alt":"","file":{"fid":"238961","name":"socialAgent 500x500.png","image_path":"\/sites\/default\/files\/images\/socialAgent%20500x500.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/socialAgent%20500x500.png","mime":"image\/png","size":307445,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/socialAgent%20500x500.png?itok=iKNRK2P7"}}},"media_ids":["627694","627627"],"related_links":[{"url":"http:\/\/emprize.gatech.edu\/","title":"What\u0027s New with Jill? 
This fall\u0027s AI hat trick at Georgia Tech"}],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Cem\u003EGVU Center and College of Computing\u003C\/em\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"627578":{"#nid":"627578","#data":{"type":"news","title":"Jill Watson Now Fielding Questions on New AI-enabled Research Tool","body":[{"value":"\u003Cp\u003EA new artificially intelligent (AI) research tool that harnesses the power of the Smithsonian Institution\u0026rsquo;s massive\u0026nbsp;\u003Ca href=\u0022https:\/\/eol.org\u0022 target=\u0022_blank\u0022\u003EEncyclopedia of Life\u003C\/a\u003E\u0026nbsp;(EOL) ecological database debuted this semester at Georgia Tech.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/vera.cc.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003Evirtual ecological research assistant, known as VERA\u003C\/a\u003E, was developed at Georgia Tech and enables students to perform virtual experiments to explain existing ecological systems or to predict possible outcomes based on variables they input into the tool.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EGetting to Know VERA\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;People using VERA have access to the EOL and can test a hypothesis using countless organisms, make as many changes to variables as they want, and study the effects on any ecosystem through real-time modeling,\u0026rdquo; 
said\u0026nbsp;\u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/sungeun-an-89730063\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ESungeun An\u003C\/strong\u003E, human-centered computing Ph.D. student\u003C\/a\u003E and lead developer of the AI system.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is a unique opportunity that doesn\u0026rsquo;t exist anywhere else.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlthough the EOL has extensive data entries for more than two million species, An says that VERA has an intuitive user interface and design that is relatively easy to use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Students don\u0026rsquo;t need extensive scientific knowledge or programming and math skills to use VERA. They can build a conceptual model with simple visual cues on the computer screen, such as dragging elements or selecting input options,\u0026rdquo; said An.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003ECombining the Strength\u0026nbsp;of Two AIs\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EHowever, to get the most out of VERA, An says there can be a learning curve.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo flatten the\u0026nbsp;curve and help students optimize their experience with VERA, An and her fellow researchers turned to\u0026nbsp;Jill Watson, the \u003Ca href=\u0022https:\/\/www.wsj.com\/articles\/if-your-teacher-sounds-like-a-robot-you-might-be-on-to-something-1462546621\u0022 target=\u0022_blank\u0022\u003Efamed AI-enabled virtual teaching assistant (TA) that premiered in 2016\u003C\/a\u003E\u0026nbsp;supporting Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/www.omscs.gatech.edu\u0022 target=\u0022_blank\u0022\u003Eonline Master of Science in Computer Science (OMSCS) program\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJill Watson\u0026nbsp;answers student questions about VERA via the collaborative messaging app, Slack. 
These range from technical questions about the tool \u0026ndash; \u0026ldquo;How do I add a new project\u0026rdquo; \u0026ndash; to subject matter questions \u0026ndash; \u0026ldquo;What is consumption rate?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Leveraging the Jill Watson virtual TA and VERA together is a powerful demonstration of how to scale technology to serve more populations and provide access to the world\u0026rsquo;s scientific knowledge,\u0026rdquo; said\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/ashok-goel\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EAshok Goel\u003C\/strong\u003E, professor of Interactive Computing\u003C\/a\u003E and director of the \u003Ca href=\u0022http:\/\/dilab.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EDesign \u0026amp; Intelligence Lab, which created both AI agents\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECombining the strength of the two AI agents, said Goel, is part of \u003Ca href=\u0022https:\/\/emprize.gatech.edu\u0022 target=\u0022_blank\u0022\u003Ean intentional approach to rethinking instructional design for online learning\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;VERA is a significant advancement for artificial intelligence in science education and meant to be used anywhere by anyone interested in science exploration, so making it as accessible as possible is key to the system\u0026rsquo;s adoption,\u0026rdquo; Goel said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudents and others using VERA \u0026ndash; it\u0026rsquo;s also publicly available\u0026nbsp;and linked on the Smithsonian\u0026rsquo;s EOL homepage \u0026shy;\u0026ndash; can learn more through\u0026nbsp;a \u003Ca href=\u0022https:\/\/www.youtube.com\/playlist?list=PLwXogtSxXaLCP4AXU_VFUP92TVmotGLMv\u0022 target=\u0022_blank\u0022\u003Evideo series produced by Georgia Tech\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe videos demonstrate VERA\u0026rsquo;s capabilities 
using kudzu growth in the southeastern United States as an example. The videos are co-hosted by\u0026nbsp;\u003Ca href=\u0022http:\/\/www.emilygweigelphd.com\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EEmily Weigel\u003C\/strong\u003E, School of Biological Sciences\u003C\/a\u003E instructor for the biology course using VERA, and \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/fac\/Spencer.Rugaber\/\u0022 target=\u0022_blank\u0022\u003ECollege of Computing faculty member \u003Cstrong\u003ESpencer Rugaber\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVERA research is funded by a grant from the National Science Foundation, #NSF-1636848.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information about Georgia Tech\u0026#39;s emPRIZE, contact\u0026nbsp;\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu?subject=Jill%20Watson%20Helping%20With%20Questions%20on%20New%20Research%20AI\u0022\u003EJoshua Preston, research communications manager\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A new AI-enabled research tool powered by the Smithsonian debuted in an undergraduate biology class at Georgia Tech this semester."}],"uid":"32045","created_gmt":"2019-10-14 19:31:05","changed_gmt":"2019-10-15 23:47:10","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-14T00:00:00-04:00","iso_date":"2019-10-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"627580":{"id":"627580","type":"image","title":"Jill Watson 2019 AI Teaching Assistant","body":null,"created":"1571083583","gmt_created":"2019-10-14 20:06:23","changed":"1571083583","gmt_changed":"2019-10-14 20:06:23","alt":"Stock image of personified female AI looking at reflection in 
mirror","file":{"fid":"238946","name":"093086626-technology-and-science-abstrac.jpeg","image_path":"\/sites\/default\/files\/images\/093086626-technology-and-science-abstrac.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/093086626-technology-and-science-abstrac.jpeg","mime":"image\/jpeg","size":638699,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/093086626-technology-and-science-abstrac.jpeg?itok=rnZrpPTP"}},"627584":{"id":"627584","type":"image","title":"Sungeun An - Ph.D. Human-Centered Computing Student","body":null,"created":"1571086076","gmt_created":"2019-10-14 20:47:56","changed":"1571086076","gmt_changed":"2019-10-14 20:47:56","alt":"Sungeun An, Georgia Tech human-centered computing PhD student","file":{"fid":"238949","name":"Sungeun An_human-centered-computingPhD.-student-2019.jpg","image_path":"\/sites\/default\/files\/images\/Sungeun%20An_human-centered-computingPhD.-student-2019.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Sungeun%20An_human-centered-computingPhD.-student-2019.jpg","mime":"image\/jpeg","size":51969,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Sungeun%20An_human-centered-computingPhD.-student-2019.jpg?itok=5zUyAGmu"}}},"media_ids":["627580","627584"],"related_links":[{"url":"https:\/\/emprize.gatech.edu","title":"Georgia Tech\u2019s emPRIZE: AI-Powered Learning. Anytime. 
Anywhere."}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"66442","name":"MS HCI"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[{"id":"2556","name":"artificial intelligence"},{"id":"9167","name":"machine learning"},{"id":"182669","name":"VERA"},{"id":"169183","name":"Jill Watson"},{"id":"182670","name":"goel"},{"id":"168873","name":"Smithsonian"},{"id":"182671","name":"encyclopedia of life"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Communications Manager\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=Jill%20Watson%20Answering%20Questions%20on%20Research%20AI%20Tool\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJoshua Preston, Research Communications Manager\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto: jpreston@cc.gatech.edu\u0022\u003Ejpreston@cc.gatech.edu\u003C\/a\u003E\u003Cbr \/\u003E\r\n\u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"627425":{"#nid":"627425","#data":{"type":"news","title":"Premier Computer Vision Conference Accepts 10 Georgia Tech Papers","body":[{"value":"\u003Cp\u003EFrom helping chair umpires make better line calls in professional tennis to teaching robots to \u0026ldquo;see\u0026rdquo;, the field of computer vision continues to expand and become a part of people\u0026rsquo;s everyday lives. 
A subfield of artificial intelligence, computer vision teaches computers to understand and interpret the visual world through photos or videos.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/iccv2019.thecvf.com\/\u0022\u003EInternational Conference on Computer Vision (ICCV)\u003C\/a\u003E takes place from Oct. 27 to Nov. 2 and brings together researchers from Georgia Tech and around the world to discuss recent breakthroughs and research in the field. Researchers in the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E have ten accepted papers at the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing (IC)\u003C\/a\u003E and ML@GT associate professor \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E leads with seven research papers. Her work spans from \u003Ca href=\u0022https:\/\/www.voguebusiness.com\/technology\/facebook-ai-fashion-styling\u0022\u003Eusing artificial intelligence (AI) to help people make more stylish outfit choices\u003C\/a\u003E to \u003Ca href=\u0022http:\/\/bit.ly\/2ndC6qv\u0022\u003Eembodied visual recognition\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIC assistant professor \u003Cstrong\u003EJudy Hoffman\u003C\/strong\u003E and professor \u003Cstrong\u003EJames Rehg\u003C\/strong\u003E are 2019 area chairs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As the computer vision field continues to expand and create novel ideas, conferences like ICCV become increasingly important. There was a lot of impressive work submitted to the conference this year. 
With computer vision being one of ML@GT\u0026rsquo;s strongest areas, I\u0026rsquo;m thrilled to see the center\u0026rsquo;s presence in this premier conference,\u0026rdquo; said Hoffman.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther work from Georgia Tech includes papers on \u003Ca href=\u0022https:\/\/mlatgt.blog\/2019\/09\/10\/overcoming-large-scale-annotation-requirements-for-understanding-videos-in-the-wild\/\u0022\u003Elessening the need for additional annotation in videos\u003C\/a\u003E, making vision and language models more grounded, and \u003Ca href=\u0022http:\/\/bit.ly\/2ndC6qv\u0022\u003Eagents learning to move to better perceive objects.\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Having a paper accepted, especially as an oral presentation, especially in a top conference gives me lots of confidence and encouragement for my Ph.D. research. I can\u0026#39;t wait to attend ICCV to share my work, talk with other talented people, and learn other interesting topics in both academic and industrial areas,\u0026quot; said \u003Cstrong\u003EMin-Hung Chen\u003C\/strong\u003E, a sixth-year electrical and computer engineering Ph.D. 
student.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOrganized by IEEE, ICCV is one of the premier international computer vision conferences and will take place at the COEX Convention Center in Seoul, South Korea.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information on ML@GT\u0026rsquo;s involvement with the conference, visit \u003Ca href=\u0022http:\/\/bit.ly\/339BYaS\u0022\u003Ehttp:\/\/bit.ly\/339BYaS\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Machine Learning Center will make a splash at the International Conference on Computer Vision later this month."}],"uid":"34773","created_gmt":"2019-10-09 19:54:48","changed_gmt":"2019-10-10 12:11:43","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-10T00:00:00-04:00","iso_date":"2019-10-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"627424":{"id":"627424","type":"image","title":"Seoul, South Korea","body":null,"created":"1570650742","gmt_created":"2019-10-09 19:52:22","changed":"1570650742","gmt_changed":"2019-10-09 19:52:22","alt":"","file":{"fid":"238886","name":"sunyu-kim-HjsWTyyVDgg-unsplash.jpg","image_path":"\/sites\/default\/files\/images\/sunyu-kim-HjsWTyyVDgg-unsplash.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/sunyu-kim-HjsWTyyVDgg-unsplash.jpg","mime":"image\/jpeg","size":317658,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/sunyu-kim-HjsWTyyVDgg-unsplash.jpg?itok=00vn_fSV"}}},"media_ids":["627424"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["allie.mcfadden@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"625193":{"#nid":"625193","#data":{"type":"news","title":"Firefighters Have Mixed Response to Wearable Tech for Emergency Work","body":[{"value":"\u003Cp\u003EComputing technology has shaped modern offices and retooled how many businesses operate. Now, as technology gets smaller, cheaper, and more connected, jobs that aren\u0026rsquo;t bound to a desk are seeing similar changes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA new study from Georgia Tech shows how advanced computing tech worn by firefighters impacts the nature of work for emergency responders, and how front-line firefighters and their commanders view its usefulness.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers sought to establish the effects of a wearable device used by firefighters on the job as well as how companies might better design devices for more physical types of labor.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;For emergency responders, work is carried out with the pressure of someone\u0026rsquo;s life or property on the line based on how well the job is done,\u0026rdquo; said \u003Cstrong\u003EAlyssa Rumsey\u003C\/strong\u003E, PhD student in Digital Media, who conducted the study. 
\u0026ldquo;Firefighting is unlike office work, which is typically examined in human-computer interaction research; simply put, the stakes are higher.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe findings showed that the wearable device went through many iterations \u0026ndash; originally it was a heads-up display (HUD) that projected situational information onto a firefighter\u0026rsquo;s mask. This proved too distracting, so only the firefighter\u0026rsquo;s biometric information was displayed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUltimately the tool became a wrist-worn device that gave the biometric information to the on-scene commander, rather than the front-line firefighters, who could see the vital signs of the emergency responders in real time through a web interface.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Firefighters reportedly didn\u0026rsquo;t have time to react to the information during the fire suppression because their attention was focused solely on the task at hand,\u0026rdquo; Rumsey said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe final device could be used to remotely measure the physical conditions of firefighters in the field. The device was not tested in any live fires but was used extensively on \u0026ldquo;job duty courses\u0026rdquo; at two Georgia fire departments where routine training exercises by firefighters in full gear took place. Exercises included a series of obstacles such as ladder runs, hose drags, tire pulls, and crawling.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe device allowed managers to identify personnel who were pushing too hard and those who weren\u0026rsquo;t pushing hard enough based on heart-rate spikes and overall activity during exercises. 
Supervisors could then call out those individuals and issue them commands either in person or via radio communications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith overexertion and stress accounting for more than 50 percent of all firefighter deaths, the real-time biometric data potentially allows commanders to assess who on their teams is in trouble and be able to pull them from a scene.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERumsey found that this clashed with some firefighters\u0026rsquo; sense of identity, one that valued getting the job done and putting the safety of others above their own. Some participants rejected the idea of using the wearable tech in a real emergency fire.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs one participant put it: \u0026ldquo;It would be a way for the chief to see someone\u0026rsquo;s heart rate, and be like, \u0026lsquo;Yeah I know this person, he\u0026#39;s gung ho, he\u0026#39;s not gonna quit. It\u0026#39;s time to pull him out.\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFirefighters discussed how overreliance on technology sometimes makes even seasoned pros forget the basics. One participant vividly described an incident where firefighters relied on a thermal imaging camera to view temperatures in a room and assess its safety:\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;He scans the room, sees the floors there. 
Takes off walking through the middle of the living room floor and never went back to the basics of \u0026lsquo;sounding the floor.\u0026rsquo; Him and his other two guys with him, fell through the floor and burned to death.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile the wearable device was viewed favorably in training as a way to improve physical fitness, reduce obesity, and generate comradery, how it might be implemented beyond that was not clear.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;ve seen that introducing this technology impacts identity, power dynamics, and organizational structures within fire departments,\u0026rdquo; Rumsey said. \u0026ldquo;Smart technology in safety-critical settings, such as fire scenarios, can exacerbate risks rather than lessen them. Understanding an organization is key to implementing technology in some of these more physically demanding jobs.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERumsey and co-investigator \u003Cstrong\u003EChris LeDantec\u003C\/strong\u003E, assistant professor of digital media, published their findings in the paper \u003Cem\u003E\u003Ca href=\u0022https:\/\/ledantec.net\/wp-content\/uploads\/2019\/05\/disfp1096-rumsey.pdf\u0022\u003EClearing the Smoke: The Changing Identities and Work in Firefighting\u003C\/a\u003E\u003C\/em\u003E\u0026nbsp;in\u0026nbsp;the Proceedings of the Association for Computing Machinery\u0026rsquo;s 2019 Conference on\u0026nbsp;Designing Interactive Systems.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA new study from Georgia Tech shows how advanced computing tech worn by firefighters impacts the nature of work for emergency responders, and how front-line firefighters and their commanders view its usefulness.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A new study from Georgia Tech shows how advanced computing tech worn by firefighters 
impacts the nature of work for emergency responders, and how front-line firefighters and their commanders view its usefulness."}],"uid":"27592","created_gmt":"2019-08-27 13:31:39","changed_gmt":"2019-10-08 21:12:38","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-27T00:00:00-04:00","iso_date":"2019-08-27T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"625196":{"id":"625196","type":"image","title":"Firefighter on emergency scene","body":null,"created":"1566913056","gmt_created":"2019-08-27 13:37:36","changed":"1566913056","gmt_changed":"2019-08-27 13:37:36","alt":"","file":{"fid":"238021","name":"firefight pic_web.png","image_path":"\/sites\/default\/files\/images\/firefight%20pic_web.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/firefight%20pic_web.png","mime":"image\/png","size":339194,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/firefight%20pic_web.png?itok=7BcHbrR-"}},"625197":{"id":"625197","type":"image","title":"Alyssa Rumsey","body":null,"created":"1566913095","gmt_created":"2019-08-27 13:38:15","changed":"1566913095","gmt_changed":"2019-08-27 13:38:15","alt":"","file":{"fid":"238022","name":"Rumsey, Alyssa_web.png","image_path":"\/sites\/default\/files\/images\/Rumsey%2C%20Alyssa_web.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Rumsey%2C%20Alyssa_web.png","mime":"image\/png","size":207252,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Rumsey%2C%20Alyssa_web.png?itok=ZSaHJTF3"}},"625198":{"id":"625198","type":"image","title":"Chris Le Dantec","body":null,"created":"1566913139","gmt_created":"2019-08-27 13:38:59","changed":"1566913139","gmt_changed":"2019-08-27 13:38:59","alt":"","file":{"fid":"238023","name":"Le Dantec, 
Chris_web.png","image_path":"\/sites\/default\/files\/images\/Le%20Dantec%2C%20Chris_web.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Le%20Dantec%2C%20Chris_web.png","mime":"image\/png","size":176701,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Le%20Dantec%2C%20Chris_web.png?itok=jVhI5at-"}}},"media_ids":["625196","625197","625198"],"related_links":[{"url":"https:\/\/www.spreaker.com\/user\/10751784\/ep8-don-t-get-burned-understanding-tech-","title":"Tech Unbound EP8: Don\u2019t Get Burned. Understanding Tech Adoption Among Firefighters"}],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Cem\u003EGVU Center and College of Computing\u003C\/em\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"627023":{"#nid":"627023","#data":{"type":"news","title":"New $1.2 Million NSF Grant Aims to Improve Treatment for PTSD Patients","body":[{"value":"\u003Cp\u003EPost-traumatic stress disorder (PTSD), particularly among veterans returning from combat zones or other troubling situations, is a devastating mental condition with tremendous individual and societal costs. About 12 percent of Gulf War veterans and 15 percent of Vietnam veterans suffer from PTSD according to a 2019 article in \u003Cem\u003EU.S. News and World Report\u003C\/em\u003E. 
While recovery is possible, it requires intensive therapeutic engagement that less than 50 percent of affected veterans actually seek out.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.nsf.gov\/awardsearch\/showAward?AWD_ID=1915504\u0026amp;HistoricalAwards=false\u0022 target=\u0022_blank\u0022\u003EA new four-year, $1.2 million grant\u003C\/a\u003E from the \u003Ca href=\u0022http:\/\/nsf.gov\u0022 target=\u0022_blank\u0022\u003ENational Science Foundation\u003C\/a\u003E to a team of researchers from Georgia Tech, Emory University, and the University of Rochester will help bridge this gap by funding the development of a computational assessment toolkit for PTSD patients and clinicians, called PE Collective Sensing System (PECSS). PECSS, which will sit atop the PE Coach App developed by the Veterans Health Administration and the Department of Defense, will aim to improve current treatment practices and increase the number of veterans who seek treatment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;PECSS will allow clinicians to use automated predictions to deliver better therapeutic treatment and individualized feedback, and patients to better understand the progress they are making and how to improve their exposure exercises,\u0026rdquo; said \u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E, a Senior Research Scientist in \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech\u0026rsquo;s School of Interactive Computing\u003C\/a\u003E and the principal investigator on the project.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Ca href=\u0022https:\/\/podcasts.apple.com\/us\/podcast\/is-technology-game-changer-for-care-ptsd-patients-rosa\/id1435564422?i=1000451292353\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003E[THE INTERACTION HOUR PODCAST: IS TECHNOLOGY A GAME CHANGER FOR CARE OF PTSD PATIENTS?, FEATURING DR. 
ROSA ARRIAGA]\u003C\/strong\u003E\u003C\/a\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ECurrently, the most common and empirically supported treatment for PTSD is \u0026ldquo;prolonged exposure\u0026rdquo; (PE) therapy. The treatment consists of imaginal exposure, where patients imagine themselves and narrate their traumatic event, and in-vivo exposure to real-world stimuli in safe but challenging environments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are, however, challenges in data collection and extraction, which are often subjective and narrow. This project will address those challenges by developing a novel, user-tailored sensing system that can record and transfer information from exercises, continuously monitoring patients and clinicians.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Clinicians are in urgent need of methods, tools, and data to efficiently track, assess, and respond to mental health needs throughout the treatment process,\u0026rdquo; Arriaga said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project will involve insights from experts in multiple fields \u0026ndash; ubiquitous computing, human-computer interaction, applied machine learning, psychology, and more. When complete, the system will be deployed at the \u003Ca href=\u0022https:\/\/www.emoryhealthcare.org\/centers-programs\/veterans-program\/index.html\u0022 target=\u0022_blank\u0022\u003EEmory Healthcare Veterans Program\u003C\/a\u003E, a nationally renowned initiative that treats members of the military suffering from PTSD.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The Trauma and Anxiety Recovery Program that includes the Emory Veterans Program has been on the cutting edge in using technology to advance the care of people suffering with anxiety since it was founded by Dr. 
\u003Cstrong\u003EBarbara Rothbaum\u003C\/strong\u003E over 25 years ago,\u0026rdquo; said \u003Cstrong\u003ESheila Rauch\u003C\/strong\u003E, an associate professor in Emory\u0026rsquo;s Department of Psychiatry and Behavioral Sciences and a co-principal investigator on the project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As a team of international experts in PTSD treatment, we integrate technology to speed response to treatment and help patients to visualize the changes as they respond to care. Our aim is to use this real-time data to fine-tune practice for the individual patient and learn across patients how we can improve care.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Mental health clinicians and their patients are in urgent need of 21\u003Csup\u003Est\u003C\/sup\u003E-century methods, tools, and objective data to optimize therapy,\u0026rdquo; added Emory Assistant Professor \u003Cstrong\u003EAndrew Sherrill\u003C\/strong\u003E, another co-principal investigator. 
\u0026ldquo;This partnership will bring together innovators in HCI and evidence-based psychotherapy to transform mental health care for PTSD patients.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis grant is provided under the \u003Ca href=\u0022https:\/\/www.nsf.gov\/funding\/pgm_summ.jsp?pims_id=504739\u0022 target=\u0022_blank\u0022\u003ENSF Smart and Connected Health Funding Program\u003C\/a\u003E in its \u003Ca href=\u0022https:\/\/www.nsf.gov\/div\/index.jsp?div=IIS\u0022 target=\u0022_blank\u0022\u003EDivision of Information and Intelligent Systems\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The grant -- which includes Georgia Tech, Emory, and the University of Rochester -- will fund the development of a computational assessment toolkit for patients and clinicians."}],"uid":"33939","created_gmt":"2019-10-02 16:44:21","changed_gmt":"2019-10-04 12:56:17","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-02T00:00:00-04:00","iso_date":"2019-10-02T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"627021":{"id":"627021","type":"image","title":"Veteran battling PTSD","body":null,"created":"1570033840","gmt_created":"2019-10-02 16:30:40","changed":"1570033840","gmt_changed":"2019-10-02 16:30:40","alt":"Veteran battling PTSD with head in 
hands","file":{"fid":"238743","name":"Battling_PTSD_(4949341330).jpg","image_path":"\/sites\/default\/files\/images\/Battling_PTSD_%284949341330%29.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Battling_PTSD_%284949341330%29.jpg","mime":"image\/jpeg","size":370770,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Battling_PTSD_%284949341330%29.jpg?itok=zsgS4tt0"}}},"media_ids":["627021"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/academics\/human-centered-computing-phd-program","title":"Human-Centered Computing at Georgia Tech"},{"url":"https:\/\/podcasts.apple.com\/us\/podcast\/is-technology-game-changer-for-care-ptsd-patients-rosa\/id1435564422?i=1000451292353","title":"The Interaction Hour: Is Technology a Game Changer for Care of PTSD Patients?"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181216","name":"cc-research"},{"id":"181214","name":"ic-hcc"},{"id":"182582","name":"ic-ai-ml"},{"id":"181949","name":"PTSD"},{"id":"55581","name":"military veterans"},{"id":"10681","name":"veterans"},{"id":"11178","name":"Rosa Arriaga"},{"id":"166848","name":"School of Interactive Computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71891","name":"Health and Medicine"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"626926":{"#nid":"626926","#data":{"type":"news","title":"Cleaning Up the Community: Shagun Jhaver Explores Impact of Content Moderation Practices on Social Media","body":[{"value":"\u003Cp\u003EOnline communities like Reddit or Twitter act like town halls, where opinions are shared and everyone, in theory, has a voice. Only, it doesn\u0026rsquo;t always work like that. What was once optimistically viewed as a solution to public discourse, offering promises of open and logical discussions where anyone with a keyboard and an internet connection could speak their piece, has instead become a bit of a Wild West. Message boards have degraded into sources of harassment, misinformation, radicalization, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENow, the largely techno-utopian view has been adjusted, and moderation of content has become the norm. The question is: how can you moderate, while also maintaining the promise of free speech? Also, how can you avoid discouraging posters whose content was moderated or removed while encouraging them to remain a part of the public discourse?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese are just a few of the questions being posed and pursued by \u003Cstrong\u003EShagun Jhaver\u003C\/strong\u003E, a Ph.D. 
student in \u003Ca href=\u0022http:\/\/gatech.edu\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech\u003C\/a\u003E\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC), whose papers at the upcoming \u003Ca href=\u0022http:\/\/cscw.acm.org\/2019\/\u0022 target=\u0022_blank\u0022\u003EComputer-Supported Cooperative Work and Social Computing\u003C\/a\u003E (CSCW) conference provide some context and, perhaps, solutions.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003EFairness, accountability, and transparency\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EJhaver is a computer scientist at heart. He earned his bachelor\u0026rsquo;s degree in India in electrical engineering and then studied computer science for his master\u0026rsquo;s at the University of Texas at Dallas. Like most in IC, though, his primary focus is on humans.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One of the main attractions to our School was that, although it is a computer science school, I am able to do interviews and surveys with people,\u0026rdquo; Jhaver explained. \u0026ldquo;What good are technological developments if they don\u0026rsquo;t work for humans, if they don\u0026rsquo;t improve society? In order to understand the interactions between technology and society, I wanted to develop a mixed-methods background, and the resources and faculty here are perfect for that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of his first projects as a graduate student was investigating communication on social media around the Black Lives Matter movement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I wanted to understand the emergent collective participation around this movement and what people were feeling on the ground in the moment,\u0026rdquo; he said. 
\u0026ldquo;That\u0026rsquo;s how I entered this area of social computing.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESocial computing is an area of computer science that focuses on the intersection between social behavior and computational systems. Integral to Jhaver\u0026rsquo;s study was how social media and the data gathered within those systems reflected what was happening within society as a whole.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere may be no more adequate reflection of this phenomenon than on Reddit and Twitter, two communities his research has looked at. At CSCW, he\u0026rsquo;ll present a handful of studies that have examined the topic of content moderation. One of the papers, titled \u003Ca href=\u0022https:\/\/medium.com\/acm-cscw\/does-transparency-in-moderation-really-matter-b86bab9b4810\u0022 target=\u0022_blank\u0022\u003E\u003Cem\u003EDoes Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit\u003C\/em\u003E\u003C\/a\u003E, earned a best paper award. Another, titled \u003Ca href=\u0022https:\/\/medium.com\/acm-cscw\/did-you-suspect-the-post-would-be-removed-1dd1839277cb\u0022 target=\u0022_blank\u0022\u003E\u003Cem\u003EDid You Suspect the Post Would be Removed?: Understanding User Reactions to Content Removals on Reddit\u003C\/em\u003E\u003C\/a\u003E, earned an honorable mention.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHow, he wonders, do you develop good moderation practices that enforce community rules while also maintaining the free expression of ideas? And, what practices improve how posters feel about their moderated content and encourage them to continue participating in these forums?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Content moderation is more nuanced than just editing and removing content,\u0026rdquo; Jhaver said. 
\u0026ldquo;It\u0026rsquo;s about the overall experience of the user and the community and how they interact.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHis research came to a few conclusions:\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne, fairness matters; two, accountability is important; three, the platforms should be transparent in their decisions. From the perspective of end users, that means that rules are clear and easy to follow, and when the post is removed they are notified and given a clear explanation of why. If they appeal, they are given an appropriate response.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut there are multiple stakeholders involved in the exchange, and who determines what is fair?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These Reddit moderators are volunteers,\u0026rdquo; Jhaver said. \u0026ldquo;Is it fair for us to expect them to take on these increased responsibilities for providing explanations?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn other words, these issues are much more nuanced than they would seem to many casual participants. \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E, a professor in IC and Jhaver\u0026rsquo;s co-advisor (with IC adjunct faculty \u003Cstrong\u003EEric Gilbert\u003C\/strong\u003E), said she can\u0026rsquo;t think of other research that has examined this aspect of social communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I don\u0026rsquo;t think it has been studied \u0026ndash; okay, your content was just removed, so how do you feel about that?\u0026rdquo; she said. \u0026ldquo;Taking that other side of it is unique.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003EGiving everyone a voice\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ESo, why do these explanations even matter? Why not just remove bad content and move on?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;But free speech is interesting,\u0026rdquo; Jhaver said. 
\u0026ldquo;There\u0026rsquo;s this dichotomy where if you are free to harass certain people over their race, gender, or other aspects of identity, then you are preventing them from having the voice to speak their truth. So, you are infringing on their freedom of speech. That\u0026rsquo;s why there\u0026rsquo;s this need.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhatever the case, these issues are not going away. Methods of communication will continue to change over time, particularly as technology continues to advance. But, Jhaver said, these conversations aren\u0026rsquo;t anything new either.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These are age-old problems,\u0026rdquo; he said. \u0026ldquo;Harassment, free speech, suppression of free speech. These topics have always been discussed, but the internet has changed the way we see them and changed how they manifest themselves.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I want my research to help minorities and other vulnerable groups have a greater voice in society,\u0026rdquo; Jhaver said. \u0026ldquo;I want to contribute to the design of more equitable, inclusive, and participatory technologies.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Online communities, once thought to be a place where everyone had a voice, have instead become a Wild West. 
Understanding the impact of content moderation on user behavior could improve the free flow of ideas."}],"uid":"33939","created_gmt":"2019-09-30 19:34:51","changed_gmt":"2019-09-30 19:34:51","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-09-30T00:00:00-04:00","iso_date":"2019-09-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"626923":{"id":"626923","type":"image","title":"Shagun Jhaver","body":null,"created":"1569871871","gmt_created":"2019-09-30 19:31:11","changed":"1569871871","gmt_changed":"2019-09-30 19:31:11","alt":"Shagun Jhaver","file":{"fid":"238699","name":"Shagun_Jhaver.JPG","image_path":"\/sites\/default\/files\/images\/Shagun_Jhaver.JPG","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Shagun_Jhaver.JPG","mime":"image\/jpeg","size":168715,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Shagun_Jhaver.JPG?itok=LZnt5yPA"}}},"media_ids":["626923"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/content\/human-centered-computing-cognitive-science","title":"Human-Centered Computing at Georgia Tech"},{"url":"https:\/\/www.ic.gatech.edu\/content\/social-computing-computational-journalism","title":"Social Computing at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182508","name":"cc-research; ic-hcc; ic-social-computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"625602":{"#nid":"625602","#data":{"type":"news","title":"The Google Internship That Almost Wasn\u2019t","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003ESam Harvey\u003C\/strong\u003E, a master of science student in human-computer interaction, was presented with possibly a once-in-a-lifetime opportunity to work on a major product at Google, but one thing stood in his way \u0026ndash; he didn\u0026rsquo;t have an online portfolio the recruiter was searching for.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When I got that email, I responded within 30 milliseconds and said I\u0026rsquo;d have a website up in two days,\u0026rdquo; recalls Harvey, who is now well into a four-month internship at Google\u0026rsquo;s Switzerland campus in Zurich.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMaking good on his promise helped Harvey secure a spot in the interview process, which spanned several early-morning video calls to Zurich (six hours ahead of Atlanta) and two rounds of vetting over about 45 days.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHarvey is now experiencing firsthand the culture that such a selective hiring process helps cultivate. He jokes about trying not to become too \u0026ldquo;googley\u0026rdquo; \u0026ndash; an enigmatic term that hints at Google\u0026rsquo;s sometimes utopian ideals \u0026ndash; but he was soon struck with the weight of the responsibility given to him.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s incredibly humbling to be working on Google Flights,\u0026rdquo; Harvey says, referring to the product that has been his focus. The tool is part of Google Travel, designed to be a comprehensive resource for planning trips.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Releasing a bad product impacts a lot of people. 
When designing, I might think of grandparents who want to fly out to visit their grandchildren,\u0026rdquo; Harvey says. \u0026ldquo;How do I help them?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;What I\u0026rsquo;m designing can either make the experience easier or harder. Imagine making something that will ruin the day for a million grandmothers. I don\u0026rsquo;t want to do that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHarvey\u0026rsquo;s internship as a UX designer \u0026ndash; short for user experience designer \u0026ndash; is what Harvey himself wants to make of it. The Google culture that includes 24\/7 free food, nap rooms, and flex schedules (just a few of the perks) is designed to \u0026ldquo;let you maximize your full potential,\u0026rdquo; as Harvey puts it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUX designers, as the name implies, often focus on how to design an experience that differentiates a product from its competitors. Harvey approaches his work by starting from a place of empathy and figuring out how he would respond to a product. He then evaluates evidence-based designs to quantify what works, and finally, he sets out on the long, chaotic journey to build something truly special.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are no shortcuts in the process, especially not at Google. A glimpse into Harvey\u0026rsquo;s experience shows this.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I listen \u0026ndash; and I listen hard \u0026ndash; to the user researchers and product experts. I turn off the part of my brain where I want to talk over people because I think I have something brilliant to say. I switch off my ego.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnalyzing Google\u0026rsquo;s unmatched volume of user data is helping Harvey to identify some of the most vexing problems in online flight planning. 
He\u0026rsquo;ll let the information he gathers marinate in his brain for a long time, then start sketching out on paper as many solutions as possible \u0026ndash; literally anything that might solve the given challenge.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Three percent of the ideas are gonna make it out of the furnace,\u0026rdquo; he jokes, referring to the process where team members kindly discard the concepts that don\u0026rsquo;t pass muster.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhat survives undergoes even closer scrutiny, and the intensity of the process leaves only those product designs that might work well as part of a sprawling Google ecosystem operating around the clock.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHarvey still marvels that a team of some of the most talented people he\u0026rsquo;s ever met \u0026ndash; working together on the same challenges \u0026ndash; doesn\u0026rsquo;t create a toxic culture of competing alphas. Rather, it\u0026rsquo;s the opposite.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This place is conducive to making some awesome stuff and not making people feel small,\u0026rdquo; he says. \u0026ldquo;My favorite part of being here is being pushed every day and striving to be a contributing member. I\u0026rsquo;d recommend coming to Google just for the growth potential.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHarvey mentions the benefit of Georgia Tech\u0026rsquo;s MS-HCI program, the guidance of program director Dick Henneman, and how both prepared him for Google. 
Taking the summer job delayed graduation for him, but Harvey is OK with this, realizing that this rare opportunity will shape the rest of his career.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"MS-HCI Student Lands at the Search Giant and Learns That Only the Best Product Ideas Survive"}],"field_summary":[{"value":"\u003Cp\u003ESam Harvey, MS student in human-computer interaction, is working on Google Flights and making sure his design decisions don\u0026#39;t ruin your travel plans, or those for a million or so grandmothers.\u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Sam Harvey, MS student in human-computer interaction, is working on Google Flights and making sure his design decisions don\u0027t ruin your travel plans, or those for a million or so grandmothers. "}],"uid":"27592","created_gmt":"2019-09-04 17:04:41","changed_gmt":"2019-09-05 13:55:22","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-09-05T00:00:00-04:00","iso_date":"2019-09-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"625651":{"id":"625651","type":"image","title":"Sam Harvey","body":null,"created":"1567691094","gmt_created":"2019-09-05 13:44:54","changed":"1567691094","gmt_changed":"2019-09-05 13:44:54","alt":"","file":{"fid":"238183","name":"Sam_Harvey_zurich.jpg","image_path":"\/sites\/default\/files\/images\/Sam_Harvey_zurich.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Sam_Harvey_zurich.jpg","mime":"image\/jpeg","size":554054,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Sam_Harvey_zurich.jpg?itok=TqifyLoB"}}},"media_ids":["625651"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Cem\u003EGVU Center and College of Computing\u003C\/em\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"624901":{"#nid":"624901","#data":{"type":"news","title":"Researchers Use Social Media to Help Measure Outcomes of Psychiatric Medication","body":[{"value":"\u003Cp\u003ESocial media posts are becoming a vital tool for assessing the effects of psychiatric medication, according to a new study from researchers in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC). The approach offers clinicians a more effective method to measure mental health outcomes in a notoriously imprecise space.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn treating mental illness, clinicians are often forced into a trial-and-error approach to prescribing medication to patients. Each patient may react differently \u0026ndash; oftentimes with negative outcomes \u0026ndash; to drugs that have been matched with conditions based on incomplete and potentially biased data from clinical trials.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In most non-mental health treatment where particular symptoms like a fever or chronic pain might indicate a specific physical condition, there exists a more definitive matching approach to prescription,\u0026rdquo; said \u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E, an IC Ph.D. student who led the study. 
\u0026ldquo;In psychiatric care, that matching approach is unknown.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPatients taking the wrong medication could experience increased depression or anxiety, suicidal ideation, or other symptoms like fluctuations in sleep and weight. In many cases, they are forced to return to their clinician for a change in medication or, in worse cases, may lose trust in the medication entirely and stop using it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Considering that five of the top 50 drugs sold in the United States are psychiatric medications, it\u0026rsquo;s extremely important to understand how they actually work on individuals,\u0026rdquo; Saha said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the past, clinical trials have taken a disease-centered approach that attempts to match specific medications with psychiatric symptoms, neglecting the psychoactive effects of the drug. Trials are conducted for smaller cohorts over shorter periods of time, eliminate some individuals who experience more extreme symptoms, and are often biased, being conducted by the drug companies themselves.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAdopting a \u0026ldquo;patient-centered\u0026rdquo; model that considers individual outcomes for patients using a specific medication, this study leveraged longitudinal and large-scale social media data to achieve a form of digital-based matching of patients to medications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers collected a list of medications approved by the Food and Drug Administration, then collected Tweets that mentioned these medications between 2015 and 2016. From that, they gathered over 600,000 Tweets that identified users of a specific medication. 
Interestingly enough, their data matched the top four prescription psychiatric medications in that period: Sertraline (Zoloft), Escitalopram (Lexapro), Fluoxetine (Prozac), and Duloxetine (Cymbalta).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing a control group of random Twitter users who did not take the medication and building on prior work that showed the ability of language found in social posts to predict mental health conditions, researchers could match specific medications with their outcomes, positive or negative, after use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe findings indicated that Selective Serotonin Reuptake Inhibitors (Sertraline, Escitalopram, Fluoxetine) \u0026ndash; three of the most popular prescription medications \u0026ndash; were actually associated with worsening symptoms. Tricyclic Antidepressants like Dosulepin, Imipramine, and Clomipramine, by comparison, were more associated with improving conditions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Clinically, our findings reveal signals of the most common effects of the psychiatric medications over a large population, with the potential for improved characterization of their occurrence,\u0026rdquo; Saha writes in the paper. \u0026ldquo;Technologically, we show the potential of novel technologies in digital therapeutics.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis research, Saha said, exists as a proof of concept to show levels of a specific condition \u0026ndash; before and after medication use \u0026ndash; using digital data. 
He stressed it is not a replacement for clinical care, only a way to help augment treatment using additional available data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work was presented at the \u003Ca href=\u0022https:\/\/www.icwsm.org\/2019\/\u0022 target=\u0022_blank\u0022\u003E13th International AAAI Conference on Web and Social Media\u003C\/a\u003E in a paper titled \u003Cem\u003EA Social Media Study on the Effects of Psychiatric Medication Use\u003C\/em\u003E (Koustuv Saha, \u003Cstrong\u003EBenjamin Sugar\u003C\/strong\u003E, \u003Cstrong\u003EJohn Torous\u003C\/strong\u003E, \u003Cstrong\u003EBruno Abrahao\u003C\/strong\u003E, \u003Cstrong\u003EEmre Kiciman\u003C\/strong\u003E, \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E). It was awarded Outstanding Study Design Paper at the conference. 
It is funded in part by a grant from the \u003Ca href=\u0022http:\/\/www.nih.gov\u0022 target=\u0022_blank\u0022\u003ENational Institutes of Health\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"This research exists as a proof of concept to show levels of a specific condition \u2013 before and after medication use \u2013 using digital data."}],"uid":"33939","created_gmt":"2019-08-21 16:58:57","changed_gmt":"2019-08-21 16:58:57","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-21T00:00:00-04:00","iso_date":"2019-08-21T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624519":{"id":"624519","type":"image","title":"Social Media Logos","body":null,"created":"1565805908","gmt_created":"2019-08-14 18:05:08","changed":"1565805908","gmt_changed":"2019-08-14 18:05:08","alt":"A keyboard featuring different social media logos","file":{"fid":"237806","name":"Social Media logos.jpg","image_path":"\/sites\/default\/files\/images\/Social%20Media%20logos.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Social%20Media%20logos.jpg","mime":"image\/jpeg","size":215846,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Social%20Media%20logos.jpg?itok=G7qWkSGs"}}},"media_ids":["624519"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/content\/social-computing-computational-journalism","title":"Social Computing Research at Georgia Tech"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182015","name":"cc-research; ic-ai-ml; ic-hcc; ic-social-computing"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"624130":{"#nid":"624130","#data":{"type":"news","title":"\u0027MacGyver\u0027-like Robot Can Build Own Tools By Assessing Form, Function of Supplies","body":[{"value":"\u003Cp\u003EThanks to new technology that enables them to create simple tools, robots may be on the verge of their own version of the Stone Age.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe breakthrough comes from Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/www.rail.gatech.edu\/\u0022\u003ERobot Autonomy and Interactive Learning\u003C\/a\u003E (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous \u0026ndash; and potentially life-threatening \u0026ndash; environments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe concept may sound familiar. It\u0026rsquo;s called \u0026ldquo;MacGyvering,\u0026rdquo; based off the name of a 1980s \u0026mdash; and recently rebooted \u0026mdash; television series. 
In the series, the title character is known for his unconventional problem-solving, using whatever resources are available to him.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor years, computer scientists and others have been working to provide robots with similar capabilities. In their new robot-MacGyvering work, RAIL lab researchers led by Associate Professor \u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E used as a starting point a robotics technique previously developed by former Georgia Tech Professor \u003Cstrong\u003EMike Stilman\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn this latest work, a robot trained using the team\u0026rsquo;s novel approach is given a set of optional parts and told to make a specific tool. Much like its human counterparts, the robot first examines the shape of each part and how one might be attached to another.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing machine learning, the robot is trained to match form to function \u0026ndash; which object shapes facilitate a particular outcome \u0026ndash; from numerous examples of everyday objects. For example, having learned that the concavity of bowls enables them to hold liquids, it applies that knowledge when constructing a spoon. Similarly, the robot was taught how to attach objects together from examples of materials that could be pierced or grasped.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the study, researchers successfully created hammers, spatulas, scoops, squeegees, and screwdrivers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The screwdriver was particularly interesting because the robot combined pliers and a coin,\u0026rdquo; said \u003Cstrong\u003ELakshmi Nair\u003C\/strong\u003E, a Ph.D. student in the \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and one of the researchers on the project. 
\u0026ldquo;It reasoned that the pliers were able to grasp something and said that the coin sort of matched the head of a screwdriver. Put them together, and it creates an effective tool.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurrently, the robot can reason only about shape and attachment. It cannot yet effectively reason about particular material properties, a crucial step in advancing to a real-world scenario.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/news\/623044\/robot-able-instantly-identify-household-materials-using-near-infrared-light\u0022\u003E\u003Cstrong\u003E[RELATED: Robot Able to Instantly Identify Household Materials Using Near-Infrared Light]\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;People reason that hammers are sturdy and strong, so you wouldn\u0026rsquo;t make a hammer out of foam blocks,\u0026rdquo; Nair said. \u0026ldquo;We want to reach that level of reasoning in our work, which is something we\u0026rsquo;re working on now.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe inspiration for the work comes from the popular story of Apollo 13, the doomed seventh crewed flight of the Apollo space program. After an oxygen tank in the ship\u0026rsquo;s service module exploded two days into the mission, crew members were forced to improvise modifications to the carbon dioxide removal system.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite a dangerously tight window of time and extremely high tension among all aboard and at mission control, the rescue proved successful. Nair and collaborators hope this research will prove foundational to future robotics technology that could reason faster and without the burden of stress.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;They were able to make this filter, but the solution took a long time to come up with,\u0026rdquo; Nair said. 
\u0026ldquo;We want to make robots that can assist humans in these kinds of scenarios to take the pressure off of them to come up with innovative solutions and potentially save their lives.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work was presented at the 2019 Robotics: Science and Systems conference in a paper titled \u003Ca href=\u0022http:\/\/www.roboticsproceedings.org\/rss15\/p09.pdf\u0022\u003E\u003Cem\u003EAutonomous Tool Construction Using Part Shape and Attachment Prediction \u003C\/em\u003E\u003C\/a\u003E(Lakshmi Nair, \u003Cstrong\u003ENithin Shrivatsav\u003C\/strong\u003E, \u003Cstrong\u003EZackory Erickson\u003C\/strong\u003E, Sonia Chernova). It is supported in part by grants from the \u003Ca href=\u0022https:\/\/www.nsf.gov\/\u0022\u003ENational Science Foundation\u003C\/a\u003E and the \u003Ca href=\u0022https:\/\/www.onr.navy.mil\/\u0022\u003EOffice of Naval Research\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The breakthrough is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous and potentially life-threatening environments."}],"uid":"33939","created_gmt":"2019-08-07 21:04:09","changed_gmt":"2019-08-12 20:08:25","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-07T00:00:00-04:00","iso_date":"2019-08-07T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624128":{"id":"624128","type":"image","title":"Robot MacGyvering - Lakshmi Nair 1","body":null,"created":"1565210646","gmt_created":"2019-08-07 20:44:06","changed":"1565210646","gmt_changed":"2019-08-07 20:44:06","alt":"Lakshmi Nair stands next to a robotic arm with tool parts on a table","file":{"fid":"237702","name":"Macgyvering 
MAIN.jpg","image_path":"\/sites\/default\/files\/images\/Macgyvering%20MAIN.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Macgyvering%20MAIN.jpg","mime":"image\/jpeg","size":200873,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Macgyvering%20MAIN.jpg?itok=iU3IpDzd"}}},"media_ids":["624128"],"related_links":[{"url":"http:\/\/rail.gatech.edu","title":"Robot Autonomy and Interactive Learning Lab"},{"url":"https:\/\/www.ic.gatech.edu\/content\/robotics-computational-perception","title":"Robotics and Computational Perception Research at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181920","name":"cc-research; ic-ai-ml; ic-robotics"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"624042":{"#nid":"624042","#data":{"type":"news","title":"Civic Data Science Pairs with Smart Cities for Sixth Summer","body":[{"value":"\u003Cp\u003EStudents presented data science solutions for problems like climate change and traffic at the \u003Ca href=\u0022https:\/\/civicdatascience.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003ECivic Data Science\u003C\/a\u003E (CDS) finale on 
July 28. This was the first year the National Science Foundation\u0026ndash;funded summer program partnered with the \u003Ca href=\u0022https:\/\/smartcities.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EGeorgia Smart Communities Challenge\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESince 2013, undergraduates from colleges across the country have come to campus for the 10-week program, where they learn how to use data science to tackle civic problems. This year, CDS paired with Smart Cities\u0026rsquo; \u003Ca href=\u0022https:\/\/www.news.gatech.edu\/2019\/06\/18\/georgia-smart-communities-challenge-selects-four-new-community-projects\u0022 target=\u0022_blank\u0022\u003ESmart Communities\u003C\/a\u003E, an initiative that integrates technology-based research with a community\u0026rsquo;s goals.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis year\u0026rsquo;s CDS projects were:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/smartcities.ipat.gatech.edu\/chatham-county\u0022 target=\u0022_blank\u0022\u003ESmart Sea Level Tools for Emergency Planning and Response\u003C\/a\u003E, in which students found a better way to conduct maintenance for 30 smart sea level sensors that are part of a program run by School of Computer Science and \u003Ca href=\u0022http:\/\/ipat.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EInstitute for People and Technology Senior Research Scientist\u003C\/a\u003E \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/fac\/Russell.Clark\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ERussell Clark\u003C\/strong\u003E\u003C\/a\u003E in Savannah, Georgia.\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/smartcities.ipat.gatech.edu\/city-albany\u0022 target=\u0022_blank\u0022\u003EAlbany Housing Data Initiative\u003C\/a\u003E, where students cleaned city data from disparate sources and created a database to help the city of Albany, Georgia, understand the effect of programs to reduce energy costs.\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/smartcities.ipat.gatech.edu\/gwinnett-county\u0022 target=\u0022_blank\u0022\u003EConnected Vehicle Technology Master Plan\u003C\/a\u003E, in which students analyzed data to better handle the flow of traffic in Gwinnett County for emergency vehicles.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EThe program\u0026rsquo;s co-director and SCS Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~ewz\/Welcome.html\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EEllen Zegura\u003C\/strong\u003E\u003C\/a\u003E believes students connected to these projects more because of their real-world application.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s a pleasure to watch the work progress from the early first days to getting to see how much you all have learned and how much you all understand the context of the projects you\u0026rsquo;re doing,\u0026rdquo; she said during the finale ceremony held in the Technology Square Research Building.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s not just that you built a database, but here\u0026rsquo;s what a sensor looks like and here\u0026rsquo;s how it can go wrong.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe students agree. \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/angelalau15\/\u0022 target=\u0022_blank\u0022\u003EAngela Lau\u003C\/a\u003E\u003C\/strong\u003E, a rising sophomore at Cornell University, wanted an internship that could help the community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was really interested in this program because of the local applicability of the projects,\u0026rdquo; she said. \u0026ldquo;It surprised me how real it was and how we could help a community over a few weeks.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWorking with real data also presented unique learning experiences that students wouldn\u0026rsquo;t normally encounter in a classroom setting.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There were a lot of challenges working with real data,\u0026rdquo; said \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/kutub-gandhi-83439514b\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EKutub Gandhi\u003C\/strong\u003E\u003C\/a\u003E, a rising senior at Rice University. \u0026ldquo;Our entire project was figuring out what was wrong with the data collected from sea level sensors.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor many students, this was their first time learning data science skills that they can now use throughout their careers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I had heard of data visualization but didn\u0026rsquo;t know much about it,\u0026rdquo; said \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/david-s-li\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EDavid Li\u003C\/strong\u003E\u003C\/a\u003E, a rising senior at Stony Brook University. \u0026ldquo;But by the end I realized, \u0026lsquo;Wow I learned this and I never knew I could do this before!\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Students presented data science solutions for problems like climate change and traffic at the Civic Data Science (CDS) finale on July 28.
"}],"uid":"34541","created_gmt":"2019-08-06 17:57:38","changed_gmt":"2019-08-12 17:59:10","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-06T00:00:00-04:00","iso_date":"2019-08-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624043":{"id":"624043","type":"image","title":"CDS 2019","body":null,"created":"1565117881","gmt_created":"2019-08-06 18:58:01","changed":"1565117881","gmt_changed":"2019-08-06 18:58:01","alt":"CDS students","file":{"fid":"237676","name":"IMG_8550.jpg","image_path":"\/sites\/default\/files\/images\/IMG_8550.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_8550.jpg","mime":"image\/jpeg","size":655710,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_8550.jpg?itok=xDtoUfD2"}}},"media_ids":["624043"],"groups":[{"id":"50875","name":"School of Computer Science"},{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:tess.malone@cc.gatech.edu\u0022\u003Etess.malone@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["tess.malone@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"621721":{"#nid":"621721","#data":{"type":"news","title":"AIs and Humans Become \u2018Creative Equals\u2019 with New Design Tool","body":[{"value":"\u003Cp\u003EGeorgia Tech researchers have created software with a built-in AI agent that works alongside human designers in real time to create game levels. 
The software, dubbed MorAI Maker in a nod to Nintendo\u0026rsquo;s game Mario Maker, uses new machine learning techniques for game content generation that allow humans and an\u0026nbsp;AI agent\u0026nbsp;to work in a turn-based fashion on the same digital canvas. It is the first tool of its kind.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrough two studies with more than 100 game hobbyists and practicing game developers, the Georgia Tech team found that people varied significantly in how they used the AI.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We did not explicitly structure any roles into our machine learning models, but we still found that users naturally projected different roles onto the same AI and took corresponding roles,\u0026rdquo; said \u003Cstrong\u003EMatthew Guzdial\u003C\/strong\u003E, Ph.D. student in computer science and lead researcher.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to researchers, after refining the machine learning model, the AI agent was capable of picking up on users\u0026rsquo; preferences for level structures. A majority of game developers reported that they would use the AI co-designer in the software, which was developed in Unity.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers observed four major categories of roles that people assigned to their virtual partners.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESome participants viewed the AI as a friend.
One participant prompted the AI to begin the level design, forfeiting her own turn and stating, \u0026ldquo;Let\u0026rsquo;s see what my friend comes up with.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESome participants wanted an equal design partner (collaborator), others seemed to expect the AI to adhere to their specific design beliefs or instructions (student), and some designers followed the AI\u0026rsquo;s lead or expected to be evaluated on their design (manager).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Human designers in the study demonstrated a willingness to adapt their own design practices to the AI, sometimes as a means of attempting to determine how best to interact with it,\u0026rdquo; said Guzdial.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EConversely, every participant had at least one interaction where the AI adapted to the human designs. For some, this was the exception rather than the rule. \u0026ldquo;The [AI] agent placed objects fairly arbitrarily, in places where it didn\u0026rsquo;t really affect gameplay, just looked weird,\u0026rdquo; said another participating professional designer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe AI agent embedded in the game design software was trained on implicit feedback from the user. If a user kept the AI\u0026rsquo;s game level additions, the AI received a \u0026ldquo;reward,\u0026rdquo; and if the user removed them a \u0026ldquo;penalty\u0026rdquo; was given to the AI. The AI was not allowed to remove human-generated elements.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne designer said, \u0026ldquo;It was nice to be surprised by the AI partner. It prompted conversation\/discussion in my head.\u0026rdquo; Another said, \u0026ldquo;I was running out of ideas, then prompted the AI for help, and I said, \u0026lsquo;Oh yeah I forgot about these things!\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite mostly positive feedback, not everyone found the tool to be consistently valuable. 
As one participant put it, \u0026ldquo;I could see using this tool as a way to give myself inspiration. But, if I had more specific goals in mind... I would have found it more inhibiting than useful.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGuzdial says MorAI Maker is intended as a design aid, not as a replacement for designers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The AI system is developed in favor of augmenting, not replacing, creative work,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe full research,\u0026nbsp;\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1901.06417.pdf\u0022\u003E\u003Cem\u003EFriend, collaborator, student, manager: How design of an AI-driven game level editor affects creators\u003C\/em\u003E\u003C\/a\u003E, is published in the 2019 Proceedings of the ACM Conference on Human Factors in Computing Systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research is based upon work supported by the National Science Foundation under Grant No. IIS-1525967. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"Video Game Developers Use an AI Partner in Wildly Different Ways, From Friend to Boss"}],"field_summary":[{"value":"\u003Cp\u003EWill video game developers welcome AI assistance in their workflow? In short, yes, and in wildly different ways, based on research from Georgia Tech published this month.\u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Will video game developers welcome AI assistance in their workflow? In short, yes, and in wildly different ways, based on research from Georgia Tech published this month.
"}],"uid":"27592","created_gmt":"2019-05-16 11:37:38","changed_gmt":"2019-08-12 14:50:52","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-05-16T00:00:00-04:00","iso_date":"2019-05-16T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"621722":{"id":"621722","type":"image","title":"MorAI Maker Game Design Tool","body":null,"created":"1558007459","gmt_created":"2019-05-16 11:50:59","changed":"1558007477","gmt_changed":"2019-05-16 11:51:17","alt":"","file":{"fid":"236823","name":"MorAI Maker creations.png","image_path":"\/sites\/default\/files\/images\/MorAI%20Maker%20creations.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/MorAI%20Maker%20creations.png","mime":"image\/png","size":706743,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/MorAI%20Maker%20creations.png?itok=33_P6TT9"}}},"media_ids":["621722"],"related_links":[{"url":"https:\/\/www.youtube.com\/watch?v=UkMeM5Ty1lA\u0026feature=youtu.be\u0026t=563","title":"VIDEO: Early Interaction with AI Creative Partner"},{"url":"https:\/\/www.spreaker.com\/user\/10751784\/tu-ep6-video-game-devs-react-to-ai","title":"Tech Unbound Podcast EP6: Video Game Developers React in Wildly Different Ways to AI-Enabled Software"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Cem\u003ECollege of Computing and GVU 
Center\u003C\/em\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"624291":{"#nid":"624291","#data":{"type":"news","title":"AI \u0027Performers\u0027 Take Center Stage and Get Creative with People in Public Spaces","body":[{"value":"\u003Cp\u003EResearchers at Georgia Tech are seeking to improve \u0026ldquo;artificial intelligence literacy\u0026rdquo; and give people opportunities to engage directly with AI systems in order to understand the potential and capabilities of the technology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAI-assisted tech is increasingly common, but actions by these autonomous programs are often hard to spot in people\u0026rsquo;s daily use of devices and online services.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s Expressive Machinery Lab has developed exhibitions where the AI agents are front-and-center and people are able to create with them in public spaces. These AIs have included a dance partner, visual storyteller, music maker, and comedic improv performer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There are common misconceptions about what AI is, what it is capable of, and how it works,\u0026rdquo; said \u003Cstrong\u003EBrian Magerko\u003C\/strong\u003E, professor of digital media and director of the Expressive Machinery Lab. \u0026ldquo;AI systems in public spaces that can engage as active participants in co-creative activities have the potential to serve as avenues for AI literacy. We believe this work pushes these efforts forward considerably.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe exhibitions\u0026nbsp;involving live interactions between people and AIs \u0026ndash; what the researchers call co-creative experiences \u0026ndash; have taken place across the country since 2013 at academic conferences, art festivals, museums, and other venues. 
\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe multi-year endeavor has resulted in a design blueprint developed by the researchers that shows how to build AI experiences for public spaces where audiences or performers can create with an AI partner.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Museums and other public spaces can serve as alternative venues for AI literacy initiatives, complementing formal education and broadening access to opportunities to interact with and learn about AI by both adults and children who may not have AI devices in their homes or schools,\u0026rdquo; said \u003Cstrong\u003EDuri Long\u003C\/strong\u003E, human-centered computing Ph.D. student at Georgia Tech and a researcher involved in the work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers encountered challenges unique to making \u0026ldquo;creative AIs,\u0026rdquo; such as how to build systems that engage people with different tastes, AIs that perform over sustained periods of time, and AIs that can adapt to unpredictable human behavior.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor example, the AI dance partner, known as LuminAI and the oldest of the group, doesn\u0026rsquo;t have fingers, so any naughty hand gestures aren\u0026rsquo;t processed in the AI\u0026rsquo;s dance routine.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our AI agents are unlike many other AIs, which usually have a specific task to accomplish,\u0026rdquo; Long said. \u0026ldquo;Our work involves open-ended co-creative AI installations where there is not a single clear goal or other reward function to optimize the AI\u0026rsquo;s behavior. Our AIs are meant to create or collaborate with a human counterpart, and that looks different every time.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile AIs in general often have large databases of sensor data (images, temperature readings, etc.)
to improve their understanding of the world, in creative areas such as dance, theater, and other performing arts there is limited data from which AIs can pull.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers overcame this in part by having their AIs learn from human partners in real time and decide what might be a suitable action. Professional performers, who want a greater degree of control, could take turns with the AI partner for a more structured performance. By contrast, an AI as part of a museum exhibit might guide participants on how to start an activity in order to engage people early on. \u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESocial interaction was also important to consider and, counter to some technology trends, the researchers discovered that human-to-human interaction could increase as a result of AI involvement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELuminAI, the dancing AI, prompted a couple to do the salsa, two friends to start a synchronized dance routine, and a group of teenagers to perform in a dance circle.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe comedic AI in the roster, called Robot Improv Circus, allows an audience to watch someone interacting in VR with the AI agent and provide feedback to the person by using voice prompts and gestures to trigger in-game reward systems. This led to several groups of friends encouraging each other to try different actions with the comedic AI.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research was published in the Proceedings of the Creativity \u0026amp; Cognition Conference 2019.
The paper \u003Cem\u003EDesigning Co-Creative AI for Public Spaces\u003C\/em\u003E was co-authored by Duri Long, Mikhail Jacob, and Brian Magerko.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech\u2019s Expressive Machinery Lab has developed exhibitions where the AI agents are front-and-center and people are able to create with them. These AIs have included a dance partner, visual storyteller, music maker, and improv comedian."}],"uid":"27592","created_gmt":"2019-08-09 17:13:35","changed_gmt":"2019-08-09 17:23:15","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-09T00:00:00-04:00","iso_date":"2019-08-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624288":{"id":"624288","type":"image","title":"AI Performers","body":null,"created":"1565370405","gmt_created":"2019-08-09 17:06:45","changed":"1565370439","gmt_changed":"2019-08-09 17:07:19","alt":"","file":{"fid":"237732","name":"Expressive Machinery Lab AIs.png","image_path":"\/sites\/default\/files\/images\/Expressive%20Machinery%20Lab%20AIs_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Expressive%20Machinery%20Lab%20AIs_0.png","mime":"image\/png","size":537485,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Expressive%20Machinery%20Lab%20AIs_0.png?itok=9glDJfQR"}},"624289":{"id":"624289","type":"image","title":"Duri Long","body":null,"created":"1565370460","gmt_created":"2019-08-09 17:07:40","changed":"1565370460","gmt_changed":"2019-08-09 17:07:40","alt":"","file":{"fid":"237733","name":"Duri 
Long.png","image_path":"\/sites\/default\/files\/images\/Duri%20Long.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Duri%20Long.png","mime":"image\/png","size":68901,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Duri%20Long.png?itok=sq3wsG9G"}},"624287":{"id":"624287","type":"image","title":"Brian Magerko","body":null,"created":"1565370308","gmt_created":"2019-08-09 17:05:08","changed":"1565370308","gmt_changed":"2019-08-09 17:05:08","alt":"","file":{"fid":"237731","name":"Brian Magerko.png","image_path":"\/sites\/default\/files\/images\/Brian%20Magerko.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Brian%20Magerko.png","mime":"image\/png","size":84568,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Brian%20Magerko.png?itok=DfGeRTdl"}}},"media_ids":["624288","624289","624287"],"related_links":[{"url":"https:\/\/www.youtube.com\/watch?v=K1juBtnJjTk\u0026list=PLqbYO_bYE2ClHihmAEMrP2FtqE6qpXnSF","title":"AI Dance Partner "}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"143","name":"Digital Media and Entertainment"},{"id":"148","name":"Music and Music Technology"},{"id":"151","name":"Policy, Social Sciences, and Liberal Arts"}],"keywords":[],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch 
Communications Manager\u003Cbr \/\u003E\r\nGVU Center and College of Computing\u003Cbr \/\u003E\r\n678.231.0787\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"624192":{"#nid":"624192","#data":{"type":"news","title":"ML@GT Announces Fall Seminar Series Speakers","body":[{"value":"\u003Cp\u003EEach semester, hundreds of students, faculty, and external guests are treated to talks by some of the world\u0026rsquo;s most renowned scientists. This fall, the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E will host five talks as a part of its Fall Seminar Series.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESpeakers come from industry and academia, giving attendees exposure to problems being solved by both entities. Talks touch on current topics in machine learning and artificial intelligence, applications for technologies, and related insights and experiences. Past speakers have included the likes of \u003Cstrong\u003EPieter Abbeel, \u003C\/strong\u003E\u003Ca href=\u0022http:\/\/bit.ly\/2MJtYbA\u0022\u003EMagic Leap\u0026rsquo;s\u003C\/a\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/bit.ly\/2MJtYbA\u0022\u003E Ashwin Swaminathan and Prateek Singhal\u003C\/a\u003E, Hugo Larochelle\u003C\/strong\u003E, and \u003Ca href=\u0022https:\/\/mlatgt.blog\/2019\/05\/06\/13-questions-with-manuela-veloso\/\u0022\u003E\u003Cstrong\u003EManuela Veloso.\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We are proud to be able to bring world-class researchers to our campus to further explore different areas of machine learning and artificial intelligence. Talks like these are important for continuing to grow the ML community and broadening the public\u0026rsquo;s awareness about where the field is headed. 
We\u0026rsquo;re looking forward to another great semester of exciting talks,\u0026rdquo; said \u003Cstrong\u003EIrfan Essa\u003C\/strong\u003E, director of ML@GT.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe series kicks off on Sept. 4 with \u003Cstrong\u003EGalen Reeves\u003C\/strong\u003E, an assistant professor from Duke University. Talks will be given every other Wednesday at 12:15 p.m. in the Marcus Nanotechnology Building unless otherwise noted. All talks are open to the public.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFall Seminar Series Schedule\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESept. 4 \u0026ndash; \u003Ca href=\u0022http:\/\/ml.gatech.edu\/events\/mlgt-fall-seminar-galen-reeves-duke-university\u0022\u003EGalen Reeves, Duke University\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESept. 18 \u0026ndash; \u003Ca href=\u0022http:\/\/ml.gatech.edu\/events\/mlgt-fall-seminar-chandrajit-bajaj-university-texas\u0022\u003EChandrajit Bajaj, University of Texas\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOct. 2 \u0026ndash; \u003Ca href=\u0022http:\/\/ml.gatech.edu\/events\/mlgt-seminar-vijay-subramamian-university-michigan\u0022\u003EVijay Subramanian, University of Michigan\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOct. 23 \u0026ndash; \u003Ca href=\u0022http:\/\/ml.gatech.edu\/events\/mlgt-fall-seminar-aleksandra-faust-google-brain-robotics\u0022\u003EAleksandra Faust, Google Brain Robotics\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENov.
20 \u0026ndash; Speaker to be announced soon\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor the most up-to-date information on the seminar series, visit \u003Ca href=\u0022http:\/\/ml.gatech.edu\/seminars\u0022\u003Ehttp:\/\/ml.gatech.edu\/seminars\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Machine Learning Center at Georgia Tech will host five speakers this fall for their fall seminar series."}],"uid":"34773","created_gmt":"2019-08-08 17:52:01","changed_gmt":"2019-08-09 14:25:07","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-09T00:00:00-04:00","iso_date":"2019-08-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"63128":{"id":"63128","type":"image","title":"Georgia Tech\u0027s Marcus Nanotechnology Building","body":null,"created":"1449176649","gmt_created":"2015-12-03 21:04:09","changed":"1475894552","gmt_changed":"2016-10-08 02:42:32","alt":"Georgia Tech\u0027s Marcus Nanotechnology Building","file":{"fid":"191745","name":"thx89611.jpg","image_path":"\/sites\/default\/files\/images\/thx89611_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/thx89611_0.jpg","mime":"image\/jpeg","size":1570424,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/thx89611_0.jpg?itok=9ZrKME0d"}}},"media_ids":["63128"],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["allie.mcfadden@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"623821":{"#nid":"623821","#data":{"type":"news","title":"Georgia Tech Faculty, Students, and Alumni Take Part in 41st Meeting of the Cognitive Science Society","body":[{"value":"\u003Cp\u003EMembers of the Georgia Tech research community were present last week at the \u003Ca href=\u0022https:\/\/cognitivesciencesociety.org\/cogsci-2019\/\u0022\u003E2019 Annual Meeting of the Cognitive Science Society\u003C\/a\u003E in Montreal, Canada. This year, the conference highlighted research on the theme \u003Cem\u003ECreativity+Cognition+Computation\u003C\/em\u003E, as well as the full breadth of research topics offered by the society\u0026rsquo;s membership.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMany Georgia Tech faculty, students, and alumni participated among the leadership for the conference.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EProfessor \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E served as the conference\u0026rsquo;s co-chair;\u003C\/li\u003E\r\n\t\u003Cli\u003EProfessor \u003Cstrong\u003EKeith McGreggor\u003C\/strong\u003E was the sponsorship chair;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EWendy Newstetter\u003C\/strong\u003E of the \u003Ca href=\u0022http:\/\/www.coe.gatech.edu\u0022\u003ECollege of Engineering\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/c21u.gatech.edu\/\u0022\u003ECenter for 21st Century Universities\u003C\/a\u003E served on the awards committee;\u003C\/li\u003E\r\n\t\u003Cli\u003EGeorgia Tech alum \u003Cstrong\u003EJim Davies\u003C\/strong\u003E was co-chair for publication-based talks;\u003C\/li\u003E\r\n\t\u003Cli\u003EGeorgia Tech alum \u003Cstrong\u003EMaithilee Kunda\u003C\/strong\u003E was co-chair for member abstracts;\u003C\/li\u003E\r\n\t\u003Cli\u003EGeorgia Tech alum \u003Cstrong\u003ESwaroop 
Vattam\u003C\/strong\u003E served on the workshops and tutorials committee.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E adjunct professors \u003Cstrong\u003EBrian Magerko\u003C\/strong\u003E and \u003Cstrong\u003EGil Weinberg\u003C\/strong\u003E, primarily of the \u003Ca href=\u0022https:\/\/www.iac.gatech.edu\/\u0022\u003EIvan Allen College of Liberal Arts\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/music.gatech.edu\/\u0022\u003ESchool of Music\u003C\/a\u003E, respectively, were also part of a panel on Creativity in the Arts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. student \u003Cstrong\u003ESungeun An\u003C\/strong\u003E presented a poster paper at the conference titled \u003Cem\u003ELearning by Doing: Supporting Experimentation in Inquiry-Driven Modeling\u003C\/em\u003E (Sungeun An, \u003Cstrong\u003ERobert Bates\u003C\/strong\u003E, \u003Cstrong\u003EJennifer Hammock\u003C\/strong\u003E, \u003Cstrong\u003ESpencer Rugaber\u003C\/strong\u003E, \u003Cstrong\u003EEmily Weigel\u003C\/strong\u003E, Ashok Goel), and Ph.D. student \u003Cstrong\u003EMarissa Gonzales\u003C\/strong\u003E presented another titled \u003Cem\u003EWhy are Some Online Education Programs Successful: Student Cognition and Success\u003C\/em\u003E (Marissa Gonzales, Ashok Goel).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information about this year\u0026rsquo;s conference and to stay up-to-date on news about future conferences, visit \u003Ca href=\u0022https:\/\/cognitivesciencesociety.org\/\u0022\u003Ehttps:\/\/cognitivesciencesociety.org\/\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"This year, the conference highlighted research on the theme Creativity+Cognition+Computation."}],"uid":"33939","created_gmt":"2019-07-30 16:34:53","changed_gmt":"2019-07-30 16:34:53","author":"David
Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-07-30T00:00:00-04:00","iso_date":"2019-07-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"623820":{"id":"623820","type":"image","title":"CogSci 2019","body":null,"created":"1564504438","gmt_created":"2019-07-30 16:33:58","changed":"1564504438","gmt_changed":"2019-07-30 16:33:58","alt":"CogSci 2019 banner","file":{"fid":"237591","name":"MontrealSideBanner-sm.jpg","image_path":"\/sites\/default\/files\/images\/MontrealSideBanner-sm.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/MontrealSideBanner-sm.jpg","mime":"image\/jpeg","size":191308,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/MontrealSideBanner-sm.jpg?itok=p_CgqQd5"}}},"media_ids":["623820"],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"623044":{"#nid":"623044","#data":{"type":"news","title":"Robot Able to Instantly Identify Household Materials Using Near-Infrared Light ","body":[{"value":"\u003Cp\u003ERobots aren\u0026rsquo;t yet household fixtures, but Georgia Tech researchers have already come up with a way domestic bots might recognize materials around the 
home.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing near-infrared light, similar to what\u0026rsquo;s used in TV remotes, the robot can identify common materials used in household objects to better inform its actions. This might allow intelligent machines to understand, for example, the right bowl (paper versus metal) to put in a microwave or how hard to grasp a cup made of glass versus plastic.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo classify materials, the researchers first determined hundreds of light wavelengths reflected from five common materials \u0026ndash; paper, wood, plastic, metal, and fabric. With this information, they trained a neural network on 10,000 examples in order to create a machine-learning (ML) model that could be used by a robot to quickly identify a material.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to the researchers, a robot using their new ML model can identify materials without it first having to touch an object, a useful function for handling potentially fragile items. To do so, the robot holds a small spectrometer near an object to get a quick light measurement, which is then processed to identify the material.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Robots currently use conventional cameras or haptic sensing - the sense of touch - to estimate a material type,\u0026rdquo; said \u003Cstrong\u003EZackory Erickson\u003C\/strong\u003E, the first author on the research paper detailing the new work and Georgia Tech robotics Ph.D. student.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is the first time that we know of that spectroscopy and machine learning have been used for material classification in robotics research, and our accuracy is on par with existing methods.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team\u0026rsquo;s new ML model yielded the best results using spectrometer measurements from near-infrared light. 
In fact, the accuracy was 99.9 percent with the full dataset of 10,000 measurements from 50 objects that the model had been trained on.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;While human eyes typically use three color receptors to see the world, our robot can be thought of as using hundreds of color receptors to recognize materials,\u0026rdquo; said \u003Cstrong\u003ECharlie Kemp\u003C\/strong\u003E, associate professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University and part of the research team. \u0026ldquo;Instead of a conventional color camera that measures red, green, and blue light, our robot uses a spectrometer that measures light at hundreds of different wavelengths, some outside of the range of human vision.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo see how results would compare using only a single light reading from each object, the team also trained the model on just 50 measurements, one from each object. Interestingly, accuracy in identifying the correct material only dropped to 95 percent. When using a spectrometer reading from objects the machine learning model had never seen, the robot still achieved an 81.6 percent success rate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Spectroscopy presents a reliable and effective way for robots to estimate materials of household objects,\u0026rdquo; Erickson said. 
\u0026ldquo;We\u0026rsquo;ve demonstrated how a robot can use near-infrared spectroscopy to infer the materials of everyday objects like cups, bowls, and garments.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research is published in the Proceedings of the 2019 International Conference on Robotics and Automation (ICRA) in the paper titled \u003Cem\u003EClassification of Household Materials via Spectroscopy\u003C\/em\u003E co-authored by \u003Ca href=\u0022http:\/\/zackory.com\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EZackory Erickson\u003C\/strong\u003E\u003C\/a\u003E, \u003Cstrong\u003ENathan Luskey\u003C\/strong\u003E, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~chernova\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E\u003C\/a\u003E, and \u003Ca href=\u0022http:\/\/charliekemp.com\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ECharlie Kemp\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more Georgia Tech research published at ICRA, as well as the entire conference program,\u0026nbsp;explore this \u003Ca href=\u0022https:\/\/public.tableau.com\/shared\/J22YXRJXM?:display_count=yes\u0026amp;:origin=viz_share_link\u0026amp;:showVizHome=no\u0022 target=\u0022_blank\u0022\u003Einteractive visualization\u003C\/a\u003E\u0026nbsp;from the GVU Center at Georgia Tech.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"No Contact is Required with Objects by Using Inexpensive, Handheld \u0027Light-Reading\u0027 Device"}],"field_summary":"","field_summary_sentence":[{"value":"Robots aren\u2019t yet household fixtures, but Georgia Tech researchers have already come up with a way domestic bots might recognize materials around the home."}],"uid":"27592","created_gmt":"2019-07-08 17:32:11","changed_gmt":"2019-07-17 20:55:36","author":"Joshua 
Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-07-08T00:00:00-04:00","iso_date":"2019-07-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"623045":{"id":"623045","type":"image","title":"Robot Classifies Materials of Household Objects Using \u0027Light-Reading\u0027 Device","body":null,"created":"1562609057","gmt_created":"2019-07-08 18:04:17","changed":"1562609089","gmt_changed":"2019-07-08 18:04:49","alt":"","file":{"fid":"237265","name":"Robot classifies materials of household objects.png","image_path":"\/sites\/default\/files\/images\/Robot%20classifies%20materials%20of%20household%20objects.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Robot%20classifies%20materials%20of%20household%20objects.png","mime":"image\/png","size":1829642,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Robot%20classifies%20materials%20of%20household%20objects.png?itok=9t0CWmic"}}},"media_ids":["623045"],"related_links":[{"url":"https:\/\/www.youtube.com\/watch?v=fBv_xEai2AU","title":"VIDEO: Watch how GT researchers are bringing domestic bots one step closer to reality"},{"url":"https:\/\/www.spreaker.com\/user\/10751784\/tu-ep5-robot-instantly-identifies-materials","title":"Tech Unbound Podcast EP5: Robot Able to Instantly Identify Household Materials Without Touching Objects"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"667","name":"robotics"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr 
\/\u003E\r\nResearch Communications Manager, GVU Center\u003Cbr \/\u003E\r\n678.231.0787\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"582978":{"#nid":"582978","#data":{"type":"news","title":"CIC Returns With Three New Categories for Fall Semester","body":[{"value":"\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/cic.gatech.edu\u0022 target=\u0022_blank\u0022\u003EConvergence Innovation Competition (CIC)\u003C\/a\u003E is back for another semester, and we\u0026rsquo;re looking for innovative student ideas in three new categories.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nThe CIC, produced by IPaT and the \u003Ca href=\u0022http:\/\/rnoc.gatech.edu\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech Research Network Operations Center (GT-RNOC)\u003C\/a\u003E, is a bi-annual competition dedicated to helping students create products and experiences with the support of campus resources and industry sponsors. The Fall competition is campus-focused, and categories are determined by our campus partners. 
Categories for the Fall 2016 competition, which are aligned with IPaT\u0026rsquo;s research priorities, include:\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003ELifelong Health and Wellbeing\u003C\/strong\u003E\u003Cbr \/\u003E\r\nEntries should focus on new or reimagined solutions for patients, communities, and\/or those involved in the continuum of care (caregivers, doctors, hospitals, insurers, employers).\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003ESmart Cities and Healthy Communities\u003C\/strong\u003E\u003Cbr \/\u003E\r\nEntries should focus on solutions for individuals, communities, business and community stakeholders, and government service providers.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003ESocio-Technical Systems and Human-Technology Frontier Innovation\u003C\/strong\u003E\u003Cbr \/\u003E\r\nEntries will demonstrate new platforms, services, and devices ranging from the Internet of Things (IoT) and Software Defined Networking (SDN) to automotive and wearable computing devices, mixed and augmented reality, data science and analytics, and collaboration and communication tools.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nWhile the CIC is not tied to any specific Georgia Tech course, students are often able to take advantage of class partnerships where lecture and lab content and projects are aligned with competition categories. 
GT-RNOC research assistants provide technical support and guide teams through the competition process.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nCIC entries are due on November 11th; teams will create a project name, logo and webpage, plus a supporting video that demonstrates their project in action.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u0026quot;Finalists in the CIC are judged across multiple criteria, and winning projects showcase innovation, user experience and viability in the real world,\u0026quot; said Siva Jayaraman,\u0026nbsp;IPaT Strategic Partnerships Manager.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nFinalists will present their projects on November 16th at a demo and judging event held at IPaT. Past CIC winners have gone on to commercialization, other competitions, as well as internship and job opportunities strengthened by their competition experience.\u0026nbsp;To learn more about the CIC, including how to submit your project or become a sponsor, visit the competition website at \u003Ca href=\u0022http:\/\/cic.gatech.edu\u0022 target=\u0022_blank\u0022\u003Ecic.gatech.edu\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe Convergence Innovation Competition (CIC) is back for another semester, and we\u0026rsquo;re looking for innovative student ideas in three new categories.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"The Convergence Innovation Competition (CIC) is back for another semester, and we\u2019re looking for innovative student ideas in three new categories."}],"uid":"27980","created_gmt":"2016-10-24 14:29:40","changed_gmt":"2019-07-11 13:13:39","author":"Alyson 
Key","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2016-10-24T00:00:00-04:00","iso_date":"2016-10-24T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"582976":{"id":"582976","type":"image","title":"Fall 2016 Convergence Innovation Competition","body":null,"created":"1477319200","gmt_created":"2016-10-24 14:26:40","changed":"1477319200","gmt_changed":"2016-10-24 14:26:40","alt":"","file":{"fid":"222234","name":"cic-banner-ipat.jpg","image_path":"\/sites\/default\/files\/images\/cic-banner-ipat.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/cic-banner-ipat.jpg","mime":"image\/jpeg","size":521000,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/cic-banner-ipat.jpg?itok=b35l7JMi"}}},"media_ids":["582976"],"groups":[{"id":"69599","name":"IPaT"},{"id":"1299","name":"GVU Center"}],"categories":[{"id":"129","name":"Institute and Campus"},{"id":"42901","name":"Community"},{"id":"133","name":"Special Events and Guest Speakers"},{"id":"134","name":"Student and Faculty"},{"id":"8862","name":"Student Research"},{"id":"135","name":"Research"}],"keywords":[{"id":"63931","name":"CIC"},{"id":"63951","name":"Convergence Innovation Competition"},{"id":"181703","name":"HTF"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlyson Powell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer, Institute for People and Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003Ealyson.powell@ipat.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"623011":{"#nid":"623011","#data":{"type":"news","title":"IC\u0027s Dhruv Batra Named PECASE Winner, One of Three at Georgia 
Tech","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Assistant Professor \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E was awarded the prestigious Presidential Early Career Award for Scientists and Engineers (PECASE) on Wednesday in an announcement by President Donald Trump. The PECASE is the highest honor bestowed by the United States government on outstanding scientists and engineers beginning independent research careers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBatra is one of three Georgia Tech faculty members this year to earn the award, giving the Institute a total of 18 in its history. The other two awardees in this class are Associate Professor Mark Davenport of the School of Electrical and Computer Engineering and Assistant Professor Matthew McDowell of the School of Materials Science and Engineering.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlong with the Department of Defense, the White House Office of Science and Technology Policy will provide $1 million over the course of five years to support Batra\u0026rsquo;s research to make artificial intelligence (AI) systems more transparent, explainable, and trustworthy. The award comes as a result of Batra\u0026rsquo;s selection for a similar early-career award by the Army Research Office Young Investigator Program in 2014.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research Batra\u0026rsquo;s lab will pursue with the funding addresses a fundamental challenge in the development of AI systems \u0026ndash; their \u0026ldquo;black-box\u0026rdquo; nature, the consequent difficulty humans face in identifying why or how AI systems fail, and how to improve upon those technologies. When a self-driving car from a major tech company, for example, suffered its first fatality in 2015, legal and regulatory agencies understandably questioned what went wrong. 
The challenge at the time was providing a sufficient answer to that question.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Your response can\u0026rsquo;t just be, \u0026lsquo;Well, there was this machine learning box in there, and it just didn\u0026rsquo;t detect the car. We don\u0026rsquo;t know why,\u0026rsquo;\u0026rdquo; Batra said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBatra\u0026rsquo;s research aims to create AI systems that can more readily explain what they do and why. This could come in the form of natural language or visual explanations, both of which \u0026ndash; computer vision and natural language processing \u0026ndash; are central areas of focus in Batra\u0026rsquo;s lab. The machine could, for example, identify regions in an image that provide support for its predictions, potentially assisting a user\u0026rsquo;s understanding of what the machine can or cannot do.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s an important area of study for a few reasons, Batra said. He classifies AI technology into three levels of maturity:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003ELevel 1 is technology that is in its infancy. It is not near deployment to everyday users, and the consumers of the technology are researchers. The goal for transparency and explanation is to help researchers and developers understand the failure modes and current limitations, and deduce how to improve the technology \u0026ndash; \u0026ldquo;actionable insight,\u0026rdquo; as Batra called it.\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003ELevel 2 is when things are working to a degree, enough so that the technology can and has been deployed.\u003Cbr \/\u003E\r\n\t\u003Cbr \/\u003E\r\n\t\u0026ldquo;The technology may be mature in a narrow range, and you can ship the product,\u0026rdquo; Batra said. \u0026ldquo;Like face detection or fingerprint technology. 
It\u0026rsquo;s built into products and being used at agencies, airports, or other places.\u0026rdquo;\u003Cbr \/\u003E\r\n\t\u003Cbr \/\u003E\r\n\tIn such cases, you want explanations and interpretability that help build appropriate trust with users. Users can understand when the system reliably works and when it might not work \u0026ndash; face detection in bad lighting, for example \u0026ndash; and make efforts to use it in a more appropriate setting.\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003ELevel 3 is typically a fairly narrow category where the AI is better \u0026ndash; sometimes significantly so \u0026ndash; than the human. Batra used chess-playing and Go-playing bots as an example. The best chess-playing bots convincingly outperform the best humans and reliably hand a resounding defeat to the average human player.\u003Cbr \/\u003E\r\n\t\u003Cbr \/\u003E\r\n\t\u0026ldquo;We already know bots play much better than humans,\u0026rdquo; he said. \u0026ldquo;In such cases, you don\u0026rsquo;t need to improve the machine and you already trust its skill level. You want the machine to give you explanations not so that you can improve the AI, but so that you can improve yourself.\u0026rdquo;\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EBatra envisions scenarios where the techniques his lab develops could assist at all three levels, but the experiments will take place between Levels 1 and 2. They will work in Visual Question Answering, which involves agents that answer natural language questions about visual content, and other areas of maturity that may reach the product level in five or more years.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBatra has served as an assistant professor at Georgia Tech since Fall 2016. 
\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dbatra\/\u0022\u003EVisit his website for more information about his research.\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The PECASE is the highest honor bestowed by the United States government to outstanding scientists and engineers beginning independent research careers."}],"uid":"33939","created_gmt":"2019-07-05 16:18:17","changed_gmt":"2019-07-05 16:18:17","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-07-05T00:00:00-04:00","iso_date":"2019-07-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"586461":{"id":"586461","type":"image","title":"Dhruv Batra","body":null,"created":"1485377710","gmt_created":"2017-01-25 20:55:10","changed":"1485377710","gmt_changed":"2017-01-25 20:55:10","alt":"","file":{"fid":"223509","name":"DhruvBatra.jpg","image_path":"\/sites\/default\/files\/images\/DhruvBatra.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/DhruvBatra.jpg","mime":"image\/jpeg","size":82240,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/DhruvBatra.jpg?itok=D762Jyi-"}}},"media_ids":["586461"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"622864":{"#nid":"622864","#data":{"type":"news","title":"IC Researchers Earn 2018 IJRR Paper of the Year for Impactful Robotics Research","body":[{"value":"\u003Cp\u003EA paper published in the \u003Cem\u003EI\u003Ca href=\u0022http:\/\/www.ijrr.org\/\u0022\u003Enternational Journal of Robotics Research\u003C\/a\u003E\u003C\/em\u003E (IJRR) by researchers in the \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) was selected as the 2018 IJRR Paper of the Year.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChosen from a shortlist considered by the IJRR Executive Committee, the paper, \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1707.07383\u0022\u003E\u003Cem\u003EContinuous-time Gaussian Process Motion Planning via Probabilistic Inference\u003C\/em\u003E\u003C\/a\u003E, was recognized for its technical rigor, relevance, and potential for impact in the robotics research community. The research comes from IC Ph.D. students \u003Cstrong\u003EMustafa Mukadam\u003C\/strong\u003E and \u003Cstrong\u003EJing Dong\u003C\/strong\u003E, master\u0026rsquo;s student \u003Cstrong\u003EXinyan Yan\u003C\/strong\u003E, and advisors Professor \u003Cstrong\u003EFrank Dellaert\u003C\/strong\u003E and Assistant Professor \u003Cstrong\u003EByron Boots\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis paper introduces a novel formulation of motion planning that treats the problem of finding an efficient, feasible path between two points as probabilistic inference with Gaussian Processes. Motion planning is a hard problem, and state-of-the art sampling-based and trajectory optimization algorithms have well-known drawbacks. 
The former can effectively find feasible trajectories but often exhibits jerky and redundant motion, and the latter requires a fine approximation of the trajectory to reason about thin obstacles or tight constraints.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn their paper, the team of researchers adopts a continuous-time representation of trajectories, viewing them as functions that map time to robot state. Combining this representation with fast approaches to probabilistic inference, they developed a computationally efficient gradient-based optimization algorithm called a Gaussian Process Motion Planner that can overcome large computational costs associated with fine discretization, while still maintaining smoothness of motion in the result.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith the award comes a $1,000 prize. Boots attended the \u003Ca href=\u0022http:\/\/www.roboticsconference.org\/\u0022\u003ERobotics: Science and Systems\u003C\/a\u003E (RSS) conference in Freiburg, Germany, this week, where he accepted the award on behalf of his team.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother paper involving Boots was also awarded a Best Student Paper Award at RSS. Titled \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1902.08967\u0022\u003E\u003Cem\u003EAn Online Learning Approach to Model Predictive Control\u003C\/em\u003E\u003C\/a\u003E, the paper was written by Robotics Ph.D. students \u003Cstrong\u003ENolan Wagener\u003C\/strong\u003E, \u003Cstrong\u003EChing-An Cheng\u003C\/strong\u003E, and \u003Cstrong\u003EJacob Sacks\u003C\/strong\u003E, along with Boots.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt shows that there exists a close connection between model predictive control (MPC), a popular technique for solving dynamic control tasks, and online learning, an abstract theoretical framework for analyzing online decision making. This new perspective provides a foundation for leveraging powerful online learning algorithms to design MPC algorithms. 
Toward this end, the researchers propose a generic framework for synthesizing new MPC algorithms called Dynamic Mirror Descent Model Predictive Control.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe framework exposes key design choices that can help practitioners easily develop new control algorithms tailored to the challenges of their specific task. The approach is validated by developing new MPC algorithms that consistently match or outperform the state-of-the-art on several tasks including an aggressive driving problem with the goal of racing an autonomous car around a dirt track under computational resource constraints.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"With the award comes a $1,000 prize. Boots attended the Robotics: Science and Systems (RSS) conference in Freiburg, Germany, this week, where he accepted the award on behalf of his team."}],"uid":"33939","created_gmt":"2019-06-28 21:45:13","changed_gmt":"2019-06-28 21:45:13","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-06-28T00:00:00-04:00","iso_date":"2019-06-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622863":{"id":"622863","type":"image","title":"IJRR Paper of the Year","body":null,"created":"1561757769","gmt_created":"2019-06-28 21:36:09","changed":"1561757769","gmt_changed":"2019-06-28 21:36:09","alt":"Byron Boots accepts the IJRR Paper of the Year Award at RSS 2019","file":{"fid":"237211","name":"IJRR Paper of the 
Year.jpeg","image_path":"\/sites\/default\/files\/images\/IJRR%20Paper%20of%20the%20Year.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IJRR%20Paper%20of%20the%20Year.jpeg","mime":"image\/jpeg","size":214672,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IJRR%20Paper%20of%20the%20Year.jpeg?itok=3Ab2q1qk"}},"622862":{"id":"622862","type":"image","title":"RSS Best Student Paper","body":null,"created":"1561757679","gmt_created":"2019-06-28 21:34:39","changed":"1561757679","gmt_changed":"2019-06-28 21:34:39","alt":"A team of researchers accepts the Best Student Paper award at RSS 2019","file":{"fid":"237210","name":"RSS Best Student Paper.jpeg","image_path":"\/sites\/default\/files\/images\/RSS%20Best%20Student%20Paper.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/RSS%20Best%20Student%20Paper.jpeg","mime":"image\/jpeg","size":181785,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/RSS%20Best%20Student%20Paper.jpeg?itok=kKN86uwy"}}},"media_ids":["622863","622862"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/content\/robotics-computational-perception","title":"Robotics and Computational Perception Research at Georgia Tech"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181602","name":"ic-robotics"},{"id":"181216","name":"cc-research"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"622859":{"#nid":"622859","#data":{"type":"news","title":"Georgia Tech Team Wins New Fetch Robot at ICRA\u0027s FetchIt! Mobile Manipulation Challenge","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~chernova\/\u0022\u003E\u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E\u003C\/a\u003E\u0026rsquo;s \u003Ca href=\u0022http:\/\/www.rail.gatech.edu\/\u0022\u003ERobot Autonomy and Interactive Learning\u003C\/a\u003E (RAIL) lab is adding a new member this summer after a successful foray into the \u003Ca href=\u0022https:\/\/opensource.fetchrobotics.com\/competition\u0022\u003E\u003Cem\u003EFetchIt!\u003C\/em\u003E\u003Cem\u003E Mobile Manipulation Challenge\u003C\/em\u003E\u003C\/a\u003E at the \u003Ca href=\u0022https:\/\/www.icra2019.org\/\u0022\u003EInternational Conference on Robotics and Automation\u003C\/a\u003E (ICRA) last month.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA team of Georgia Tech master\u0026rsquo;s and Ph.D. students, advised by Chernova, won the challenge by successfully assembling three kits with its robot in 39 minutes. It was the only team in the competition to complete the task, with the second-place finisher failing to score a point.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor its victory, the RAIL lab will receive a new mobile manipulation robot from Fetch Robotics, its second. Along with the other robots already in the lab\u0026rsquo;s possession, the newcomer will provide RAIL researchers new opportunities to pursue multi-robot applications. 
The prize package also includes items from the event\u0026rsquo;s co-sponsors EandM Robotics, Schunk, SICK Sensor Intelligence, and The Construct, to go with the $100,000 robot.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E[VIDEO::https:\/\/youtu.be\/G_ur71h4CNQ]\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is a long-term benefit,\u0026rdquo; said Chernova, an associate professor in the \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E. \u0026ldquo;This is one of the most capable mobile manipulation platforms out there, and to now have two of them will enable us to enhance the capabilities of the robot and pursue new lines of research in our lab.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe allure of a new state-of-the-art robot would be enough to entice most teams to take part in the competition, but for Chernova and her participating students it was more about the opportunity to explore specific applications that aligned with their research initiatives, past and present.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe lab has done past work in grasping, semantic reasoning and mapping, and fault diagnosis, the latter of which has become a focus over the past six months. The competition, Ph.D. student \u003Cstrong\u003EDavid Kent\u003C\/strong\u003E said, came at a good time because the particular challenges it presented are often in this domain.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This particular setup was particularly challenging because there was just enough variability where it wasn\u0026rsquo;t going to work every time,\u0026rdquo; he said. \u0026ldquo;There would always be something going wrong, so fault recovery ended up being very central.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo win the competition, not only did Georgia Tech\u0026rsquo;s team have to come in first place, it had to do so by scoring at least 14 points. 
To put that into context, Georgia Tech was the only team in the competition to finish with any points. Teams scored points by successfully collecting items laid out at different stations to assemble three kits. They were awarded eight points for each completed kit. Any kit that was missing a piece, however, resulted in zero points awarded, and any kit with extra pieces would have points deducted.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you drop one screw along the way and you don\u0026rsquo;t notice \u0026ndash; which is actually very easy to do \u0026ndash; you go away with nothing,\u0026rdquo; Chernova said. \u0026ldquo;In the real world, a partial kit is useless.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech achieved its first 15 points and elected to complete its third kit without official scoring to ensure it wouldn\u0026rsquo;t drop below the threshold needed to win the robot. Officially the team scored 15, but a completed third kit gave it an unofficial 23 points, after bonuses were added.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was a lot of fun to be able to work with my lab on a single project and see it come together,\u0026rdquo; said Ph.D. student \u003Cstrong\u003EWeiyu Liu\u003C\/strong\u003E, another member of the team. \u0026ldquo;It was a really great opportunity to try out some of the code we had written and also to see others\u0026rsquo; code and other research projects.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlready, the team has turned the experience into a submitted paper, which they hope to have accepted and published in the future. The focus is on mobile manipulation, which is a particularly challenging aspect of robotics because of what Chernova calls \u0026ldquo;an explosion of uncertainty.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Manipulation in many ways is a solved problem,\u0026rdquo; she said. \u0026ldquo;Navigation in many ways is a solved problem. 
When you put those two solved problems together, though \u0026ndash; when you take the wheels and put the arm on it \u0026ndash; it becomes a much more challenging problem, one our research will continue to tackle with the aid of Fetch in the coming years.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMembers of the team included: Chernova, Kent, Liu, \u003Cstrong\u003ESiddhartha Banerjee\u003C\/strong\u003E, \u003Cstrong\u003EAngel Daruna\u003C\/strong\u003E, \u003Cstrong\u003EJonathan Balloch\u003C\/strong\u003E, \u003Cstrong\u003EAbhinav Jain\u003C\/strong\u003E, \u003Cstrong\u003EAkshay Krishnan\u003C\/strong\u003E, \u003Cstrong\u003EMuhammad Asif Rana\u003C\/strong\u003E, \u003Cstrong\u003EHarish Ravichandar\u003C\/strong\u003E, \u003Cstrong\u003EBinit Shah\u003C\/strong\u003E, and \u003Cstrong\u003ENithin Shrivatsav\u003C\/strong\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A team of Georgia Tech master\u2019s and Ph.D. students, advised by Sonia Chernova, won the challenge by successfully assembling three kits with its robot in 39 minutes."}],"uid":"33939","created_gmt":"2019-06-28 19:55:14","changed_gmt":"2019-06-28 19:55:14","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-06-28T00:00:00-04:00","iso_date":"2019-06-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622858":{"id":"622858","type":"image","title":"Georgia Tech FetchIt! 
Win","body":null,"created":"1561750984","gmt_created":"2019-06-28 19:43:04","changed":"1561750984","gmt_changed":"2019-06-28 19:43:04","alt":"The Georgia Tech RAIL lab celebrates a win in the FetchIt Mobile Manipulation Challenge at ICRA","file":{"fid":"237208","name":"Fetch.jpeg","image_path":"\/sites\/default\/files\/images\/Fetch.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Fetch.jpeg","mime":"image\/jpeg","size":296705,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Fetch.jpeg?itok=dXBkErUl"}}},"media_ids":["622858"],"related_links":[{"url":"http:\/\/rail.gatech.edu","title":"Robot Autonomy and Interactive Learning Lab"},{"url":"https:\/\/www.ic.gatech.edu\/content\/robotics-computational-perception","title":"Robotics and Computational Perception Research at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181602","name":"ic-robotics"},{"id":"181216","name":"cc-research"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"622523":{"#nid":"622523","#data":{"type":"news","title":"IC Researchers Awarded Outstanding Study Design Paper Award at ICWSM-19","body":[{"value":"\u003Cp\u003EA team of researchers 
that included individuals from Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E was awarded the Outstanding Study Design Paper award at the \u003Ca href=\u0022https:\/\/www.icwsm.org\/2019\/index.php\u0022\u003EInternational AAAI Conference on Web and Social Media\u003C\/a\u003E (ICWSM 2019) this week in Munich, Germany.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper, titled \u003Cem\u003E\u003Ca href=\u0022http:\/\/www.munmund.net\/pubs\/ICWSM19_DrugEffects.pdf\u0022\u003EA Social Media Study on the Effects of Psychiatric Medication Use\u003C\/a\u003E\u003C\/em\u003E, was presented by IC Ph.D. student \u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E and included fellow IC Ph.D. student \u003Cstrong\u003EBenjamin Sugar\u003C\/strong\u003E and IC Assistant Professor \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E. Collaborators from Microsoft Research, Harvard Medical School, and New York University-Shanghai were also involved with the research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research addresses a challenge in understanding the effects of psychiatric medications during mental health treatment. While clinical trials help to evaluate effects of the medication, there are challenges in generalizing trials to broader populations. 
Using a list of common approved and regulated psychiatric medications and a Twitter dataset of 300 million posts from 30,000 individuals, researchers developed machine learning models to first assess effects relating to mood, cognition, depression, anxiety, psychosis, and suicidal ideation, and then, based on a score, observe how the use of specific drugs is associated with characteristic changes in an individual\u0026rsquo;s psychopathology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe goal of this research is a deeper understanding of these effects and how to situate them alongside treatment outcomes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EICWSM is a forum for researchers from multiple disciplines to come together to share knowledge, discuss ideas, exchange information, and learn about cutting-edge research in diverse fields with the common theme of online social media. This includes social theories, as well as computational algorithms for analyzing social media. In its 13\u003Csup\u003Eth\u003C\/sup\u003E year of existence, the conference has become one of the premier venues for computational social science.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The paper, titled A Social Media Study on the Effects of Psychiatric Medication Use, was presented by IC Ph.D. student Koustuv Saha and included fellow IC Ph.D. 
student Benjamin Sugar and IC Assistant Professor Munmun De Choudhury."}],"uid":"33939","created_gmt":"2019-06-14 19:51:15","changed_gmt":"2019-06-14 19:51:15","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-06-14T00:00:00-04:00","iso_date":"2019-06-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622522":{"id":"622522","type":"image","title":"Koustuv Saha ICWSM","body":null,"created":"1560541641","gmt_created":"2019-06-14 19:47:21","changed":"1560541641","gmt_changed":"2019-06-14 19:47:21","alt":"Koustuv Saha presents a paper at ICWSM","file":{"fid":"237101","name":"Screen Shot 2019-06-14 at 3.46.50 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-06-14%20at%203.46.50%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-06-14%20at%203.46.50%20PM.png","mime":"image\/png","size":769188,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-06-14%20at%203.46.50%20PM.png?itok=adXluiTA"}}},"media_ids":["622522"],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181216","name":"cc-research"},{"id":"181214","name":"ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"622225":{"#nid":"622225","#data":{"type":"news","title":"ICML 2019: Georgia Tech Researchers Present at Global Machine Learning Conference","body":[{"value":"\u003Cp\u003EThis year, Long Beach, Calif., will host the \u003Ca href=\u0022https:\/\/icml.cc\/Conferences\/2019\u0022\u003EThirty-Sixth International Conference on Machine Learning (ICML)\u003C\/a\u003E. The conference is the premier gathering for artificial intelligence (AI) professionals who specialize in the branch of AI known as machine learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech researchers will present 18 research papers at this year\u0026rsquo;s event. The papers touch on a variety of aspects of machine learning, including \u003Ca href=\u0022https:\/\/mlatgt.blog\/2019\/05\/29\/mixing-frank-wolfe-and-gradient-descent\/?utm_source=mailchimp\u0026amp;utm_campaign=030010e6e1f0\u0026amp;utm_medium=page\u0022\u003Eblended unconditional gradients\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/scs.gatech.edu\/news\/622219\/new-machine-learning-algorithms-keep-group-data-diverse\u0022\u003Eclustering with fairness constraints\u003C\/a\u003E, and \u003Ca href=\u0022http:\/\/ml.gatech.edu\/hg\/item\/622215\u0022\u003Eobservational agents\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E assistant professor \u003Cstrong\u003EByron Boots\u003C\/strong\u003E is a 2019 area chair. 
Boots is also the co-organizer of the \u003Cem\u003EReal-World Sequential Decision Making: Reinforcement Learning and Beyond\u003C\/em\u003E workshop and a guest speaker at the \u003Cem\u003EGenerative Modeling and Model-Based Reasoning for Robotics and AI\u003C\/em\u003E workshop.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;ICML is globally renowned as one of the best conferences for machine learning research. Year after year, cutting-edge research is presented and published, and it\u0026rsquo;s a sign of ML@GT\u0026rsquo;s strength that Georgia Tech is consistently a top contributor in the accepted papers,\u0026rdquo; said \u003Cstrong\u003EJustin Romberg\u003C\/strong\u003E, \u003Ca href=\u0022https:\/\/www.ece.gatech.edu\/\u0022\u003ESchool of Electrical and Computer Engineering\u003C\/a\u003E Schlumberger Professor and associate director of the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHosted June 9 through 15 at the Long Beach Convention and Entertainment Center, ICML is one of the fastest-growing conferences in the world. 
It will bring together over 8,000 participants including entrepreneurs, engineers, graduate students, postdocs, and academic and industrial researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlong with Georgia Tech papers, other accepted papers will include work in closely related fields like statistics, data science, and artificial intelligence, and important application areas like speech recognition, robotics, and machine vision.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor a full list of Georgia Tech\u0026rsquo;s research papers and more information about Georgia Tech\u0026rsquo;s presence at the conference, please \u003Ca href=\u0022http:\/\/bit.ly\/ICML2019\u0022\u003Eclick here.\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech will present 18 papers at the International Conference on Machine Learning."}],"uid":"34773","created_gmt":"2019-06-04 17:47:27","changed_gmt":"2019-06-04 17:47:27","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-06-04T00:00:00-04:00","iso_date":"2019-06-04T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622050":{"id":"622050","type":"image","title":"ICML 2019","body":null,"created":"1559143704","gmt_created":"2019-05-29 15:28:24","changed":"1559143704","gmt_changed":"2019-05-29 15:28:24","alt":"ICML 2019","file":{"fid":"236944","name":"icml2019.jpg","image_path":"\/sites\/default\/files\/images\/icml2019.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/icml2019.jpg","mime":"image\/jpeg","size":1680878,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/icml2019.jpg?itok=NwUHkeE6"}}},"media_ids":["622050"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50875","name":"School of Computer 
Science"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["allie.mcfadden@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"621531":{"#nid":"621531","#data":{"type":"news","title":"Two Georgia Tech Alums Receive Prestigious Awards at CHI 2019","body":[{"value":"\u003Cp\u003ETwo former \u003Ca href=\u0022http:\/\/www.gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E students were recognized by the CHI community this week in Glasgow, U.K., one for her overall contributions in human-computer interaction at the conference and another for her long history of promoting social action within the community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJennifer Mankoff\u003C\/strong\u003E, one of the first of Professor \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E\u0026rsquo;s 30 Ph.D. graduates in 2001, was inducted into the prestigious CHI Academy this week, and \u003Cstrong\u003EGillian Hayes\u003C\/strong\u003E (2007), also advised by Abowd, was awarded the Social Impact award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMankoff, who was Abowd\u0026rsquo;s third-ever Ph.D. graduate, joined an exclusive community that includes eight Georgia Tech faculty members. Most recently, Professor \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E was \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/602715\/professor-amy-bruckman-joins-seven-other-ic-faculty-chi-academy\u0022\u003Einducted a year ago\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMankoff credited the mentors she had along the way, like Abowd, with giving her that opportunity. 
Abowd provided the introduction for Mankoff at the awards ceremony for the CHI academy. She credited her research community and the CHI community for giving her the freedom to pursue the kind of research that she was passionate about.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The openness to let people be able to work on whatever they\u0026rsquo;re passionate about and see that has value is something that\u0026rsquo;s been important to me over the years,\u0026rdquo; Mankoff said. \u0026ldquo;More than once, I\u0026rsquo;ve shifted to another area that I wasn\u0026rsquo;t working in before and maybe a lot of others weren\u0026rsquo;t either. It\u0026rsquo;s a sign of how open the community is.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeated at a reunion\u0026nbsp;party for the Abowd \u0026ldquo;family\u0026rdquo; \u0026ndash; academics who were part of a lineage that began as doctoral students in Abowd\u0026rsquo;s lab \u0026ndash; she noted the importance of having a vibrant community like that.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We were very lucky to be there at the beginning, helping to form his group and to learn from him and all the energy he brings to this group,\u0026rdquo; she said. \u0026ldquo;It\u0026rsquo;s one of the strongest networks I have at CHI.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHayes received her Social Impact award just 12 years after Abowd received his own in 2007. 
She said it was an especially proud honor to have the distinction of following in the footsteps of her advisor.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The way he has instilled in us an ethos of being able to give back, being able to bake in community outcomes with our research outcomes and define good, interesting research problems that also really solve real-world problems, and work in partnership with communities,\u0026rdquo; Hayes said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHayes, whose 30-minute talk at the conference focused on ways in which the community needed to do better in thinking about issues of accessibility, access, racial and gender inequities, and much more, said she thought the CHI community was leading the way as a standard-bearer for diversity, inclusion, and service.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;But we still have a long way to go,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHer talk, she hoped, would be a call to action to the rest of the community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is our time, and we can control our destinies and we can create truly community-driven innovation,\u0026rdquo; she said.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Jennifer Mankoff, one of Professor Gregory Abowd\u2019s first of 30 Ph.D graduates in 2001, was inducted into the prestigious CHI Academy this week, and Gillian Hayes (2007), also advised by Abowd, was awarded the Social Impact award."}],"uid":"33939","created_gmt":"2019-05-08 22:03:04","changed_gmt":"2019-05-08 22:03:04","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-05-08T00:00:00-04:00","iso_date":"2019-05-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"621530":{"id":"621530","type":"image","title":"CHI Awards 
2019","body":null,"created":"1557352606","gmt_created":"2019-05-08 21:56:46","changed":"1557352606","gmt_changed":"2019-05-08 21:56:46","alt":"Jennifer Mankoff, Gregory Abowd, and Gillian Hayes smiling","file":{"fid":"236742","name":"Awards CHI.jpg","image_path":"\/sites\/default\/files\/images\/Awards%20CHI.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Awards%20CHI.jpg","mime":"image\/jpeg","size":131241,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Awards%20CHI.jpg?itok=s9avGS1d"}}},"media_ids":["621530"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"621184":{"#nid":"621184","#data":{"type":"news","title":"IC Researchers Seek to Improve Treatment for Schizophrenia Under New $2.7 Million NIMH Grant","body":[{"value":"\u003Cp\u003EFor the past few years, Georgia Tech School of Interactive Computing Assistant Professor \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E has pursued research that gathers insights about mental health through digital traces individuals leave behind on social media.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUnder a new $2.7 million grant from the \u003Ca 
href=\u0022https:\/\/www.nimh.nih.gov\/index.shtml\u0022\u003ENational Institute of Mental Health\u003C\/a\u003E (NIMH), she and a team of researchers at \u003Ca href=\u0022https:\/\/www.northwell.edu\/\u0022\u003ENorthwell Health\u003C\/a\u003E will apply that new information in a clinical setting in hopes of improving treatment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In our past research, we have gained a number of new insights, but I see an opportunity to influence real world people and outcomes,\u0026rdquo; De Choudhury said. \u0026ldquo;Going beyond just academic and empirical findings, how do you take that information and make a difference in people\u0026rsquo;s lives? What research challenges do such translations pose to the computing domain?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis grant offers the researchers that opportunity. It will be one of the first in which computing researchers and leading experts in psychiatry research are coming together to influence how treatment can be delivered by harnessing patient-contributed data. The grant is funded through a new NIMH program designed to inform and support delivery of high-quality mental health services.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe idea is to build machine learning algorithms based on data that mental health patients voluntarily share with the research team, including both clinicians at Northwell Health and researchers in De Choudhury\u0026rsquo;s lab at \u003Ca href=\u0022http:\/\/www.gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E. With these algorithms, they hope to identify risk markers and symptom changes that appear in social media posts and to track changes and trends in an individual over time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy combining a number of different social media sources, primarily Facebook and Twitter, they will look at the use of words or patterns of words an individual uses. 
In mental illnesses like schizophrenia, the main population they will explore, that is important information to know.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If they are feeling delusional or experiencing paranoia, what is it that they are saying?\u0026rdquo; De Choudhury said. \u0026ldquo;We can look at social interactions and see whether they might be feeling isolation, which can have a negative impact on mental health. Nuances of language styles, like the way people use articles or pronouns, can say a lot about their psychological state, as well, which has been shown in our and co-investigator (University of Texas Professor) \u003Cstrong\u003EJamie Pennebaker\u003C\/strong\u003E\u0026rsquo;s prior work.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe population they will focus on comprises younger individuals, largely in their teens and early 20s, who have had a first episode of schizophrenia. Most will have only recently been diagnosed and admitted to a specialized treatment facility directed by the collaborators on the project in New York. The goal is to use the information gathered in their digital traces to identify risk markers that signal a potential relapse.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Schizophrenia is a challenging and debilitating illness,\u0026rdquo; De Choudhury said. \u0026ldquo;Even people under treatment have a high chance of relapse with negative outcomes on quality of life, productivity, and functioning. Symptoms often come back, and most mental illnesses are only managed, not cured.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBetter management means that the treatment is timely and highly adaptable to the patient\u0026rsquo;s needs, De Choudhury said. Unfortunately, that\u0026rsquo;s a challenge because, in clinical settings, there is very little knowledge about a patient\u0026rsquo;s day-to-day life. 
Unlike a disease such as cancer, which has an objective screening that can identify its presence and severity, mental illnesses are based on what is reported. These self-reports are often skewed, based on what a patient wants to tell or remembers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In some ways, the treatment paradigm right now is not very evidence based,\u0026rdquo; she said. \u0026ldquo;But to prevent relapse, it\u0026rsquo;s important that we try to be as precise and proactive as possible.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project will span four years and began on April 15.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"This grant offers researchers the opportunity to apply findings of past research to real-world clinical settings."}],"uid":"33939","created_gmt":"2019-05-01 19:41:21","changed_gmt":"2019-05-01 19:41:21","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-05-01T00:00:00-04:00","iso_date":"2019-05-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"587685":{"id":"587685","type":"image","title":"Munmun De Choudhury","body":null,"created":"1487686001","gmt_created":"2017-02-21 14:06:41","changed":"1487783642","gmt_changed":"2017-02-22 17:14:02","alt":"Georgia Tech Assistant Professor Munmun De Choudhury","file":{"fid":"223975","name":"munmun portrait_horz.jpg","image_path":"\/sites\/default\/files\/images\/munmun%20portrait_horz.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/munmun%20portrait_horz.jpg","mime":"image\/jpeg","size":711876,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/munmun%20portrait_horz.jpg?itok=GwpgdV5R"}}},"media_ids":["587685"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/podcasts\/ep-3-social-media-and-mental-health","title":"The 
Interaction Hour podcast: Social Media and Mental Health"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181214","name":"ic-hcc"},{"id":"181215","name":"ic-social-computing"},{"id":"181216","name":"cc-research"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"621151":{"#nid":"621151","#data":{"type":"news","title":"IC\u2019s Caitlyn Seim to Serve as Spring Ph.D. Commencement Speaker","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Ph.D. student \u003Cstrong\u003ECaitlyn Seim\u003C\/strong\u003E will serve as commencement speaker for the Georgia Tech Ph.D. graduation ceremony on May 3.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim, who is advised by IC Professor \u003Cstrong\u003EThad Starner\u003C\/strong\u003E, was chosen by a committee of leaders from across campus, including the Office of the Dean of Students, various faculty, and commencement officials. 
The process included an audition of a speech written by Seim.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHaving recently defended her dissertation for her degree in Human-Centered Computing, Seim said that she is honored by her selection and opportunity to share the stage with Georgia Tech President \u003Cstrong\u003EBud Peterson\u003C\/strong\u003E and Vice Provost for Graduate Education and Faculty Affairs \u003Cstrong\u003EBonnie Ferri\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am so thrilled to represent the graduating class, and I can\u0026rsquo;t wait to share my message about the importance of research,\u0026rdquo; Seim said. \u0026ldquo;I love Georgia Tech so much. After all my time here, I still enjoy it as if it were my first day on campus.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim, whose research in wearable computing and passive haptic rehabilitation has been covered extensively by external media, said that in her speech she hopes to help graduates think about a recent realization that she had.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That is the significant role we have in society\u0026rsquo;s progress,\u0026rdquo; she said. \u0026ldquo;It\u0026rsquo;s about the formation of knowledge and how Ph.D. students are uniquely trained to evaluate fact and expand what society can achieve. My training in the Human-Centered Computing program actually helped me to begin recognizing this by introducing me to the concept of epistemology.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELooking back, Seim will remember Georgia Tech as a unique student body and a beautiful campus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;For me, I have to put special emphasis on the academic community,\u0026rdquo; she said. \u0026ldquo;The faculty made learning a great experience, and as a graduate student I felt like I was really part of a community. 
The student researchers who I mentor continue to impress me and consistently show curiosity, respect, and dedication. It has been a pleasure working with everyone.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/commencement.gatech.edu\/schedule\u0022\u003EPh.D. commencement ceremony\u003C\/a\u003E will take place at 9-10:30 a.m. Friday, May 3, at McCamish Pavilion. Ferri will also speak. No tickets are required for the event.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Seim, who is advised by IC Professor Thad Starner, was chosen by a committee of leaders from across campus, including the Office of the Dean of Students, various faculty, and commencement officials."}],"uid":"33939","created_gmt":"2019-05-01 01:07:58","changed_gmt":"2019-05-01 01:07:58","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-30T00:00:00-04:00","iso_date":"2019-04-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"611755":{"id":"611755","type":"image","title":"Caitlyn Seim - PHL","body":null,"created":"1537470856","gmt_created":"2018-09-20 19:14:16","changed":"1537470856","gmt_changed":"2018-09-20 19:14:16","alt":"Caitlyn Seim showing haptic glove","file":{"fid":"232896","name":"Seim Banner.jpg","image_path":"\/sites\/default\/files\/images\/Seim%20Banner.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Seim%20Banner.jpg","mime":"image\/jpeg","size":170103,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Seim%20Banner.jpg?itok=QblfJAZi"}}},"media_ids":["611755"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[{"id":"181210","name":"ic-ubicomp-and-wearable"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620987":{"#nid":"620987","#data":{"type":"news","title":"Georgia Tech\u0027s Child Study Lab Sees Computer Science as New \u0027Microscope\u0027 for Autism Research","body":[{"value":"\u003Cp\u003EWhat if behavior could be mapped and analyzed in much the same way an MRI provides images of the brain or a microscope an up-close look at cells? Both proved to be paradigm shifts in detecting developmental anomalies or diseases like cancer, and \u003Ca href=\u0022http:\/\/www.gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E research at the intersection of computing and early childhood behavior could do the same for autism.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBuilding upon nearly a decade of research, \u003Ca href=\u0022http:\/\/www.childstudylab.gatech.edu\/\u0022\u003EGeorgia Tech\u0026rsquo;s Child Study Lab\u003C\/a\u003E, which was established in 2010 by a $10 million grant from the \u003Ca href=\u0022https:\/\/www.nsf.gov\/\u0022\u003ENational Science Foundation\u003C\/a\u003E, and collaborators at \u003Ca href=\u0022https:\/\/weill.cornell.edu\/\u0022\u003EWeill Cornell Medical College\u003C\/a\u003E were awarded a $1.7 million grant last year from the \u003Ca href=\u0022https:\/\/www.nih.gov\/\u0022\u003ENational Institutes of Health\u003C\/a\u003E. 
The grant will help researchers collect new data, using the datasets created over the past decade to develop automated tools that better and more efficiently characterize behaviors that are present and important in typical child development but are often considered to be core, early-emerging markers of autism spectrum disorder (ASD) when absent.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E[VIDEO::https:\/\/youtu.be\/jVldx01ENHM::aVideoStyle]\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=jVldx01ENHM\u0022 target=\u0022_blank\u0022\u003E[RELATED:\u0026nbsp;Using Computer Science to Augment Autism Research at Georgia Tech (VIDEO)]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPsychologists have long understood that there were links between early childhood development and the likelihood of typical language and behavior outcomes throughout life. What they weren\u0026rsquo;t able to do, however, was to study childhood behavior at a granular level similar to that of a microscope. Given the importance of early detection to inform proper interventions, the tedium of human coding and analysis poses a significant challenge.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That process is manual and driven by humans specifying what happens in a frame of a video,\u0026rdquo; said \u003Cstrong\u003EJim Rehg\u003C\/strong\u003E, a professor in the \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and the principal investigator on the NIH award. \u0026ldquo;It takes hours upon hours of data collection and analysis.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EComputing could alter that reality, and this work being done at Georgia Tech is a significant reason why.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Given enough video, we can model the details of behavior,\u0026rdquo; Rehg said. 
\u0026ldquo;Deep learning, married with the ability to collect the data, allows us to build out how our algorithms work in much the same way computer science has been applied to genetics and imaging to make those more powerful and scalable.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat has long been the mission of the Child Study Lab, and the latest grant will continue to move the needle in autism research at Georgia Tech and beyond. Unlike many other conditions, autism spectrum disorder can\u0026rsquo;t be found by taking a blood test or viewing images of the brain. Doctors must analyze behavior through developmental screenings and comprehensive diagnostic evaluations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuring screenings, doctors might talk or play with a child to see how they learn, speak, or behave. Do they exhibit typical communicative skills like joint attention, in which two people use gestures or gaze to share their attention with respect to other objects or events? The skills a child demonstrates in these areas are known to be strong indicators of how they will develop throughout childhood and adolescence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe challenge here is that, given how important it is to detect ASD at an early age and thus tailor interventions and education to meet the child\u0026rsquo;s specific needs, the manual labor that comes with these screenings and evaluations makes ASD detection far less efficient than detection of other developmental challenges. 
Autism spectrum disorder affects one in 59 children in the United States alone, and not every child who is screened ultimately receives a diagnosis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe need for objective, automated measurements of behavior is clear, and Rehg \u0026ndash; along with IC Research Scientist \u003Cstrong\u003EAgata Rozga\u003C\/strong\u003E, Child Study Lab coordinator \u003Cstrong\u003EAudrey Southerland\u003C\/strong\u003E, collaborators at Weill Cornell, and more \u0026ndash; is taking steps in that direction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;For us, the goal is to use these computational capabilities to extract the important key moments and information to give clinicians or psychologists the ability to more easily examine a child\u0026rsquo;s behavior,\u0026rdquo; Southerland said. \u0026ldquo;If we can provide additional details through technology about the quality or coordination of important social and communicative behaviors, we can hopefully provide behavioral experts with the capability of exploring these behaviors in much greater detail than currently possible.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe first grant from the NSF funded the creation of the Child Study Lab, which has over the years developed an extensive dataset of behaviors in typically developing children. At the time, it was the first large-scale investment in technology that would assist in modeling and sensing behaviors that underlie developmental conditions like autism spectrum disorder. 
Additional grants have assisted in studies that use computer vision to measure and detect gaze shifts or wearable technology and machine learning to detect and differentiate between types of problem behaviors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe NIH grant brings all the past research together to compare what the sensory data says in relation to human coding, and how that might ultimately serve to develop reliable, objective, automated tools for measuring early, nonverbal communication behaviors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The important thing is for us to make sure that whatever we produce is good enough so that we can actually push it out into the field to people who are specializing in this area,\u0026rdquo; Southerland said. \u0026ldquo;We never want to get rid of the human expert in this field, but we want to build technology they can use to augment and streamline their analysis of behavior.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn addition to the National Institutes of Health and the National Science Foundation, the Child Study Lab has also received funding from the \u003Ca href=\u0022https:\/\/www.simonsfoundation.org\/\u0022\u003ESimons Foundation\u003C\/a\u003E and has partnered with external entities like the \u003Ca href=\u0022https:\/\/www.marcus.org\/\u0022\u003EMarcus Autism Center\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESoutherland and the Child Study Lab are actively seeking families with young children to participate in this study to further develop their automated tools. 
Anyone interested in playing a part in this exciting work can visit the lab\u0026rsquo;s website to learn more.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech\u2019s Child Study Lab, which was established in 2010 by a $10 million grant from the National Science Foundation, and collaborators at Weill Cornell Medical College were awarded last year with a $1.7 million grant from the NIH."}],"uid":"33939","created_gmt":"2019-04-28 23:36:39","changed_gmt":"2019-04-28 23:36:39","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-28T00:00:00-04:00","iso_date":"2019-04-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620985":{"id":"620985","type":"image","title":"Autism and Computing Research at Georgia Tech","body":null,"created":"1556487692","gmt_created":"2019-04-28 21:41:32","changed":"1556487692","gmt_changed":"2019-04-28 21:41:32","alt":"Creating the Next in Autism and Computing Research at Georgia Tech\u0027s Child Study Lab","file":{"fid":"236513","name":"Autism and Computing rotator EDIT2.jpg","image_path":"\/sites\/default\/files\/images\/Autism%20and%20Computing%20rotator%20EDIT2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Autism%20and%20Computing%20rotator%20EDIT2.jpg","mime":"image\/jpeg","size":70199,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Autism%20and%20Computing%20rotator%20EDIT2.jpg?itok=wOe24NH3"}}},"media_ids":["620985"],"related_links":[{"url":"http:\/\/www.childstudylab.gatech.edu\/","title":"Child Study Lab at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620928":{"#nid":"620928","#data":{"type":"news","title":"IC\u0027s Miranda Parker Uncovering Factors that Lead to CS Programs in Georgia","body":[{"value":"\u003Ch3\u003ELike the majority of research in IC, it comes down to the people\u003C\/h3\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMiranda Parker\u003C\/strong\u003E was early on in her time as a Ph.D. student in the \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) when she began her first quantitative study. She wanted to see whether she could model the variables that influence whether a school would or would not adopt computer science (CS) as a class for its students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrior to the study, the hypothesis was that variables like median income or enrollment numbers or the population of students who qualify for free and reduced-cost lunch programs could be an indicator of whether or not computer science was implemented. Lower income levels, for example, might correlate to schools that just didn\u0026rsquo;t have the resources to deploy such programs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESomewhat to Parker\u0026rsquo;s surprise, the short answer to that question was \u0026ndash; no. 
No, a higher median income didn\u0026rsquo;t mean more computer science; no, schools with lower free and reduced lunch numbers didn\u0026rsquo;t teach computer science at a higher rate; no, higher enrollment didn\u0026rsquo;t necessarily mean more young students yearning to learn how to code.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOn the surface, that first study might have felt like a failure. If the goal was to prove that income disparity equated to a disparity in who was gaining exposure to a key part of their education, then it may be fair to describe it as such. However, Parker looks back on that study as a key component of what has guided her research at \u003Ca href=\u0022http:\/\/www.gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E ever since.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt wasn\u0026rsquo;t a failure, she said. It just helped open her eyes to some realities she may not have noticed otherwise.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Part of me wanted my first study to fail because part of me didn\u0026rsquo;t want to be able to say, \u0026lsquo;Oh, yes, these three things mean more computer science,\u0026rsquo;\u0026rdquo; she said. \u0026ldquo;Sure, it\u0026rsquo;s snazzy. It\u0026rsquo;s easy to put on a Facebook post. But it\u0026rsquo;s so much more complicated than that. And I\u0026rsquo;m glad that it\u0026rsquo;s more complicated than that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOver the years, Parker, who studies \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/academics\/human-centered-computing-phd-program\u0022\u003Ehuman-centered computing\u003C\/a\u003E with a focus on computer science education, has gained a deeper understanding of what might influence a public high school in Georgia to offer computer science education. None of the above items are among them. 
What she has found to show some correlation is a bit more complex.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If a school had computer science in 2016, the correlation was that it also had computer science in 2015, 2014, and 2013,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOkay, but how did it get started in 2013? That\u0026rsquo;s part of the question her research is trying to uncover.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That\u0026rsquo;s an endless cycle,\u0026rdquo; she explained. \u0026ldquo;You had it before, now you still have it. But how did you get it to begin with?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne thing she\u0026rsquo;s learned, which can be said for a majority of research in IC, is that it comes down to the people. Who is involved with a school and what connections do they have to a particular subject? If a connection has worked in CS in the past or may be passionate about adding that to the school, the results indicate the school is much more likely to offer that subject.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMakes sense, right?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If a school has someone who can teach computer science and there are parents saying we need to teach computer science, then whether it\u0026rsquo;s rural or urban or high or low income, it doesn\u0026rsquo;t matter,\u0026rdquo; Parker said. \u0026ldquo;They will have computer science. But if there\u0026rsquo;s no one there to push them, it\u0026rsquo;s much less likely.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s not just a person, either. 
Organizations like Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/constellations.gatech.edu\/\u0022\u003EConstellations Center for Equity in Computing\u003C\/a\u003E and the \u003Ca href=\u0022https:\/\/www.ceismc.gatech.edu\/\u0022\u003ECenter for Education Integrating Science, Math, and Computing\u003C\/a\u003E are also championing K-12 CS educational opportunities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut, Parker said, being successful is a bit more complicated than just serving CS up to the masses in communities that are unfamiliar with these and other organizations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Computer science isn\u0026rsquo;t the end all, be all,\u0026rdquo; she said. \u0026ldquo;If a school is in a more agricultural-based county, that may benefit the school more than a heavy computer science program would. It\u0026rsquo;s about finding how computer science can most benefit students in different ways for different areas.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe most encouraging thing about that research, Parker said, was that the failure of her original study showed her one important piece of information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;You don\u0026rsquo;t need high income to have computer science,\u0026rdquo; she said. \u0026ldquo;It really can be for everyone. That\u0026rsquo;s an important piece of information to know.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EParker is aiming to finish her Ph.D. work in the fall and will decide between pursuing a faculty position, which she is leaning toward now, or other opportunities that may present themselves down the road. 
Former Georgia Tech Professor \u003Cstrong\u003EMark Guzdial\u003C\/strong\u003E, now a faculty member at the University of Michigan, is Parker\u0026rsquo;s advisor.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Miranda Parker is investigating the qualities in high schools that lead to having a CS program in Georgia. One thing she\u2019s learned, which can be said for a majority of research in IC, is that it comes down to the people."}],"uid":"33939","created_gmt":"2019-04-25 22:05:16","changed_gmt":"2019-04-25 22:05:16","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-25T00:00:00-04:00","iso_date":"2019-04-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620927":{"id":"620927","type":"image","title":"Miranda Parker","body":null,"created":"1556228807","gmt_created":"2019-04-25 21:46:47","changed":"1556228807","gmt_changed":"2019-04-25 21:46:47","alt":"Miranda Parker stands by the street","file":{"fid":"236482","name":"Parker rotator.jpg","image_path":"\/sites\/default\/files\/images\/Parker%20rotator.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Parker%20rotator.jpg","mime":"image\/jpeg","size":117463,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Parker%20rotator.jpg?itok=RXiQx4_O"}}},"media_ids":["620927"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/academics\/human-centered-computing-phd-program","title":"Human-Centered Computing at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620364":{"#nid":"620364","#data":{"type":"news","title":"People May Be Able to Find Images on a Computer Based Solely on Their Eye Movements","body":[{"value":"\u003Cp\u003EWhen humans try to recall images from memory, they involuntarily move their eyes in a pattern that is similar to when they are actually looking at the image.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJames Hays\u003C\/strong\u003E, an associate professor in the \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech\u003C\/a\u003E, and researchers from TU Berlin and Universit\u0026auml;t Regensburg, are looking at how these patterns, known as gaze patterns, can be used to retrieve images from memory so that it\u0026rsquo;s easier to find that same image \u0026ndash; like an adorable dog photo \u0026ndash; stashed away in the digital cloud.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrough a controlled lab experiment and a real-world scenario, Hays and his co-authors have developed a matching technique using machine learning to help computers understand what image someone is thinking of, and accurately retrieve it from a computer folder \u0026ndash; based solely on eye movements.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing eye-tracking software in the lab, the researchers recorded the eye movements of 30 
participants as they looked at 100 different indoor and outdoor images, ranging from picturesque lighthouse scenes to cozy living rooms. Participants were then asked to look at a blank screen and recall any of the 100 images they just saw.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers also staged a more realistic scenario by putting together a mock museum with 20 posters of various sizes and orientations spread throughout the \u0026ldquo;museum.\u0026rdquo; They outfitted each participant with a headset featuring a \u003Ca href=\u0022https:\/\/pupil-labs.com\/pupil\/\u0022\u003EPupil mobile eye tracker\u003C\/a\u003E with two eye cameras and one front-facing camera. Participants were asked to walk around the museum and look at all of the images, taking however long they liked, and in whatever order they preferred. They took anywhere from a few seconds to over a minute looking at each poster.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter looking at all of the images, participants were asked to look at a blank whiteboard and recall as many of their favorite images as possible, in any order. Participants remembered between 5 and 10 of the total 20 poster images.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe results from both experiments indicated that the gaze patterns of people looking at a photograph contain a unique signature that computers can use to accurately determine the corresponding photo.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing the data collected from the experiments, researchers created spatial histograms, or heat maps, that could be analyzed by their new machine learning technique to determine which photo someone was thinking about. Hays and Co. 
also used a \u003Ca href=\u0022https:\/\/en.wikipedia.org\/wiki\/Convolutional_neural_network\u0022\u003EConvolutional Neural Network (CNN)\u003C\/a\u003E to look at the 2,700 collected heat maps.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The ability to retrieve images using eye movements would be beneficial to those who are disabled or unable to search for images using their hands or voice,\u0026rdquo; said Hays. \u0026ldquo;Also, wearable technology is a huge industry right now, and we believe that tracking motion with the eyes would be a natural by-product of that boom.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn Hays\u0026rsquo; previous research, \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1801.02753\u0022\u003ESketchyGAN\u003C\/a\u003E, people are able to draw (rather than type) what they are looking for to get image search results. But, if images are mislabeled or people can\u0026rsquo;t draw that well, search results are not useful. Other attempts at image retrieval have included various types of brain scans, but those are often too expensive and impractical for everyday use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile this new research may prove helpful to people, it does not come without limitations, researchers note. The scalability of the model in part depends on image content and how many images are in the database. The more images the database holds, the more likely it is that several different photos will produce extremely similar gaze patterns.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne proposed workaround to this potential issue is asking people to make more extensive eye movements than they normally would. At the moment, participants are not asked to do anything more intentional or out of the norm when looking at the images. 
Researchers think that putting a small amount of effort back on the user would help the computer find the correct image.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother foreseen problem is working with people\u0026rsquo;s memories. As people\u0026rsquo;s memories grow weaker with time or age, it will be harder to get a crisp gaze pattern and accurately return the right image. The team plans to explore these potential issues in the future by looking into memory decay and how it affects image retrieval from long-term memory.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe authors are also looking into combining gaze tracking with a speech interface, as that could be a rich resource for information. No matter which direction they go, the team believes that eye-movement image retrieval is not only possible but also a significant next step toward improving human-computer interaction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne might even say that before long, people will be able to find that favorite dog photo in the blink of an eye.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFurther details on this approach to image retrieval can be found in the paper, \u003Ca href=\u0022http:\/\/cybertron.cg.tu-berlin.de\/xiwang\/files\/mi.pdf\u0022\u003E\u0026ldquo;The Mental Image Revealed by Gaze Tracking,\u0026rdquo;\u003C\/a\u003E which has been accepted at the ACM Conference on Human Factors in Computing Systems (CHI 2019), May 4-9.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"What if we could find images on our computer just by tracking our eye movements? 
ML@GT assistant professor James Hays explores this idea in new research that will be presented next month at CHI 2019."}],"uid":"34773","created_gmt":"2019-04-12 14:42:21","changed_gmt":"2019-04-12 20:51:03","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-12T00:00:00-04:00","iso_date":"2019-04-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620361":{"id":"620361","type":"image","title":"Machine Learning at Georgia Tech and School of Interactive Computing associate professor James Hays collaborated with researchers from TU Berlin and Universit\u00e4t Regensburg to create new eye-tracking software.","body":null,"created":"1555079754","gmt_created":"2019-04-12 14:35:54","changed":"1555102299","gmt_changed":"2019-04-12 20:51:39","alt":"","file":{"fid":"236216","name":"Screen Shot 2019-04-12 at 10.33.09 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png","mime":"image\/png","size":951664,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png?itok=aI5T1_BW"}},"620363":{"id":"620363","type":"image","title":"In one experiment, participants were outfitted with a Pupil mobile eye tracker and asked to observe art in a fake museum.","body":null,"created":"1555079859","gmt_created":"2019-04-12 14:37:39","changed":"1555079859","gmt_changed":"2019-04-12 14:37:39","alt":"","file":{"fid":"236217","name":"Screen Shot 2019-04-12 at 10.33.34 
AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png","mime":"image\/png","size":1804726,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png?itok=FFmiuM7e"}}},"media_ids":["620361","620363"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"135","name":"Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620328":{"#nid":"620328","#data":{"type":"news","title":"IC Student Brianna Tomlinson Earns Campus Life Scholarship","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Ph.D. student \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/content\/brianna-tomlinson\u0022\u003EBrianna Tomlinson\u003C\/a\u003E\u003C\/strong\u003E was awarded the \u003Ca href=\u0022https:\/\/campusservices.gatech.edu\/scholarships\u0022\u003ECampus Life Scholarship\u003C\/a\u003E in recognition of her leadership, scholarship, and service to Georgia Tech. 
The scholarship provides $5,000 from Campus Services and offers a lunch to honor recipients on April 18.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETomlinson is involved in the \u003Ca href=\u0022http:\/\/women.cc.gatech.edu\/grad.html\u0022\u003EGraduate Women@CC\u003C\/a\u003E group, helping to organize events. She has been involved with the group in some capacity since she came to Georgia Tech six years ago. The group is a collection of female graduate students who strive for the professional success of its members. They meet once each month for coffee, where they discuss current projects they are working on, and also help to organize various workshops throughout the year.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s great to hear that people think my impact on GradWomen has been a good one, and the work to keep it going has been useful for the greater campus community,\u0026rdquo; Tomlinson said. \u0026ldquo;I\u0026rsquo;m hoping that it will actually help others learn about GradWomen and encourage them to get involved.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETomlinson is working toward her Ph.D. in human-centered computing. Her current work is on evaluating effective methods for studying engagement, learning, and transfer for multimodal interactive systems. 
This includes collaboration on a grant to develop and evaluate accessible auditory displays for PhET Interactive Simulations, a non-profit open educational resource project at the University of Colorado that creates and hosts explorable explanations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe is advised by Professor \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/bruce-walker\u0022\u003EBruce Walker\u003C\/a\u003E\u003C\/strong\u003E, who is jointly appointed in the School of Interactive Computing and the School of Psychology.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The scholarship provides $5,000 from Campus Services and offers a lunch to honor recipients on April 18."}],"uid":"33939","created_gmt":"2019-04-11 16:57:11","changed_gmt":"2019-04-11 16:57:11","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-11T00:00:00-04:00","iso_date":"2019-04-11T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620327":{"id":"620327","type":"image","title":"Brianna Tomlinson","body":null,"created":"1555001765","gmt_created":"2019-04-11 16:56:05","changed":"1555001765","gmt_changed":"2019-04-11 16:56:05","alt":"Brianna Tomlinson","file":{"fid":"236201","name":"brianna_tomlinson_headshot.jpg","image_path":"\/sites\/default\/files\/images\/brianna_tomlinson_headshot.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/brianna_tomlinson_headshot.jpg","mime":"image\/jpeg","size":119012,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/brianna_tomlinson_headshot.jpg?itok=DVAnKN9p"}}},"media_ids":["620327"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620251":{"#nid":"620251","#data":{"type":"news","title":"Georgia Tech\u2019s Newest AI System Explains Its Decisions to People in Real-Time to Understand User Preferences","body":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers, in collaboration with Cornell and University of Kentucky, have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. The work is designed to give humans engaging with AI agents or robots confidence that the agent is performing the task correctly and can explain a mistake or errant behavior.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe agent also uses everyday language that non-experts can understand. The explanations, or \u0026ldquo;rationales\u0026rdquo; as the researchers call them, are designed to be relatable and inspire trust in those who might be in the workplace with AI machines or interact with them in social situations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If the power of AI is to be democratized, it needs to be accessible to anyone regardless of their technical abilities,\u0026rdquo; said \u003Cstrong\u003EUpol Ehsan\u003C\/strong\u003E, Ph.D. 
student in the School of Interactive Computing at Georgia Tech and lead researcher.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As AI pervades all aspects of our lives, there is a distinct need for human-centered AI design that makes black-boxed AI systems explainable to everyday users. Our work takes a formative step toward understanding the role of language-based explanations and how humans perceive them.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers developed a participant study to determine if their AI agent could offer rationales that mimicked human responses. Spectators watched the AI agent play the videogame Frogger and then ranked three on-screen rationales in order of how well each described the AI\u0026rsquo;s game move.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOf the three anonymized justifications for each move \u0026ndash; a human-generated response, the AI-agent response, and a randomly generated response \u0026ndash; the participants preferred the human-generated rationales first, but the AI-generated responses were a close second.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFrogger offered the researchers the chance to train an AI in a \u0026ldquo;sequential decision-making environment,\u0026rdquo; which is a significant research challenge because decisions that the agent has already made influence future decisions. Therefore, explaining the chain of reasoning to experts is difficult, and even more so when communicating with non-experts, according to researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe human spectators understood the goal of Frogger in getting the frog safely home without being hit by moving vehicles or drowned in the river. 
The simple game mechanics of moving up, down, left, or right allowed the participants to see what the AI was doing, and to evaluate if the rationales on the screen clearly justified the move.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe spectators judged the rationales based on:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EConfidence\u003C\/strong\u003E \u0026ndash; the person is confident in the AI to perform its task\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EHuman-likeness\u003C\/strong\u003E \u0026ndash; looks like it was made by a human\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EAdequate justification\u003C\/strong\u003E \u0026ndash; adequately justifies the action taken\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EUnderstandability\u003C\/strong\u003E \u0026ndash; helps the person understand the AI\u0026rsquo;s behavior\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EAI-generated rationales that were ranked higher by participants were those that showed recognition of environmental conditions and adaptability, as well as those that communicated awareness of upcoming dangers and planned for them. Redundant information that just stated the obvious or mischaracterized the environment was found to have a negative impact.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This project is more about understanding human perceptions and preferences of these AI systems than it is about building new technologies,\u0026rdquo; said Ehsan. \u0026ldquo;At the heart of explainability is sensemaking.
We are trying to understand that human factor.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA second related study validated the researchers\u0026rsquo; decision to design their AI agent to be able to offer one of two distinct types of rationales:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EConcise, \u0026ldquo;focused\u0026rdquo; rationales \u003C\/strong\u003Eor\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EHolistic, \u0026ldquo;complete picture\u0026rdquo; rationales\u003C\/strong\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EIn this second study, participants were only offered AI-generated rationales\u0026nbsp;after watching the AI play Frogger. They were\u0026nbsp;asked to\u0026nbsp;select\u0026nbsp;the answer that\u0026nbsp;they preferred in a scenario\u0026nbsp;where an AI made a mistake or behaved unexpectedly. They did not know the rationales were grouped into the two categories.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy a 3-to-1 margin, participants favored answers that were classified in the \u0026ldquo;complete picture\u0026rdquo; category. Responses showed that people appreciated the AI thinking about future steps rather than just what was in the moment, which might make them more prone to making another mistake. 
People also wanted to know more so that they might directly help the AI fix the errant behavior.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The situated understanding of the perceptions and preferences of people working with AI machines gives us a powerful set of actionable insights that can help us design better human-centered, rationale-generating, autonomous agents,\u0026rdquo; said \u003Cstrong\u003EMark Riedl\u003C\/strong\u003E, professor of Interactive Computing and lead faculty member on the project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA possible future direction for the research is to apply the findings to autonomous agents of various types, such as companion agents, and how they might respond based on the task at hand. Researchers will also look at how agents might respond in different scenarios, such as during an emergency response or when aiding teachers in the classroom.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research was \u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=9L4CZ5n7rQY\u0022\u003Epresented in March\u003C\/a\u003E\u0026nbsp;at the Association for Computing Machinery\u0026rsquo;s Intelligent User Interfaces 2019 Conference. The paper is titled \u003Cem\u003EAutomated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions\u003C\/em\u003E.
Ehsan will present a position paper highlighting the design and evaluation challenges of human-centered Explainable AI systems at the upcoming \u003Cem\u003EEmerging Perspectives in Human-Centered Machine Learning\u003C\/em\u003E workshop at the ACM CHI 2019 conference, May 4-9, in Glasgow, Scotland.\u003C\/p\u003E\r\n\r\n\u003Cdiv\u003E\u0026nbsp;\u003C\/div\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers, in collaboration with Cornell and University of Kentucky, have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. The work is designed to give humans engaging with AI agents or robots confidence that the agent is performing the task correctly and can explain a mistake or errant behavior.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Institute of Technology researchers have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions."}],"uid":"27592","created_gmt":"2019-04-09 19:42:53","changed_gmt":"2019-04-09 20:06:57","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-09T00:00:00-04:00","iso_date":"2019-04-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620255":{"id":"620255","type":"image","title":"Explainable AI for Frogger","body":null,"created":"1554840392","gmt_created":"2019-04-09 20:06:32","changed":"1554840392","gmt_changed":"2019-04-09 20:06:32","alt":"AI study with Frogger","file":{"fid":"236161","name":"Explainable 
AI.png","image_path":"\/sites\/default\/files\/images\/Explainable%20AI.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Explainable%20AI.png","mime":"image\/png","size":48748,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Explainable%20AI.png?itok=wGcqqHq9"}}},"media_ids":["620255"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGVU Center, College of Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003E678.231.0787\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620129":{"#nid":"620129","#data":{"type":"news","title":"HackGT Hopes to be a \u2018Catalyst\u2019 for Underserved Atlanta Students","body":[{"value":"\u003Cp\u003EKnown for its \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/613449\/students-across-country-participate-hackgt-5\u0022\u003Ewildly successful hackathons\u003C\/a\u003E for college students, \u003Ca href=\u0022https:\/\/hack.gt\/\u0022\u003EHackGT\u003C\/a\u003E is bringing some of that magic to high school students from across Atlanta with the third annual Catalyst event. 
Set for April 13 on the Georgia Tech campus, \u003Ca href=\u0022https:\/\/catalyst.hack.gt\/#home\u0022\u003ECatalyst\u003C\/a\u003E is a one-day workshop, blended with traditional hackathon challenges.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe free event will bring together more than 400 high school students from 60 schools across the metro Atlanta area. Catalyst aims to expose underserved students to various branches of science, technology, engineering, art and math (STEAM) education and ignite a spark to pursue such interests in the future.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;HackGT 5, BuildGT, Horizons, and many other hackathon-related events are built for college students. Given the educational disparities that exist within certain parts of Atlanta, HackGT understands the importance of reaching out to communities beyond Georgia Tech and other collegiate environments,\u0026rdquo; said \u003Cstrong\u003EJordan Madison\u003C\/strong\u003E, computer science (CS) major and HackGT\u0026rsquo;s director of communications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2017, less than one percent of students in Georgia public schools took the Advanced Placement Computer Science exam. Only two schools in the Atlanta Public School system offered the course. This lack of access motivated the organizers to keep the event completely free, from registration to swag, allowing students from any background the opportunity to participate in Catalyst.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESponsored by Amazon and Facebook, with in-kind donations from Disney and Pixar Animation Studios, Catalyst offers four tracks for participants to choose from: software, hardware, gaming, and design.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECatalyst welcomes participants with no prior STEAM experience, and each track offers a workshop to help participants develop the basic foundations and skills that are needed to complete the track\u0026rsquo;s tasks.
Participants will create technology pieces in the workshop that they can take home.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWithin each track, students are divided into small groups and mentored by college students. The mentors provide hands-on support to help students better grasp concepts. The students will also hear from industry professionals about pursuing an education or career in STEAM during panel discussions scheduled for the event.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Underserved students\u0026rsquo; success in computer science or other STEAM-related fields is mainly linked to their lack of access to resources and opportunities. They have plenty of talent, but no idea about the options that are waiting for them. Events like Catalyst are crucial for exposing more kids to STEAM who might not otherwise have the opportunity to do so,\u0026rdquo; said \u003Cstrong\u003EPK Graff\u003C\/strong\u003E, a fellow at the \u003Ca href=\u0022http:\/\/constellations.gatech.edu\/\u0022\u003EConstellations Center for Equity in Computing at Georgia Tech\u003C\/a\u003E who teaches computer science in Atlanta Public Schools and serves as an advisory member for Catalyst.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERegistration closes April 5 at \u003Ca href=\u0022https:\/\/catalyst.hack.gt\/#registration\u0022\u003Ehttps:\/\/catalyst.hack.gt\/#registration\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Set for April 13 on the Georgia Tech campus, Catalyst is a one-day workshop, blended with traditional hackathon challenges. 
"}],"uid":"34773","created_gmt":"2019-04-05 19:00:30","changed_gmt":"2019-04-05 19:00:30","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-05T00:00:00-04:00","iso_date":"2019-04-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620128":{"id":"620128","type":"image","title":"Set for April 13 on the Georgia Tech campus, Catalyst is a one-day workshop, blended with traditional hackathon challenges. ","body":null,"created":"1554490714","gmt_created":"2019-04-05 18:58:34","changed":"1554490714","gmt_changed":"2019-04-05 18:58:34","alt":"","file":{"fid":"236109","name":"Screen Shot 2019-04-05 at 2.57.57 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-04-05%20at%202.57.57%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-04-05%20at%202.57.57%20PM.png","mime":"image\/png","size":3153577,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-04-05%20at%202.57.57%20PM.png?itok=RUWjEysy"}}},"media_ids":["620128"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"606703","name":"Constellations Center"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620110":{"#nid":"620110","#data":{"type":"news","title":"Six Members of GT Computing Awarded Prestigious Fellowships","body":[{"value":"\u003Cp\u003EEach year, 
Georgia Tech\u0026rsquo;s College of Computing is home to a number of students and faculty who are recognized by the computing community with fellowships from industry across the field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis year is no different as six GT Computing individuals have been awarded fellowships with four different companies, including J.P. Morgan, IBM, Snap, and Facebook. Only those who accepted their awards are listed below.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJ.P. Morgan Chase \u0026amp; Co.\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.jpmorgan.com\/global\/technology\/ai\/awards\u0022\u003EJ.P. Morgan Chase \u0026amp; Co.\u003C\/a\u003E awarded \u003Cstrong\u003ECharles David Byrd\u003C\/strong\u003E (Research Scientist and Ph.D. student advised by Professor \u003Cstrong\u003ETucker Balch\u003C\/strong\u003E) and Assistant Professor \u003Cstrong\u003EXu Chu\u003C\/strong\u003E for efforts in artificial intelligence research. These are the company\u0026rsquo;s first AI Research Awards, which are aimed at studying the use of AI and machine learning in areas including investment advice, risk management, digital assistants, and trading behavior. Only 47 fellowships were awarded by the company.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EByrd\u0026rsquo;s work with Balch is focused on machine learning for financial applications, investigating mutual fund portfolio inference, intraday equity market forecasting, stock market simulation, and machine learning approaches to the evaluation of market efficiency. Byrd previously received the 2018 Graduate Student Instructor of the Year Award in the School of Interactive Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChu\u0026rsquo;s research interests revolve around two themes: using data management technologies to make machine learning more usable and using machine learning to tackle hard data management problems like data integration.
Chu also earned the Microsoft Research Ph.D. Fellowship in 2015.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EIBM\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. student \u003Cstrong\u003EStacey Truex\u003C\/strong\u003E of the School of Computer Science was named a \u003Ca href=\u0022https:\/\/www.research.ibm.com\/university\/awards\/2019_phd_fellowship_awards.shtml\u0022\u003E2019 IBM Ph.D. Fellow\u003C\/a\u003E. The Fellowship, which dates to the 1950s, recognizes and supports outstanding graduate students who are focused on solving problems that are fundamental to innovation. This includes pioneering work in areas like cognitive computing and augmented intelligence, quantum computing, blockchain, data-centric systems, advanced analytics, security, radical cloud innovation, and more. This highly competitive award was given to only 16 Ph.D. students worldwide.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETruex (advised by Professor \u003Cstrong\u003ELing Liu\u003C\/strong\u003E) focuses on research from two complementary perspectives: 1) privacy, security, and trust in machine learning models and algorithmic decision making, and 2) secure, privacy-preserving artificial intelligence systems, services, and applications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESnap, Inc.\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/snapresearchfs.splashthat.com\/\u0022\u003ESnap, Inc., recognized\u003C\/a\u003E Ph.D. student \u003Cstrong\u003EHarsh Agrawal\u003C\/strong\u003E of the School of Interactive Computing with the 2019 Snap Research Fellowship and Scholarship. This fellowship recognizes students carrying out research in areas of computer science relevant to the company, including computer graphics, computer vision, machine learning, data mining, computational imaging, human-computer interaction, and other related fields. 
Each awardee will receive a $10,000 award and an offer for a full-time paid internship with the company.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAgrawal (advised by Assistant Professor \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E) does research at the intersection of computer vision and natural language processing. Prior to joining Georgia Tech, he spent time as a research engineer at Snap Research, where he was responsible for building large-scale infrastructure for visual recognition and search and for developing algorithms for low-shot instance detection.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFacebook\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/research.fb.com\/announcing-the-2019-facebook-fellows-and-emerging-scholars\/\u0022\u003EFacebook Research announced the selection of 21 Fellows and seven Emerging Scholars\u003C\/a\u003E this year out of more than 900 submitted applications from Ph.D. students all over the world. Among the awardees were \u003Cstrong\u003EAbhishek Das\u003C\/strong\u003E with the Facebook Fellowship and \u003Cstrong\u003EVanessa Oguamanam\u003C\/strong\u003E with the Emerging Scholar Award. The Facebook Fellowship program, now in its eighth year, is designed to encourage and support doctoral students engaged in innovative research in computer science and engineering.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDas (advised by Dhruv Batra) does research in deep learning and its applications in building agents that can see, think, talk, and act. His research has been supported by fellowships from Facebook, Adobe, and Snap, Inc., over the years. Oguamanam, who is in the School of Interactive Computing, pursues research in educational technology, human-computer interaction for development, diversity in STEM, and entrepreneurship. 
She is co-advised by Associate Professor \u003Cstrong\u003EBetsy DiSalvo\u003C\/strong\u003E and Assistant Professor \u003Cstrong\u003ENeha Kumar\u003C\/strong\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"J.P. Morgan, IBM, Snap, and Facebook awarded six College of Computing faculty and students."}],"uid":"33939","created_gmt":"2019-04-04 22:23:48","changed_gmt":"2019-04-04 22:23:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-04T00:00:00-04:00","iso_date":"2019-04-04T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620109":{"id":"620109","type":"image","title":"2019 College of Computing Fellowships","body":null,"created":"1554416151","gmt_created":"2019-04-04 22:15:51","changed":"1554416151","gmt_changed":"2019-04-04 22:15:51","alt":"Harsh Agrawal, Xu Chu, Abhishek Das, Vanessa Oguamanam, Charles David Byrd, and Stacey Truex","file":{"fid":"236101","name":"CoC Fellowships.png","image_path":"\/sites\/default\/files\/images\/CoC%20Fellowships.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/CoC%20Fellowships.png","mime":"image\/png","size":852597,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/CoC%20Fellowships.png?itok=TjGe8z44"}}},"media_ids":["620109"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"145171","name":"Cybersecurity"},{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"619749":{"#nid":"619749","#data":{"type":"news","title":"Coda, Georgia Tech\u2019s newest and largest home in Tech Square, was envisioned in a digital world years before it became a part of Midtown\u2019s skyline","body":[{"value":"\u003Cp\u003EGeorgia Tech\u0026rsquo;s vision for Tech Square\u0026rsquo;s newest structure, the \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/codatechsquare.com\/\u0022\u003ECoda\u003C\/a\u003E\u003C\/strong\u003E building, was only an idea in 2015 when initial development talks began. The first tenants started moving in this month after more than two years of construction and much anticipation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut researchers in the Georgia Tech IMAGINE Lab didn\u0026rsquo;t have to wait for brick and steel to start being laid or watch a \u0026ldquo;construction cam\u0026rdquo; on a website to envision the possibilities for the new building. 
They were able to use their expertise in digital imaging, 3D modeling, and augmented reality technologies to create Tech Square in a digital model that included Coda in its earliest concept.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2015, the IMAGINE Lab, part of the Center for Spatial Planning Analytics and Visualization at Georgia Tech, was tasked by stakeholders at the institute to create a pilot project for a quick visual tool for planning the future Coda building.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The main goal of the digital application was to quickly visualize a few possible options with building concepts that included 20, 30 and 40 stories, and allow people to interact with the models and see how the cityscape in midtown would be altered,\u0026rdquo; said \u003Cstrong\u003EMiro Malesevic\u003C\/strong\u003E, digital designer at the IMAGINE Lab.\u003C\/p\u003E\r\n\r\n\u003Cblockquote\u003E\r\n\u003Cp\u003E\u003Cstrong\u003EIn essence, the researchers gave decision makers a virtual time machine to the future that brought the building to life and showed how it might be situated in Tech Square and impact the area.\u003C\/strong\u003E\u003C\/p\u003E\r\n\u003C\/blockquote\u003E\r\n\r\n\u003Cp\u003EThe visualization tool came in the form of an augmented reality app on mobile devices that allowed users to point the screens at a 2D physical map of Tech Square and watch a 3D model of the space come to life on the screen. Users could tap the screen to start with a 20-story building and tap twice more to end up with a structure twice the height (Coda eventually ended up with 21 levels).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsers could also understand how the length of shadows cast by the building or the structure itself might occlude views at the street level or other buildings. 
The digital AR application even provided a glimpse of the possibility for traffic simulations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Use of the 3D AR application has an advantage over traditional 2D blueprints as it provides an individual user with a 3D perspective of the design, interaction with the environment, and the ability to use simulations to help in decision-making,\u0026rdquo; said Malesevic, who worked on the project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe powerful tool was built within a week, thanks to the IMAGINE Lab\u0026rsquo;s 3D modeling library, compiled over a 20-year period.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOver the years the IMAGINE Lab has produced numerous architectural visualizations for Georgia Tech, non-profits, and local private organizations supporting economic development efforts at the city and state level.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe third phase of Tech Square was announced in September. It includes preliminary plans for a two-tower complex at the northwest corner of West Peachtree and Fifth streets and possibly a retail plaza as well as an underground parking deck.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe design team in the IMAGINE Lab is already building this next version of Tech Square inside their digital world. 
The rest of us will have to wait and see how it turns out sometime in 2022 or later.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EStory\u003C\/strong\u003E: Joshua Preston\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EVideo\u003C\/strong\u003E: Noah Posner\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EVideo Editing\u003C\/strong\u003E:\u0026nbsp;Terence Rushin\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EIn 2015, the IMAGINE Lab, part of the Center for Spatial Planning Analytics and Visualization at Georgia Tech, was tasked by stakeholders at the institute to create a pilot project for a quick visual tool for planning the future Coda building.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"In 2015, the IMAGINE Lab, part of the Center for Spatial Planning Analytics and Visualization at Georgia Tech, was tasked by stakeholders at the institute to create a pilot project for a quick visual tool for planning the future Coda building."}],"uid":"27592","created_gmt":"2019-03-27 17:54:00","changed_gmt":"2019-03-28 12:59:55","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-03-27T00:00:00-04:00","iso_date":"2019-03-27T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"619759":{"id":"619759","type":"image","title":"Early Coda Concept in Augmented Reality","body":null,"created":"1553710231","gmt_created":"2019-03-27 18:10:31","changed":"1553710231","gmt_changed":"2019-03-27 18:10:31","alt":"","file":{"fid":"235963","name":"Coda Concept 
2015.png","image_path":"\/sites\/default\/files\/images\/Coda%20Concept%202015.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Coda%20Concept%202015.png","mime":"image\/png","size":1407544,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Coda%20Concept%202015.png?itok=gNnSsnVv"}}},"media_ids":["619759"],"related_links":[{"url":"https:\/\/youtu.be\/ThoGpLmBJ2o","title":"VIDEO: Early Coda Concept in Augmented Reality"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"}],"categories":[{"id":"131","name":"Economic Development and Policy"},{"id":"179355","name":"Building Construction"},{"id":"142","name":"City Planning, Transportation, and Urban Growth"},{"id":"143","name":"Digital Media and Entertainment"}],"keywords":[],"core_research_areas":[{"id":"39531","name":"Energy and Sustainable Infrastructure"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EGVU Center at Georgia Tech\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E678.231.0787\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"619508":{"#nid":"619508","#data":{"type":"news","title":"3 IC Faculty Members Awarded Promotions","body":[{"value":"\u003Cp\u003EThree tenure awards and promotions for faculty in Georgia Tech\u0026rsquo;s School of Interactive Computing (IC) were announced this week. These appointments will become effective Aug. 
15, 2019 after Board of Regents approval.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E received tenure and was elevated to the position of associate professor. Parikh joined IC in 2016 and also currently works as a research scientist at Facebook AI Research (FAIR). She earned her Ph.D. from Carnegie Mellon University in 2009. Her research focus is in artificial intelligence (AI) at the intersection of machine learning and computer vision, and she has been recently recognized as one of the top women in AI in publications like \u003Cem\u003E\u003Ca href=\u0022https:\/\/www.vogue.com\/projects\/13548844\/women-in-ai\/\u0022\u003EVogue\u003C\/a\u003E\u003C\/em\u003E and \u003Cem\u003E\u003Ca href=\u0022https:\/\/www.forbes.com\/sites\/mariyayao\/2017\/05\/18\/meet-20-incredible-women-advancing-a-i-research\/2\/#3b0a91c84ede\u0022\u003EForbes\u003C\/a\u003E\u003C\/em\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E, who joined IC in 2016, received tenure and an elevation to associate professor. His machine learning and computer vision research has been featured in the \u003Ca href=\u0022https:\/\/www.bostonglobe.com\/ideas\/2015\/04\/01\/how-automatically-detect-most-important-people-photograph\/tZND3z3epWTJu4Gvf9FSRN\/story.html\u0022\u003E\u003Cem\u003EBoston Globe\u003C\/em\u003E\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/www.newsweek.com\/artificial-intelligence-algorithm-taught-recognise-humor-413832\u0022\u003E\u003Cem\u003ENewsweek\u003C\/em\u003E\u003C\/a\u003E, and other media outlets. Batra earned his Ph.D. from Carnegie Mellon in 2010 and is also a FAIR research scientist.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EKaren Liu\u003C\/strong\u003E, who already has tenure, was promoted to the position of full professor. Liu received her Ph.D. from the University of Washington in 2005. 
Her research focus is in computer graphics and robotics, including physics-based animation, character animation, optimal control, reinforcement learning, and computational biomechanics. Along with her students, she founded the physics simulator \u003Ca href=\u0022http:\/\/dartsim.github.io\/\u0022\u003EDART\u003C\/a\u003E, which won the Grand Prize of Open Source Software World Challenge in 2016. Additional research has been featured in places like \u003Cem\u003E\u003Ca href=\u0022https:\/\/www.nbcnews.com\/mach\/science\/these-new-gadgets-could-be-game-changers-senior-living-ncna791841\u0022\u003ENBC News\u003C\/a\u003E\u003C\/em\u003E, among others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAll three faculty are also members of Georgia Tech\u0026#39;s \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003ECenter for Machine Learning\u003C\/a\u003E, \u003Ca href=\u0022http:\/\/gvu.gatech.edu\u0022\u003EGVU Center\u003C\/a\u003E, and \u003Ca href=\u0022http:\/\/robotics.gatech.edu\u0022\u003EInstitute for Robotics and Intelligent Machines\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Each of these faculty members has done tremendous work in Georgia Tech\u0026rsquo;s School of Interactive Computing,\u0026rdquo; IC Chair \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E said. \u0026ldquo;They are recognized not only for their incredible research but also for their teaching and leadership service in the community. 
This is a well-deserved recognition of their hard work and accomplishments.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Devi Parikh and Dhruv Batra were awarded tenure and elevated to associate professor, while Karen Liu was elevated to full professor."}],"uid":"33939","created_gmt":"2019-03-22 15:19:06","changed_gmt":"2019-03-26 15:13:20","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-03-22T00:00:00-04:00","iso_date":"2019-03-22T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"619507":{"id":"619507","type":"image","title":"IC Promotions 2019","body":null,"created":"1553267589","gmt_created":"2019-03-22 15:13:09","changed":"1553267589","gmt_changed":"2019-03-22 15:13:09","alt":"Devi Parikh, Karen Liu, Dhruv Batra","file":{"fid":"235858","name":"devidhruvkaren.png","image_path":"\/sites\/default\/files\/images\/devidhruvkaren.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/devidhruvkaren.png","mime":"image\/png","size":159179,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/devidhruvkaren.png?itok=6rQohDwM"}}},"media_ids":["619507"],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"619523":{"#nid":"619523","#data":{"type":"news","title":"Meet IC: Atlanta Native Matthew Guzdial Merges Passions for Machine Learning and Creativity","body":[{"value":"\u003Cp\u003EThe School of Interactive Computing (IC) is the unique home to one of the widest varieties of computing researchers in the country. Part iSchool and part computer science, IC merges disciplines to address problems at the center of humans and computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn this collaborative environment, IC student researchers are impacting domains including artificial intelligence, robotics, health care, social computing, data visualization and analytics, and more. As a result, their backgrounds are as varied as their research areas.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn this series of Q\u0026amp;As, we\u0026rsquo;d like you to meet some of our talented graduate students. Today, meet \u003Cstrong\u003EMatthew Guzdial\u003C\/strong\u003E, a machine learning and creativity researcher under advisor \u003Cstrong\u003EMark Riedl\u003C\/strong\u003E who has already gained plenty of attention for his work on artificial intelligence in video games.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAdvisor:\u003C\/strong\u003E IC Associate Professor Mark Riedl\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EResearch Focus Areas:\u003C\/strong\u003E Creativity and machine learning\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHometown:\u003C\/strong\u003E Atlanta, Ga.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHigh school:\u003C\/strong\u003E Lakeside High School\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EUndergraduate Degree:\u003C\/strong\u003E B.S. 
in Computational Media at Georgia Tech\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECurrent Degree Program:\u003C\/strong\u003E Ph.D. in Computer Science\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETell us a little bit about your research:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn creativity and machine learning, I\u0026rsquo;m interested both in how we get machine learning approaches to work in creative domains, as well as how we take cognitive models of creativity to improve machine learning approaches.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGiven the focus on creativity, is there something in your research you have created or would like to create?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn terms of a \u0026ldquo;thing\u0026rdquo; I am building, I\u0026rsquo;ve been collaborating with a team of undergrads and a master\u0026rsquo;s student on an intelligent level editor for the past few years (\u003Ca href=\u0022https:\/\/youtu.be\/UkMeM5Ty1lA\u0022\u003Evideo\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1901.06417\u0022\u003Epaper\u003C\/a\u003E). What I\u0026rsquo;d really like, though, is to combine this with our work on automated game generation to create a new, intelligent automated game generation tool that would allow anyone to make a 2-D game in, hopefully, an intuitive way.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDo you have a favorite place to hang out on campus or in the city?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOn campus, I mostly hang out at my desk or on an armchair in my lab. In the city, I\u0026rsquo;m a fan of Bookhouse Pub and the restaurant One Eared Stag.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EIt sounds like you spend a lot of time with research. 
Do you have any other hobbies you like to do in your spare time?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI\u0026rsquo;m an avid runner, so I squeeze in a minimum of one 3-plus-mile run each week and aim for 2-3. I\u0026rsquo;m also hugely into podcasts, with my favorites currently being My Brother, My Brother and Me, Punch Up the Jam, and Friends at the Table.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat is your favorite memory from your years at Georgia Tech?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI was part of this kind of wild production of After the Quake at DramaTech, where we built a visualization for one of the actors that he could \u0026ldquo;conduct\u0026rdquo; with his hands through a Kinect, Processing, and a Projector. That was hugely fun to go see.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat is your proudest accomplishment?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAsk me in, like, six months, and I\u0026rsquo;ll say this Ph.D.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Meet Matthew Guzdial, a machine learning and creativity researcher under advisor Mark Riedl who has already gained plenty of attention for his work on artificial intelligence in video games."}],"uid":"33939","created_gmt":"2019-03-22 19:49:55","changed_gmt":"2019-03-22 19:49:55","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-03-22T00:00:00-04:00","iso_date":"2019-03-22T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"619522":{"id":"619522","type":"image","title":"Matthew Guzdial","body":null,"created":"1553283810","gmt_created":"2019-03-22 19:43:30","changed":"1553283810","gmt_changed":"2019-03-22 19:43:30","alt":"Matthew 
Guzdial","file":{"fid":"235862","name":"Guzdial_Matthew_thumb.jpg","image_path":"\/sites\/default\/files\/images\/Guzdial_Matthew_thumb.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Guzdial_Matthew_thumb.jpg","mime":"image\/jpeg","size":150587,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Guzdial_Matthew_thumb.jpg?itok=ahDxb4K7"}}},"media_ids":["619522"],"related_links":[{"url":"http:\/\/guzdial.com\/","title":"Learn More About Matthew Guzdial\u0027s Research"},{"url":"https:\/\/www.ic.gatech.edu\/news\/612183\/georgia-tech-researchers-develop-ai-can-create-entirely-new-games","title":"Georgia Tech Researchers Develop AI That Can Create Entirely New Games"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"618999":{"#nid":"618999","#data":{"type":"news","title":"Michael Best to Speak at U.N. 
for Release of Report on Digital Gender Equality","body":[{"value":"\u003Cp\u003EAs many around the world celebrate International Women\u0026rsquo;s Day on Friday, March 8, a number of \u003Ca href=\u0022http:\/\/www.gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E faculty and students are making their own contributions to promoting women\u0026rsquo;s rights and gender equality. Among them is Georgia Tech Associate Professor \u003Cstrong\u003EMichael Best\u003C\/strong\u003E, who will speak at the United Nations next week during the formal release of a \u003Ca href=\u0022https:\/\/docs.wixstatic.com\/ugd\/04bfff_e53606000c594423af291b33e47b7277.pdf\u0022\u003Eresearch report\u003C\/a\u003E by the \u003Ca href=\u0022https:\/\/www.equals.org\/\u0022\u003EEQUALS Global Partnership\u003C\/a\u003E, a coalition of more than 90 partners from government, industry, and academia that he helped found in 2015.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBest, who holds appointments in the \u003Ca href=\u0022https:\/\/inta.gatech.edu\/\u0022\u003ESam Nunn School of International Affairs\u003C\/a\u003E and the \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, will offer closing remarks on the report\u0026rsquo;s launch during the 63\u003Csup\u003Erd\u003C\/sup\u003E session of the \u003Ca href=\u0022http:\/\/www.unwomen.org\/en\/csw\u0022\u003ECommission on the Status of Women\u003C\/a\u003E, the principal global intergovernmental body dedicated to the promotion of gender equality and empowerment of women.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe report, titled \u003Cem\u003ETaking Stock: Data and Evidence on Gender Equality in Digital Access, Skills and Leadership\u003C\/em\u003E, highlights the impacts of technology on women in various contexts like jobs and wages, security and privacy, cyber threats, and new technologies such as artificial intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAmong the report\u0026rsquo;s results are a 
few key findings:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EWhile gender gaps are observable in most aspects of information and communications technology (ICT) access, skills, and leadership, the picture is still complex with large regional variations.\u003C\/li\u003E\r\n\t\u003Cli\u003EThere is no one final strategy for eliminating gender digital inequalities.\u003C\/li\u003E\r\n\t\u003Cli\u003EThe dominant approaches to gender equality in ICT access, skills, and leadership mostly frame issues in binary (male\/female) terms, thereby masking the relevance of other pertinent identities.\u003C\/li\u003E\r\n\t\u003Cli\u003ETo ensure privacy and safety as well as full participation in the digital economy, women and girls should have equal opportunities to develop adequate basic and advanced digital skills.\u003C\/li\u003E\r\n\t\u003Cli\u003EDevelopments in digital technologies open new pathways to gender diversity and inclusion; however, lack of attention to gender dynamics hampers the potential for true progress.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This report offers a comprehensive look at the issues affecting women and girls\u0026rsquo; equality in a digital age,\u0026rdquo; Best said. \u0026ldquo;While surfacing many current challenges, it also sketches pathways forward towards achieving digital gender equality.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBest joined Georgia Tech\u0026rsquo;s faculty in 2004 and has directed the \u003Ca href=\u0022https:\/\/cs.unu.edu\/\u0022\u003EUnited Nations University Institute on Computing and Society\u003C\/a\u003E (UNU-CS) in China since 2015. 
He co-founded the EQUALS Global Partnership and the \u003Ca href=\u0022https:\/\/www.equals.org\/research\u0022\u003EEQUALS Research Group\u003C\/a\u003E, the latter of which is responsible for the report.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEQUALS works to reverse the increasing digital gender divide, aiming to close the gap by 2030 and supporting the U.N. Sustainable Development Goals by empowering women through their use of information and communication technologies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;From its inception, the EQUALS Global Partnership has focused on evidence and data to illuminate the intersections of gender with ICTs,\u0026rdquo; Best said. \u0026ldquo;We have positioned the EQUALS Research Group at the forefront of this investigation, shining a light onto the realities and possibilities for women and girls in a digital age.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe report will be released on March 15 at the United Nations Headquarters in New York.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Michael Best will speak at the United Nations next week during the formal release of a research report by the EQUALS Global Partnership, a coalition of more than 90 partners from government, industry, and academia that he helped found in 2015."}],"uid":"33939","created_gmt":"2019-03-08 18:36:20","changed_gmt":"2019-03-08 18:36:20","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-03-08T00:00:00-05:00","iso_date":"2019-03-08T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"618997":{"id":"618997","type":"image","title":"EQUALS graphic","body":null,"created":"1552069697","gmt_created":"2019-03-08 18:28:17","changed":"1552069697","gmt_changed":"2019-03-08 18:28:17","alt":"EQUALS - International Women\u0027s Day 
2019","file":{"fid":"235630","name":"EQUALS.png","image_path":"\/sites\/default\/files\/images\/EQUALS.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/EQUALS.png","mime":"image\/png","size":113788,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/EQUALS.png?itok=UNvk8V5l"}}},"media_ids":["618997"],"related_links":[{"url":"https:\/\/www.iac.gatech.edu\/news-events\/features\/michael-best-united-nations","title":"\u0027They Need Us. And We Need Them.\u0027"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"132","name":"Institute Leadership"},{"id":"134","name":"Student and Faculty"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"151","name":"Policy, Social Sciences, and Liberal Arts"}],"keywords":[{"id":"907","name":"Michael Best"},{"id":"167256","name":"Sam Nunn School of International Affairs"},{"id":"166848","name":"School of Interactive Computing"},{"id":"180738","name":"digital gender equality"},{"id":"86981","name":"gender equality"},{"id":"23281","name":"international women\u0027s day"},{"id":"2628","name":"united nations"},{"id":"10230","name":"equality"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39511","name":"Public Service, Leadership, and Policy"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"618690":{"#nid":"618690","#data":{"type":"news","title":"Ph.D. Candidate Caitlyn Seim Earns Prestigious Neuroscience:Translate Grant From Stanford","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Ph.D. candidate \u003Cstrong\u003ECaitlyn Seim\u003C\/strong\u003E was awarded one of Stanford University\u0026rsquo;s exclusive \u003Ca href=\u0022https:\/\/neuroscience.stanford.edu\/research\/programs\/neurosciencetranslate\u0022\u003ENeuroscience:Translate grants\u003C\/a\u003E. The grant is part of a new program to support translational neuroscience research to find practical solutions for unmet patient needs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim earned the grant based on her research into passive haptic stimulation, which resulted in a glove that could be used to assist in stroke rehabilitation. Stroke is one of the leading causes of disability around the globe, impacting millions of survivors each year.\u0026nbsp; Survivors often lose function in their arms or hands, making it difficult to perform everyday functions like dressing or eating.\u0026nbsp; Spasticity and tone can also cause hands to be involuntarily clenched in a rigid position \u0026ndash; a problem for which there are few effective treatments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESurvivors lack options when it comes to rehabilitation, and existing methods can be strenuous, costly, or painful. 
Building on previous work, Seim is using the funding to investigate a novel stimulation method using a wireless, wearable device that may provide therapy on the go and to patients who do not have access to high-intensity rehabilitation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim started working on this research during her time at Georgia Tech with Professor \u003Cstrong\u003EThad Starner\u003C\/strong\u003E. She\u0026rsquo;s now collaborating with \u003Cstrong\u003EMaarten Lansberg\u003C\/strong\u003E of the Stanford University Medical Center and \u003Cstrong\u003EAllison Okamura\u003C\/strong\u003E, professor of Mechanical Engineering at Stanford. Seim will join Stanford as a postdoctoral researcher this summer, where she will continue to work on this project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim said that she and Starner are launching a company to put the device on the market \u0026ndash; with the goal of translating research outcomes to clinical solutions.\u0026nbsp; \u0026ldquo;If we can help, then it\u0026#39;s all worth it,\u0026rdquo; Seim said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo connect with Seim about this project, you can email her at \u003Ca href=\u0022mailto:ceseim@gatech.edu\u0022\u003Eceseim@gatech.edu\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Seim was awarded for her research into passive haptic stimulation that could assist in stroke recovery."}],"uid":"33939","created_gmt":"2019-03-02 00:58:08","changed_gmt":"2019-03-02 00:58:08","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-03-01T00:00:00-05:00","iso_date":"2019-03-01T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"611755":{"id":"611755","type":"image","title":"Caitlyn Seim - PHL","body":null,"created":"1537470856","gmt_created":"2018-09-20 
19:14:16","changed":"1537470856","gmt_changed":"2018-09-20 19:14:16","alt":"Caitlyn Seim showing haptic glove","file":{"fid":"232896","name":"Seim Banner.jpg","image_path":"\/sites\/default\/files\/images\/Seim%20Banner.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Seim%20Banner.jpg","mime":"image\/jpeg","size":170103,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Seim%20Banner.jpg?itok=QblfJAZi"}}},"media_ids":["611755"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/news\/611757\/good-vibrations-passive-haptic-learning-could-be-key-rehabilitation","title":"Good Vibrations: Passive Haptic Learning Could Be a Key to Rehabilitation"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"104221","name":"passive haptic learning"},{"id":"180696","name":"PHL"},{"id":"167732","name":"Stroke"},{"id":"179165","name":"stroke recovery"},{"id":"170072","name":"Caitlyn Seim"},{"id":"1944","name":"Thad Starner"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"77691","name":"wearable technology"},{"id":"167386","name":"Stanford"},{"id":"1304","name":"neuroscience"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"618567":{"#nid":"618567","#data":{"type":"news","title":"Researchers Use Social Media to Model Stress Following Incidents of Gun Violence on Campus","body":[{"value":"\u003Cp\u003EAn algorithm developed by researchers at \u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E can quantify periods of high stress on college campuses and could better inform appropriate responses by counselors, deans of students, faculty, and student populations themselves.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing Reddit posts following incidents of gun violence on 12 American campuses as a test bed for their algorithm, researchers were able to identify sharp upticks in stress levels in the weeks immediately following these events and also the common words or phrases that increased or decreased during that period.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;You can always make the indirect inference that you\u0026rsquo;re seeing higher stress levels due to a specific event, like on-campus violence,\u0026rdquo; said \u003Ca href=\u0022http:\/\/www.munmund.net\/\u0022\u003E\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E\u003C\/a\u003E, an assistant professor in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E. \u0026ldquo;But what does that mean? Currently, we work on our hunches about the level of impact. This work can provide insight into these types of events by quantifying stress levels. What is the impact, and how?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo study the impact and the algorithm\u0026rsquo;s ability to detect it, De Choudhury and Georgia Tech Ph.D. 
student \u003Ca href=\u0022http:\/\/koustuv.com\/\u0022\u003E\u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E\u003C\/a\u003E brainstormed the types of events that could impact students\u0026rsquo; lives the most. They determined something unique and local to their specific campus, like incidents of violence, would offer an abundance of interaction between students on social media.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo be able to measure stress levels in those time periods immediately following these instances, they built a classifier trained on separate control data \u0026ndash; unrelated posts of high stress (class, crises, etc.) and low stress (general news, frivolous posts about cats, etc.).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EApplying the algorithm to the 12 campus groups, they found that there was, not surprisingly, higher stress surrounding those events. More importantly, though, they were able to identify aspects of that stress that weren\u0026rsquo;t readily available by the simple knowledge that on-campus violence induces a negative response from students. For example, while \u0026ldquo;class\u0026rdquo; was a word that commonly came up in high-stress posts before the incident, in the short period following, any discussion of academics significantly dropped. On the other hand, words that were rarely seen throughout the year \u0026ndash; social words, like \u0026ldquo;friends\u0026rdquo; and \u0026ldquo;people\u0026rdquo; \u0026ndash; suddenly appeared.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;They were words that indicated concern or solidarity, bonding words,\u0026rdquo; De Choudhury said. \u0026ldquo;We can see that there is a different sense of community. 
All of this is actionable, because if class is not a concern at that time, perhaps we need to adapt things at the campus level that can better meet the students\u0026rsquo; needs, like peer support groups or things like that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile the approach was tested only for college campuses encountering gun violence, Saha said that he could imagine a similar approach transferring to other settings. The challenge would be adjusting it to account for the size and makeup of the community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;On campus, they are younger students who already interact on Reddit with each other,\u0026rdquo; he said. \u0026ldquo;If you\u0026rsquo;re talking larger-scale incidents, perhaps nationally, you have a much more diverse community which doesn\u0026rsquo;t all communicate via the same medium or in the same way.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis research was published in the paper \u003Ca href=\u0022http:\/\/koustuv.com\/papers\/PACM_HCI_CSCW2018_Stress.pdf\u0022\u003E\u003Cem\u003EModeling Stress with Social Media Around Incidents of Gun Violence on College Campus\u003C\/em\u003E\u003C\/a\u003E. 
It was presented at the 21\u003Csup\u003Est\u003C\/sup\u003E ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW).\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Using Reddit posts following incidents of gun violence on 12 American campuses as a test bed for their algorithm, researchers were able to identify sharp upticks in stress levels in the weeks immediately following these events."}],"uid":"33939","created_gmt":"2019-02-27 21:06:42","changed_gmt":"2019-02-27 21:06:42","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-02-27T00:00:00-05:00","iso_date":"2019-02-27T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"618566":{"id":"618566","type":"image","title":"Students on Campus","body":null,"created":"1551301507","gmt_created":"2019-02-27 21:05:07","changed":"1551301507","gmt_changed":"2019-02-27 21:05:07","alt":"Students sit together in the grass on campus","file":{"fid":"235453","name":"students on campus.jpg","image_path":"\/sites\/default\/files\/images\/students%20on%20campus.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/students%20on%20campus.jpg","mime":"image\/jpeg","size":218892,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/students%20on%20campus.jpg?itok=j8PeaHTe"}}},"media_ids":["618566"],"related_links":[{"url":"https:\/\/medium.com\/acm-cscw\/modeling-stress-with-social-media-around-incidents-of-gun-violence-on-college-campuses-291e62c79203","title":"Blog: Modeling Stress with Social Media Around Incidents of Gun Violence on College Campuses"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"167543","name":"social 
media"},{"id":"180672","name":"gun violence"},{"id":"167229","name":"stress"},{"id":"180673","name":"modeling stress"},{"id":"180674","name":"social media and gun violence"},{"id":"109","name":"Georgia Tech"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"89321","name":"Munmun De Choudhury"},{"id":"180675","name":"koustuv saha"},{"id":"180676","name":"cscw"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"618556":{"#nid":"618556","#data":{"type":"news","title":"Novel App Uses AI to Guide, Support Cancer Patients","body":[{"value":"\u003Cp\u003EArtificial Intelligence is helping to guide and support some 50 breast cancer patients in rural Georgia through a novel mobile application that gives them personalized recommendations on everything from side effects to insurance.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nThe app, called MyPath, adapts to each stage in a patient\u0026rsquo;s cancer journey. So the information available on the app \u0026ndash; which runs on a tablet computer \u0026ndash; regularly changes based on each patient\u0026rsquo;s progress. Are you scheduled for surgery? 
MyPath will tell you what you need to know the day before.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u0026ldquo;Patients have told us, \u0026lsquo;It just seemed to magically know what I needed,\u0026rsquo;\u0026rdquo; said\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/elizabeth-mynatt\u0022\u003EElizabeth Mynatt\u003C\/a\u003E, principal investigator for the work and Distinguished Professor in the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E\u0026nbsp;at Georgia Tech.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nMynatt, who is also Executive Director of the\u0026nbsp;\u003Ca href=\u0022http:\/\/www.ipat.gatech.edu\/\u0022\u003EInstitute for People and Technology\u003C\/a\u003E, believes that MyPath is the first healthcare app capable of personalization (through its application of AI) for holistic cancer care. In addition to incorporating a patient\u0026rsquo;s medical data, the app also addresses a variety of other relevant issues such as social and emotional needs.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nShe presented the work February 15 at the 2019 meeting of the American Association for the Advancement of Science. The research has been sponsored by the National Cancer Institute.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003ENational Recognition\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nIn January MyPath was recognized by iSchools, a consortium of some 100 institutions worldwide (including Georgia Tech) dedicated to advancing the information field. Maia Jacobs, who recently received her Ph.D. 
from Georgia Tech for her work on MyPath, was named winner of the 2019 iSchools Doctoral Dissertation Award.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nAccording to iSchools, \u0026ldquo;the Award Committee felt [that Jacobs\u0026rsquo; work] was timely and important, and lauded its impact in how patients manage their health.\u0026rdquo; Jacobs, now a postdoctoral fellow at Harvard, is currently exploring how to expand MyPath to other diseases.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nThe work was also honored in 2016 when it was featured in a report to President Barack Obama by the President\u0026rsquo;s Cancer Panel. The report, Improving Cancer-Related Outcomes with Connected Health, aimed to \u0026ldquo;help patients manage their health information and participate in their own care,\u0026rdquo; according to a Georgia Tech story at the time.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003EThe Beginning\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nSix years ago Mynatt\u0026rsquo;s team began working with the Harbin Clinic in Rome, Georgia. \u0026ldquo;They have a tremendous program in holistic cancer care where they recognize that their patients, who are from a large rural area, face a variety of challenges to be able to successfully navigate the cancer journey,\u0026rdquo; Mynatt said.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nBut the Harbin doctors and cancer navigators \u0026ndash; people who help patients through the cancer journey \u0026ndash; wanted a better way to stay connected to patients on a regular basis. The navigators, in particular, found that they tended to interact with patients a great deal at diagnosis, but less frequently over time. 
And that meant that although there are many recommendations for, say, lowering anxiety, they weren\u0026rsquo;t necessarily being communicated.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nSaid Mynatt, \u0026ldquo;We wondered how technology could amplify what these great people are doing.\u0026rdquo;\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003EHow it Works\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nMyPath begins with a mobile library of resources compiled from the American Cancer Society and other reputable organizations. Then, it is personalized with each patient\u0026rsquo;s diagnosis and treatment plan, including the dates for specific procedures. Patients also complete regular surveys that help inform the system \u0026ndash; and caregivers \u0026ndash; of their changing needs and symptoms.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nThe result is a system that provides each patient with resources and suggestions specific to their personal situation. Because MyPath knows, for example, that you have stage 2 breast cancer and will be undergoing a lumpectomy on a specific date, when you click on the category \u0026ldquo;Preparing for Surgery\u0026rdquo; it will suggest relevant articles to prepare you for what\u0026rsquo;s ahead. Have you reported nausea in the system\u0026rsquo;s survey? MyPath will bring your attention to resources that can help combat the side effect. The system also provides quick access to contact information for specific caregivers.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nOther apps \u0026ndash; and the Internet \u0026ndash; aren\u0026rsquo;t personalized. That means slogging through a great deal of often technical information that\u0026rsquo;s not relevant to your situation. 
In contrast, \u0026ldquo;Every day MyPath puts the right resources at your fingertips to help you through your cancer journey,\u0026rdquo; Mynatt said.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003EMore than Medical\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nSome of MyPath\u0026rsquo;s most popular features have nothing to do directly with cancer. Buttons for \u0026ldquo;Emotional Support\u0026rdquo; and \u0026ldquo;Day to Day Matters\u0026rdquo; are regularly consulted by patients. \u0026ldquo;When we asked them about how they used the tablet for healthcare, many patients would talk to us about playing Angry Birds, which they would download to distract them during chemo sessions,\u0026rdquo; Mynatt said.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nMyPath is the second generation of the app. Patient feedback from its predecessor, My Journey Compass, led to changes including the personalization. Development continues. For example, Mynatt\u0026rsquo;s team is hoping to expand the app for use by cancer survivors, who often face additional challenges like hormone replacement therapy. The team is also working on a version that individual patients could download, which would make the app available to many more users.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nThis work is sponsored by the National Cancer Institute, part of the National Institutes of Health, under award RO1 CA195653. 
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Artificial Intelligence is helping to guide and support some 50 breast cancer patients in rural Georgia through a novel mobile application that gives them personalized recommendations on everything from side effects to insurance."}],"uid":"33939","created_gmt":"2019-02-27 18:35:16","changed_gmt":"2019-02-27 18:35:16","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-02-27T00:00:00-05:00","iso_date":"2019-02-27T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"617953":{"id":"617953","type":"image","title":"Tablet computer running MyPath app","body":null,"created":"1550355258","gmt_created":"2019-02-16 22:14:18","changed":"1550355258","gmt_changed":"2019-02-16 22:14:18","alt":"MyPath application on a tablet computer","file":{"fid":"235224","name":"mypath_5799.jpg","image_path":"\/sites\/default\/files\/images\/mypath_5799.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/mypath_5799.jpg","mime":"image\/jpeg","size":218528,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/mypath_5799.jpg?itok=feLnPH1m"}}},"media_ids":["617953"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"172776","name":"MyPath"},{"id":"10989","name":"Beth Mynatt"},{"id":"2493","name":"health care"},{"id":"2835","name":"ai"},{"id":"2556","name":"artificial intelligence"},{"id":"109","name":"Georgia Tech"},{"id":"166848","name":"School of Interactive Computing"},{"id":"12888","name":"IPaT"},{"id":"118671","name":"Maia 
Jacobs"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EElizabeth Thomson\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"617730":{"#nid":"617730","#data":{"type":"news","title":"College to Host More Than 60 Undergraduate Women for Inaugural I.AM.GradComputing","body":[{"value":"\u003Cp\u003ETo support the success of women in computing, Georgia Tech this week is hosting I.AM.GradComputing, a research-focused workshop for undergraduate women in computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe inaugural event begins Thursday and is organized by Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\u0022\u003ECollege of Computing\u003C\/a\u003E, \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, and the College\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/student-life\/gt-computing-community\/oec-office\u0022\u003EOffice of Outreach, Enrollment and Community\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBetween 60 and 70 undergraduate women from institutions in the Southeast are participating thanks to a gift from Google. Among the schools represented will be Georgia Tech, Kennesaw State, Spelman College, and Agnes Scott College. 
Attendance is based on acceptance of an application submitted by interested students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We are encouraged by the trajectory of women who are electing to pursue graduate degrees in computing, but there\u0026rsquo;s so much more left to accomplish,\u0026rdquo; said \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E, chair of the School of Interactive Computing and one of the organizers of the event.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We want to engage with and provide guidance to women in all of these different critical areas like networking and branding and the benefits of a graduate degree. This event is going to be a wonderful opportunity to do that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFollowing a welcome dinner on Thursday, the I.Am.GradComputing workshop will feature a series of sessions on Friday. The sessions will cover relevant topics including:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003ETools and tips on research opportunities\u003C\/li\u003E\r\n\t\u003Cli\u003ENetworking and personal brand building\u003C\/li\u003E\r\n\t\u003Cli\u003ECareer planning\u003C\/li\u003E\r\n\t\u003Cli\u003EBuilding self-confidence\u003C\/li\u003E\r\n\t\u003Cli\u003EAchieving a healthy work-life balance\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EAttendees will have an opportunity to engage with experienced women in computing, like Howard, IC faculty members \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E, \u003Cstrong\u003EBeki Grinter\u003C\/strong\u003E, \u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E, and others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe goal of the workshop, according to Howard, is to better prepare these women to succeed in computing-related careers, and to ultimately increase the number of undergraduate women pursuing graduate degrees in computing-related fields.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI.AM.GradComputing wraps up Saturday with a hackathon centered 
around AI for social good. During this event, scheduled for six hours, students will be encouraged to conceptualize or create an artificial intelligence application that addresses a social issue of their choice.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Between 60 and 70 undergraduate women from institutions in the Southeast are participating thanks to a gift from Google."}],"uid":"33939","created_gmt":"2019-02-13 03:00:48","changed_gmt":"2019-02-13 03:00:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-02-12T00:00:00-05:00","iso_date":"2019-02-12T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"617729":{"id":"617729","type":"image","title":"Women of robotics","body":null,"created":"1550026547","gmt_created":"2019-02-13 02:55:47","changed":"1550026547","gmt_changed":"2019-02-13 02:55:47","alt":"Women of Georgia Tech Robotics","file":{"fid":"235135","name":"women_in_robotics.jpg","image_path":"\/sites\/default\/files\/images\/women_in_robotics.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/women_in_robotics.jpg","mime":"image\/jpeg","size":56633,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/women_in_robotics.jpg?itok=sL93yJ-h"}}},"media_ids":["617729"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"8469","name":"women in computing"},{"id":"208","name":"computing"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"39401","name":"OEC"},{"id":"144291","name":"Office of Outreach Enrollment and Community"},{"id":"180502","name":"graduate 
computing"},{"id":"1051","name":"Computer Science"},{"id":"8471","name":"grace hopper"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"617728":{"#nid":"617728","#data":{"type":"news","title":"Team of Researchers Headed to SXSW EDU to Discuss VR in Education","body":[{"value":"\u003Cp\u003EClassrooms in Cobb County, Ga., are using virtual reality (VR) to venture inside plant cells. Students in Mumbai, India, are using VR to explore the Louvre Museum. But are the learning outcomes actually better for the kids?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech and Stanford University researchers will discuss this and other crucial questions about the benefits and challenges of using VR in the classroom during a panel next month at South by Southwest EDU (SXSW EDU) 2019.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe panel, \u003Cem\u003EVirtually Real: Using Immersive Tech in Education\u003C\/em\u003E, is set for 5 p.m. March 4\u0026nbsp;in Room 11AB of the Austin Convention Center. 
\u003Ca href=\u0022https:\/\/schedule.sxswedu.com\/2019\/events\/PP87095\u0022\u003EFurther information about the panel can be found here.\u003C\/a\u003E It will feature Georgia Tech\u0026rsquo;s \u003Cstrong\u003ENeha Kumar\u003C\/strong\u003E and \u003Cstrong\u003ETamara Pearson\u003C\/strong\u003E, Stanford\u0026rsquo;s \u003Cstrong\u003EAditya Vishwanath\u003C\/strong\u003E, and Cobb County Schools\u0026rsquo; \u003Cstrong\u003ESally Creel\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETeachers, school administrators, and others attending the panel can expect a lively and insightful discussion. The panelists will use their research findings from the Cobb County and Mumbai projects as a springboard to discuss:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003ESocial implications of using VR in the classroom\u003C\/li\u003E\r\n\t\u003Cli\u003EImplications for resource-constrained populations\u003C\/li\u003E\r\n\t\u003Cli\u003EPhysical challenges like dizziness or nausea that can affect users of VR or other immersive technologies\u003C\/li\u003E\r\n\t\u003Cli\u003EHow to maintain engagement when VR is no longer a novel technology\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EAlong with sharing their research and lessons learned, the panelists hope to have an open conversation with attendees about their experiences, questions, or concerns about using VR in the classroom to improve learning outcomes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe SXSW EDU Conference \u0026amp; Festival is an annual event that \u0026ldquo;cultivates and empowers a community of engaged stakeholders to advance teaching and learning.\u0026rdquo; Along with panel sessions for leading educational experts, the four-day event offers attendees workshops, interactive learning experiences, film screenings, early-stage startups, and business and networking 
opportunities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EPanelists\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003ENeha Kumar is an assistant professor, jointly appointed in Georgia Tech\u0026rsquo;s College of Computing and Sam Nunn School of International Affairs. Her research lies at the intersection of human-centered computing and global development.\u003C\/li\u003E\r\n\t\u003Cli\u003EAditya Vishwanath is a Knight-Hennessy Scholar at Stanford University, pursuing his Ph.D. in learning sciences and technology design. He completed his undergraduate degree at Georgia Tech\u0026rsquo;s College of Computing.\u003C\/li\u003E\r\n\t\u003Cli\u003ETamara Pearson is the associate director of school and community engagement at the Center for Education Integrating Science, Mathematics and Computing (CEISMC) at Georgia Tech. Her current work focuses on partnering with schools and districts to help develop innovative curriculum and programs, as well as understanding how to best engage populations historically underrepresented in STEM fields.\u003C\/li\u003E\r\n\t\u003Cli\u003ESally Creel is the STEM and Innovation supervisor at Cobb County Schools. She coordinated implementation of VR resources in the local schools for the team\u0026rsquo;s study, including recruiting classrooms and teachers to participate.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESpread the word!\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWell-attended sessions at SXSW EDU tend to benefit from the strong support of the networks surrounding the speakers themselves, as well as attendees. Help the panelists by spreading the word about their talk on social media. 
\u003Ca href=\u0022https:\/\/www.sxswedu.com\/social-media-marketing-toolkit\/\u0022\u003EYou can access a social media toolkit here.\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The panel, Virtually Real: Using Immersive Tech in Education, is set for 5 p.m. March 4\u00a0in Room 11AB of the Austin Convention Center."}],"uid":"33939","created_gmt":"2019-02-12 23:04:25","changed_gmt":"2019-02-12 23:04:25","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-02-12T00:00:00-05:00","iso_date":"2019-02-12T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"617717":{"id":"617717","type":"image","title":"SXSW EDU Social","body":null,"created":"1550004824","gmt_created":"2019-02-12 20:53:44","changed":"1550004824","gmt_changed":"2019-02-12 20:53:44","alt":"See you at SXSW EDU March 4-7, 2019","file":{"fid":"235127","name":"Screen Shot 2019-02-12 at 3.46.35 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-02-12%20at%203.46.35%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-02-12%20at%203.46.35%20PM.png","mime":"image\/png","size":447594,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-02-12%20at%203.46.35%20PM.png?itok=8Rv7AzZx"}}},"media_ids":["617717"],"related_links":[{"url":"https:\/\/schedule.sxswedu.com\/2019\/events\/PP87095","title":"Virtually Real: Using Immersive Tech in Education"},{"url":"https:\/\/www.cc.gatech.edu\/news\/605000\/vr-taking-students-where-once-only-ms-frizzle-and-magic-school-bus-could","title":"VR Taking Students Where Once Only Ms. 
Frizzle and the Magic School Bus Could"},{"url":"https:\/\/www.cc.gatech.edu\/content\/researchers-work-kids-mumbai-examine-classroom-potential-virtual-reality","title":"Researchers Work with Kids in Mumbai to Examine Classroom Potential of Virtual Reality"},{"url":"https:\/\/www.sxswedu.com\/","title":"SXSW EDU 2019"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"145251","name":"virtual reality"},{"id":"148381","name":"vr"},{"id":"138871","name":"Neha Kumar"},{"id":"177678","name":"aditya vishwanath"},{"id":"180500","name":"Sally Creel"},{"id":"172657","name":"Tamara Pearson"},{"id":"109","name":"Georgia Tech"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"167256","name":"Sam Nunn School of International Affairs"},{"id":"411","name":"CEISMC"},{"id":"174526","name":"Cobb County Schools"},{"id":"180501","name":"VR in education"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"617450":{"#nid":"617450","#data":{"type":"news","title":"More Than 120 Students Participate in Interactivity@GT 2019","body":[{"value":"\u003Cp\u003EGeorgia Tech hosted its annual Interactivity event on Jan. 
31 in the Historic Academy of Medicine.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMore than 120 students and representatives from 61 companies participated in the annual showcase and job fair for graduate students enrolled in one of three master\u0026rsquo;s programs at Georgia Tech \u0026ndash; M.S. in Human-Computer Interaction, M.S. in Digital Media, and Master of Industrial Design.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENew this year, Interactivity, which is presented by the GVU Center and sponsored by Mailchimp, included a traditional job fair. Eighteen companies participated in the fair, which was focused specifically on user experience-related jobs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs in past years, Interactivity kicked off with a morning poster session for students to share research projects with visiting industry partners. After lunch, students took part in \u0026ldquo;one-minute madness,\u0026rdquo; an opportunity for each student to take the stage and give a one-minute elevator pitch about themselves, their interests, and their work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Interactivity is unique because it provides a one-stop shop for companies looking for world-class HCI and UX talent,\u0026rdquo; said \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/richard-henneman\u0022\u003EDick Henneman\u003C\/a\u003E\u003C\/strong\u003E, a professor of the practice in the School of Interactive Computing and the Director of the MS-HCI program. \u0026ldquo;We experimented this year by including a traditional career fair for our MS-HCI, MID, and MSDM students. Judging by the reaction from both students and company recruiters, it was a huge hit that will continue in the future.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInteractivity has proven successful over the years for students looking to enter industry as STEM professionals. 
From 2014-18, in fact, more than 50 percent of graduates from the MS-HCI program took jobs at major companies in five of the top 10 metro areas for STEM professionals \u0026ndash; 28.7 percent to Atlanta, 15.8 percent to San Francisco, Calif., and 6.4 percent to Seattle, Wash.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELearn more in the graphic below, or\u0026nbsp;\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/HCIgrads-2014-2018\/Dashboard1?:embed=y\u0026amp;:display_count=yes\u0026amp;publish=yes:showVizHome=no#2\u0022\u003Eclick the link to interact with the graphic in a new window\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information on Georgia Tech\u0026rsquo;s affiliated master\u0026rsquo;s programs and Interactivity in general, follow the links below:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/mshci.gatech.edu\/\u0022\u003EMaster of Science in Human-Computer Interaction\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/dm.lmc.gatech.edu\/\u0022\u003EMaster of Science in Digital Media\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/id.gatech.edu\/mid\u0022\u003EMaster of Industrial Design\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/interactivity.cc.gatech.edu\/\u0022\u003EInteractivity@GT\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Representatives from 61 companies also attended the annual event for master\u0027s students at Georgia Tech."}],"uid":"33939","created_gmt":"2019-02-06 20:46:12","changed_gmt":"2019-02-07 20:42:21","author":"David 
Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-02-06T00:00:00-05:00","iso_date":"2019-02-06T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"617442":{"id":"617442","type":"image","title":"Interactivity 2019","body":null,"created":"1549484478","gmt_created":"2019-02-06 20:21:18","changed":"1549484478","gmt_changed":"2019-02-06 20:21:18","alt":"Interactivity 2019","file":{"fid":"235019","name":"IMG_2984.JPG","image_path":"\/sites\/default\/files\/images\/IMG_2984.JPG","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_2984.JPG","mime":"image\/jpeg","size":106297,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_2984.JPG?itok=XUNOXP8J"}},"617514":{"id":"617514","type":"image","title":"MS-HCI graduate job placements","body":null,"created":"1549571937","gmt_created":"2019-02-07 20:38:57","changed":"1549571937","gmt_changed":"2019-02-07 20:38:57","alt":"","file":{"fid":"235043","name":"mshci grap placement graphic.png","image_path":"\/sites\/default\/files\/images\/mshci%20grap%20placement%20graphic.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/mshci%20grap%20placement%20graphic.png","mime":"image\/png","size":173005,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/mshci%20grap%20placement%20graphic.png?itok=oBvZjWWY"}}},"media_ids":["617442","617514"],"related_links":[{"url":"https:\/\/public.tableau.com\/views\/HCIgrads-2014-2018\/Dashboard1?:embed=y\u0026:display_count=yes\u0026publish=yes:showVizHome=no#2","title":"MS-HCI Graduate Job Placement Map"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"14646","name":"human-computer interaction"},{"id":"107441","name":"ms-hci"},{"id":"180428","name":"ms digital 
media"},{"id":"124","name":"Digital Media"},{"id":"3128","name":"Industrial Design"},{"id":"180429","name":"dick henneman"},{"id":"2483","name":"interactive computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"},{"id":"177254","name":"GTComputing"},{"id":"109","name":"Georgia Tech"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"617424":{"#nid":"617424","#data":{"type":"news","title":"Two Computing Professors Among Finalists in Dean Search","body":[{"value":"\u003Cp\u003EGeorgia Tech announced today that four finalists have been chosen in the College of Computing dean search.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe search began last summer following the announcement that Zvi Galil, John P. Imlay Jr. 
Dean of Computing, would \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/606918\/dean-zvi-galil-step-down-after-next-academic-year\u0022\u003Estep down as dean at the end of the 2018\/19 academic year\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAmong the finalists announced today are College of Computing Executive Associate Dean and Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~isbell\/\u0022\u003E\u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E\u003C\/a\u003E and \u003Cstrong\u003EEllen\u003C\/strong\u003E \u003Cstrong\u003EZegura\u003C\/strong\u003E, Fleming Chair and Professor in the School of Computer Science.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs part of the final selection process, each candidate will visit campus and present an open seminar addressing their broad vision for the College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe hour-long seminars are open to all students, faculty, and staff. Interested individuals can attend in person, watch real-time via live stream, or watch a post-event video of each candidate presentation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe finalists are included below in order of their campus seminar presentations:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E,\u0026nbsp;professor and executive associate dean for the College of Computing at the Georgia Institute of Technology, will present an open seminar on\u0026nbsp;\u003Cstrong\u003EFeb. 19, at 11 a.m. in Clough Undergraduate Learning Commons, Room 152.\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EKathleen Fisher\u003C\/strong\u003E, chair of the Computer Science Department at Tufts University, will present an open seminar on\u0026nbsp;\u003Cstrong\u003EFeb. 21, at 11 a.m. 
in Clough Undergraduate Learning Commons, Room 152\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003ERadha Poovendran\u003C\/strong\u003E, professor and chair of the Electrical and Computer Engineering Department at the University of Washington, will present an open seminar on\u0026nbsp;\u003Cstrong\u003EFeb. 26, at 11 a.m. in Clough Undergraduate Learning Commons, Room 152.\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EEllen Zegura\u003C\/strong\u003E, Fleming Professor in the School of Computer Science and executive faculty co-director of the Center for Serve-Learn-Sustain at the Georgia Institute of Technology, will present an open seminar on\u0026nbsp;\u003Cstrong\u003EFeb. 28, at 11 a.m.\u003C\/strong\u003E\u0026nbsp;\u003Cstrong\u003Ein Clough Undergraduate Learning Commons, Room 152.\u003C\/strong\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EAdditional details can be found on the College of Computing\u0026nbsp;\u003Ca href=\u0022http:\/\/www.provost.gatech.edu\/dean-computing\u0022\u003Edean search site\u003C\/a\u003E, including each respective candidate\u0026rsquo;s bio and curriculum vitae, as well as links to the seminars and surveys. Note that Georgia Tech login credentials are required to access the live stream and post-event videos. 
Surveys for the College of Computing dean search will be available through midnight on March 3.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Four finalists have been chosen for the College of Computing dean search."}],"uid":"34541","created_gmt":"2019-02-06 18:22:49","changed_gmt":"2019-02-07 20:33:58","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-02-06T00:00:00-05:00","iso_date":"2019-02-06T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"617425":{"id":"617425","type":"image","title":"Dean Search","body":null,"created":"1549477400","gmt_created":"2019-02-06 18:23:20","changed":"1549477400","gmt_changed":"2019-02-06 18:23:20","alt":"Dean Search","file":{"fid":"235014","name":"deansearch.jpeg","image_path":"\/sites\/default\/files\/images\/deansearch.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/deansearch.jpeg","mime":"image\/jpeg","size":29915,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/deansearch.jpeg?itok=oU2DIg0w"}}},"media_ids":["617425"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"},{"id":"606703","name":"Constellations Center"},{"id":"576481","name":"ML@GT"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, News \u0026amp; Media Relations Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:albert.snedeker@cc.gatech.edu\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"617001":{"#nid":"617001","#data":{"type":"news","title":"Fairness in Machine Learning Conference Comes to Atlanta","body":[{"value":"\u003Cp\u003EFairness in\u0026nbsp;machine learning (ML) is becoming one of the most pressing issues in society. This week, more than 500 people are in Atlanta for the\u0026nbsp;Fairness, Accountability, and Transparency (FAT) conference, Jan. 29 through 31,\u0026nbsp;to discuss improving ethics in ML.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs more and more products and services come to rely on artificial intelligence and ML, ethical issues continue to arise. According to School of Computer Science Assistant Professor\u0026nbsp;\u003Ca href=\u0022http:\/\/jamiemorgenstern.com\/\u0022\u003E\u003Cstrong\u003EJamie Morgenstern\u003C\/strong\u003E\u003C\/a\u003E, who is one of the conference\u0026#39;s program chairs, this is because much of the data used to train these systems\u0026nbsp;is\u0026nbsp;historical and often reflects societal biases of the time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/610888\/jamie-morgenstern-wants-bring-fairness-machine-learning\u0022 target=\u0022_blank\u0022\u003E[RELATED:\u0026nbsp;Jamie Morgenstern Wants to Bring Fairness to Machine Learning]\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe FAT conference was established to mitigate these issues by developing awareness of this inherent bias. Morgenstern defines each term as follows:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EFairness:\u003C\/strong\u003E This can also be called predictive equity. 
Systems should do a similarly good job improving services for all groups.\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EAccountability: \u003C\/strong\u003EResearchers should be able to explain why computational systems behave the way they do.\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003ETransparency:\u003C\/strong\u003E A system should be understandable to the population it will serve.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EBecause these issues impact more than just computer science, and\u0026nbsp;ML now touches everything from policy to business, conference attendees\u0026nbsp;include lawyers, policymakers, and a variety\u0026nbsp;of industry representatives.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If we\u0026rsquo;re just having this conversation ourselves as computer scientists, we will invariably get it wrong,\u0026rdquo; Morgenstern said. \u0026ldquo;We want to promote a broad, diverse population to come together, network, and be externally visible in this field.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/616279\/human-rights-may-help-shape-artificial-intelligence-2019\u0022 target=\u0022_blank\u0022\u003E[RELATED:\u0026nbsp;\u0026#39;Human Rights\u0026#39; May Help Shape Artificial Intelligence in 2019]\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENow in its second year, the conference is affiliated with ACM this year. The program chairs are Morgenstern and Data \u0026amp; Society founder and Microsoft Research Principal Researcher \u003Cstrong\u003Edanah boyd\u003C\/strong\u003E. 
Local Chairs Professor \u003Cstrong\u003EDeven Desai\u003C\/strong\u003E of Georgia Tech\u0026#39;s Scheller College of Business and \u003Cstrong\u003EBrandeis Marshall\u003C\/strong\u003E of Spelman College have also been critical to the conference\u0026rsquo;s mission.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech also has a paper at the conference: \u003Ca href=\u0022https:\/\/dl.acm.org\/authorize?N675456\u0022\u003E\u003Cstrong\u003E\u003Cem\u003EA Taxonomy of Ethical Tensions in Inferring Mental Health States from Social Media\u003C\/em\u003E\u003C\/strong\u003E\u003C\/a\u003E by School of Interactive Computing (IC) Ph.D. student \u003Ca href=\u0022http:\/\/steviechancellor.com\/\u0022\u003E\u003Cstrong\u003EStevie Chancellor\u003C\/strong\u003E\u003C\/a\u003E, Dr.\u003Cstrong\u003E Michael Birnbaum\u003C\/strong\u003E, University of Rochester Professor\u003Cstrong\u003E Eric Caine\u003C\/strong\u003E and Associate Professor\u003Cstrong\u003E Vincent Silenzio, \u003C\/strong\u003Eand IC Assistant Professor \u003Ca href=\u0022http:\/\/www.munmund.net\/\u0022\u003E\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E.\u003C\/strong\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"SCS Assistant Professor Jamie Morgenstern acts as program chair for important machine learning conference."}],"uid":"34541","created_gmt":"2019-01-28 21:26:10","changed_gmt":"2019-02-02 02:36:43","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-01-28T00:00:00-05:00","iso_date":"2019-01-28T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"617002":{"id":"617002","type":"image","title":"Scales ","body":null,"created":"1548711313","gmt_created":"2019-01-28 21:35:13","changed":"1548711313","gmt_changed":"2019-01-28 
21:35:13","alt":"Scales","file":{"fid":"234824","name":"2000px-Unbalanced_scales2.svg_.png","image_path":"\/sites\/default\/files\/images\/2000px-Unbalanced_scales2.svg_.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/2000px-Unbalanced_scales2.svg_.png","mime":"image\/png","size":85577,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/2000px-Unbalanced_scales2.svg_.png?itok=HnFA9z_2"}}},"media_ids":["617002"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[{"id":"129","name":"Institute and Campus"}],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:tess.malone@cc.gatech.edu\u0022\u003Etess.malone@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["tess.malone@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"617061":{"#nid":"617061","#data":{"type":"news","title":"See and Say: Abhishek Das Working to Provide Crucial Communication Tools for Intelligent Agents","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Ph.D. student \u003Ca href=\u0022https:\/\/abhishekdas.com\/\u0022\u003E\u003Cstrong\u003EAbhishek Das\u003C\/strong\u003E\u003C\/a\u003E remembers the moment his interests in computer vision and language began to come into focus. It was early in his time as a Ph.D. student when he came across an algorithm that could generate a one-line natural language description of an image with incredible accuracy. 
When he saw the results, it seemed almost magical, he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was blown away because you could give it any image, and it would generate a fairly plausible sentence,\u0026rdquo; he said. \u0026ldquo;I had never seen that before.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESix months later, there were papers being published on question answering, where the algorithm could not only generate a sentence but could even answer questions about the image. He was similarly floored by the impressive results.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe was advised by \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dbatra\/\u0022\u003EDhruv Batra\u003C\/a\u003E\u003C\/strong\u003E and also working closely with \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~parikh\/\u0022\u003EDevi Parikh\u003C\/a\u003E\u003C\/strong\u003E, both assistant professors at Virginia Tech at the time. When they joined Georgia Tech, Das brought his thirst for research in that space to Atlanta, as well. 
Now, nearly two years later, he has published a number of research papers in projects ranging from \u003Ca href=\u0022https:\/\/visualdialog.org\/\u0022\u003Evisual dialogue\u003C\/a\u003E to a task called \u0026ldquo;\u003Ca href=\u0022https:\/\/embodiedqa.org\/\u0022\u003Eembodied question answering\u003C\/a\u003E.\u0026rdquo; He is working toward additional research involving multiple agents, and sees a world not far off that takes advantage of all of this simulated research to develop hardware for assistive tech like in-home robots.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u0026#39;It feels within reach...\u0026#39;\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s a future that has been featured in popular culture for years \u0026ndash; think about \u003Ca href=\u0022http:\/\/thejetsons.wikia.com\/wiki\/Rosey\u0022\u003ERosie, the robot maid who first appeared on \u003Cem\u003EThe Jetsons \u003C\/em\u003Ein 1962\u003C\/a\u003E \u0026ndash; but is one that Das is beginning to see on the horizon.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It feels within reach, the vision that we see in science fiction,\u0026rdquo; he said. \u0026ldquo;Movies of robots that you can talk to or give instructions to.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile people outside the research sphere may see only the cold steel exterior of these imagined robots, developing a viable foundation for them requires many different elements. This includes work in computer vision, which involves analysis of visual information by a machine, and language, which involves written or verbal communication and instruction. 
Das works at the intersection of both domains.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBroadly, his research has been in developing algorithms and intelligent agents that can see, talk, and ultimately act on that understanding in physical environments, taking actions like navigation or executing instructions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/embodiedqa.org\/paper.pdf\u0022\u003EFindings from a recent research project\u003C\/a\u003E were published and \u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=gz2VoDrvX-A\u0026amp;feature=youtu.be\u0026amp;t=1h29m14s\u0022\u003Epresented\u003C\/a\u003E at the \u003Ca href=\u0022http:\/\/cvpr2018.thecvf.com\/\u0022\u003E2018 Computer Vision and Pattern Recognition conference\u003C\/a\u003E in Salt Lake City, Utah. The project explored an idea called embodied question answering, in which an agent is asked a question and must ascertain the answer by moving through and inquiring about its environment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It combines these three modalities: computer vision, language understanding, and reinforcement learning to take actions in this environment,\u0026rdquo; Das said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe application here could be an assistive robot that could take a question or a command \u0026ndash; \u0026ldquo;Where are my keys?\u0026rdquo; for example \u0026ndash; and provide an answer or perform a task based on its understanding of the environment. He\u0026rsquo;s also conducting similar work with multiple agents that could coordinate to perform certain tasks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;m not currently working with the hardware side of things,\u0026rdquo; he said. \u0026ldquo;All of this is simulation, but these are the end goals. The vision is that these will make it to robots with these sorts of capabilities. 
And, more importantly, the algorithms that I\u0026rsquo;m building will hopefully generalize and be useful for a wide variety of tasks.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003EA culture of collaboration\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EDas\u0026rsquo; work has received extensive media attention, and he has had the opportunity to work under some prestigious grants and fellowships. Currently, he is supported by fellowships from Facebook, Adobe, and Snap. He was recently awarded fellowships from Facebook, Microsoft Research, and NVIDIA; he declined the latter two and accepted the Facebook fellowship.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the great benefits, he said, of working at Georgia Tech in this space has been the opportunity to collaborate with individuals who are conducting research in complementary domains.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;On my floor in the College of Computing, there are people who are experts in computer vision, natural language processing, reinforcement learning, in robotics, or other areas, and it\u0026rsquo;s always awesome to bounce ideas off of them,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Just this semester, I was taking (Associate Professor) \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~chernova\/\u0022\u003E\u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E\u003C\/a\u003E\u0026rsquo;s course in human-robot interaction, and we prototyped a version of a tabletop embodied robot that could actually implement a very primitive version of the embodied question answering algorithm. That was a very interesting experience.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDas is gaining new valuable experience this semester, as well. 
Having interned three times at Facebook AI Research, he is spending this semester in London interning with DeepMind, where he will work in areas related to this general space of agents that can see, talk, and act.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"School of Interactive Computing student Abhishek Das has published a number of research papers in projects ranging from visual dialogue to a task called \u201cembodied question answering.\u201d"}],"uid":"33939","created_gmt":"2019-01-30 18:19:28","changed_gmt":"2019-01-30 18:19:28","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-01-30T00:00:00-05:00","iso_date":"2019-01-30T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"617059":{"id":"617059","type":"image","title":"Abhishek Das","body":null,"created":"1548872305","gmt_created":"2019-01-30 18:18:25","changed":"1548872305","gmt_changed":"2019-01-30 18:18:25","alt":"Abhishek Das","file":{"fid":"234841","name":"Abhishek Das.jpeg","image_path":"\/sites\/default\/files\/images\/Abhishek%20Das.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Abhishek%20Das.jpeg","mime":"image\/jpeg","size":36461,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Abhishek%20Das.jpeg?itok=uQ--yhSL"}}},"media_ids":["617059"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"176750","name":"Abhishek Das"},{"id":"11506","name":"computer vision"},{"id":"180344","name":"nlp"},{"id":"23981","name":"natural language processing"},{"id":"173615","name":"dhruv batra"},{"id":"173616","name":"devi parikh"},{"id":"180345","name":"embodied question 
answering"},{"id":"176752","name":"visual dialogue"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"},{"id":"1051","name":"Computer Science"},{"id":"667","name":"robotics"},{"id":"2352","name":"robots"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"616821":{"#nid":"616821","#data":{"type":"news","title":"Seeing is Believing: Atlanta Ranks #7 for STEM Professionals","body":[{"value":"\u003Cp\u003EChoosing a job based on its location is never easy. 
This is particularly true for science, technology, engineering, and math (STEM) professionals who often have multiple job offers in different cities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut thanks to a \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/Atlanta7STEM-friendlycity2019\/Dashboard1?:embed=y\u0026amp;:display_count=yes\u0026amp;publish=yes\u0026amp;:showVizHome=no\u0022 target=\u0022_blank\u0022\u003Enew data visualization created by Georgia Tech\u003C\/a\u003E, comparing the 100 largest metro areas in the United States just got a whole lot easier.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe interactive tool visualizes data compiled and published by personal finance site WalletHub, which was featured in a recent \u003Ca href=\u0022https:\/\/www.ajc.com\/news\/world\/atlanta-named-one-the-best-metro-areas-for-stem-professionals\/5dCFqvf8XOmQ5ZARSV85dL\/\u0022\u003EAtlanta Journal-Constitution story\u003C\/a\u003E. It allows users to easily navigate and understand the rankings for each city in three categories: professional opportunities, STEM-friendliness,\u0026nbsp;and quality of life.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to the data, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/about\/atlanta\u0022 target=\u0022_blank\u0022\u003EAtlanta\u003C\/a\u003E ranks as the #7 top city in the U.S. for STEM professionals. The city ranks #1 for job openings for STEM graduates per capita and #2 for the quality of engineering opportunities.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A new GT Computing data visualization features rankings for the top 100 U.S. cities. 
"}],"uid":"32045","created_gmt":"2019-01-24 16:15:53","changed_gmt":"2019-01-30 18:03:45","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-01-24T00:00:00-05:00","iso_date":"2019-01-24T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"584932":{"id":"584932","type":"image","title":"Coda - Renderings ","body":null,"created":"1481562001","gmt_created":"2016-12-12 17:00:01","changed":"1481562001","gmt_changed":"2016-12-12 17:00:01","alt":"","file":{"fid":"223022","name":"Coda2.Updated.jpg","image_path":"\/sites\/default\/files\/images\/Coda2.Updated.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Coda2.Updated.jpg","mime":"image\/jpeg","size":3036256,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Coda2.Updated.jpg?itok=glMVkxWt"}}},"media_ids":["584932"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"}],"categories":[],"keywords":[{"id":"489","name":"atlanta"},{"id":"2301","name":"entrepreneur"},{"id":"167258","name":"STEM"},{"id":"180290","name":"STEM professionals"},{"id":"46361","name":"GT computing"},{"id":"2556","name":"artificial intelligence"},{"id":"9167","name":"machine learning"},{"id":"667","name":"robotics"},{"id":"145071","name":"fintech"},{"id":"292","name":"Biotech"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Communications Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=ATL%20STEM%20Pros\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"616001":{"#nid":"616001","#data":{"type":"news","title":"AAAI 2019: Charles Isbell Named a 2019 Fellow and Ashok Goel to Give Invited Talk at AI Conference","body":[{"value":"\u003Cp\u003EProfessors and students from \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EThe Machine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E are kicking off the New Year presenting some of their latest research at the 33\u003Csup\u003Erd\u003C\/sup\u003E AAAI Conference on Artificial Intelligence (AAAI-19) in Honolulu, Hawaii, Jan. 27 through Feb. 1.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFocused solely on artificial intelligence, the conference brings together more than 2,000 artificial intelligence (AI) researchers from academia and industry.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the highlights of the conference for Georgia Tech will be the recognition of\u003Cstrong\u003E Charles Isbell\u003C\/strong\u003E as a \u003Ca href=\u0022https:\/\/twitter.com\/RealAAAI\/status\/1072132592017625088\u0022\u003E2019 AAAI Fellow\u003C\/a\u003E. Isbell, Executive Associate Dean for the \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/\u0022\u003ECollege of Computing\u003C\/a\u003E, is being recognized for his more than two decades of significant and sustained technical contributions to the field of AI.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlso well known for \u003Ca href=\u0022https:\/\/www.popsci.com\/heres-how-an-ai-tricked-students-into-thinking-it-was-their-ta\u0022\u003Ehis contributions to AI\u003C\/a\u003E, ML@GT\u0026rsquo;s \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E is one of the conference\u0026rsquo;s invited speakers and will be discussing \u003Cem\u003EExperiments in Teaching AI\u003C\/em\u003E. 
In his talk, Goel will present several experiments on teaching cognitive systems in online and blended learning settings. Goel \u0026shy;\u0026ndash; a professor in the \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing at Georgia Tech\u003C\/a\u003E \u0026ndash; will also share results and draw out some general principles for teaching AI, as well as using AI to teach AI.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAAAI-19 will also feature five Georgia Tech-led research papers. According to the conference website, accepted papers touch on a variety of topics within the field including natural language processing, robotics, deep learning, and knowledge representation, and can be applied to transportation, commerce, sustainability, healthcare, and other important industries.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s five papers are:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~isbell\/papers\/aaai2019composable.pdf\u0022\u003EComposable Modular Reinforcement Learning\u003C\/a\u003E\u003C\/em\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1804.04164\u0022\u003EUnderstanding Story Characters, Movie Actors and Their Versatility with Gaussian Representations\u003C\/a\u003E\u003C\/em\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1811.05831.pdf\u0022\u003ERevisiting Projection-Free Optimization For Strongly Convex Constraint Sets\u003C\/a\u003E\u003C\/em\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1711.06232.pdf\u0022\u003EA Novel Framework for Robustness Analysis of Visual QA Models\u003C\/a\u003E\u003C\/em\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1809.01852.pdf\u0022\u003EGAMENet: Graph Augmented MEmory Networks for Recommending 
Medication Combination\u003C\/a\u003E\u003C\/em\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022https:\/\/sites.google.com\/view\/kegworkshop\/\u0022\u003EKnowledge Extraction from Games\u003C\/a\u003E workshop taking place on Jan. 27 was organized by Georgia Tech Computer Science Ph.D. candidate \u003Cstrong\u003EMatthew Guzdial \u003C\/strong\u003Eand his peers from Pomona College and Drexel University. The workshop will explore approaches to, and questions about, the automated extraction of design elements, music, character graphics, and other \u0026ldquo;knowledge\u0026rdquo; from games.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech will present five papers and bring home several big honors at AAAI 2019."}],"uid":"34773","created_gmt":"2019-01-07 18:37:15","changed_gmt":"2019-01-30 16:39:23","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-01-17T00:00:00-05:00","iso_date":"2019-01-17T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"616622":{"id":"616622","type":"image","title":"Ashok Goel will present a keynote speech and Charles Isbell will be honored as a 2019 Fellow at AAAI.","body":null,"created":"1547822281","gmt_created":"2019-01-18 14:38:01","changed":"1547822281","gmt_changed":"2019-01-18 14:38:01","alt":"","file":{"fid":"234688","name":"aaai.jpg","image_path":"\/sites\/default\/files\/images\/aaai.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/aaai.jpg","mime":"image\/jpeg","size":803604,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/aaai.jpg?itok=AmF_umWt"}}},"media_ids":["616622"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU 
Center"},{"id":"606703","name":"Constellations Center"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"616279":{"#nid":"616279","#data":{"type":"news","title":"\u0027Human Rights\u0027 May Help Shape Artificial Intelligence in 2019","body":[{"value":"\u003Cp\u003EEthics and accountability will be among the most significant challenges for artificial intelligence (AI) in 2019, according to a survey of researchers at Georgia Tech\u0026rsquo;s College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn response to an email query about AI developments that can be expected in 2019, most of the researchers \u0026ndash; whether talking about \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003Emachine learning\u003C\/a\u003E (ML), \u003Ca href=\u0022http:\/\/www.robotics.gatech.edu\/\u0022\u003Erobotics\u003C\/a\u003E, \u003Ca href=\u0022http:\/\/vis.gatech.edu\/\u0022\u003Edata visualizations\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/gtnlp.wordpress.com\/\u0022\u003Enatural language processing\u003C\/a\u003E, or other facets of AI \u0026ndash; touched on the growing importance of recognizing the needs of people in AI systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In 2019, I hope we will see AI researchers and practitioners start to frame the debate about proper and improper uses of artificial intelligence and machine learning in terms of human rights,\u0026rdquo; said Associate Professor 
\u003Ca href=\u0022http:\/\/eilab.gatech.edu\/mark-riedl\u0022\u003E\u003Cstrong\u003EMark Riedl\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/youtu.be\/o-YLQJ-oRqE\u0022 target=\u0022_blank\u0022\u003E[RELATED: Is AI Coming For My Job?]\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;More and more, interpretability and fairness are being recognized as critical issues to address to ensure AI appropriately interacts with society,\u0026rdquo; said Ph.D. student\u0026nbsp;\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/fredhohman.com\/\u0022\u003EFred Hohman\u003C\/a\u003E\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003ETaking on algorithmic bias\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EQuestions about the rights of end users of AI-enabled services and products are becoming a priority, but Riedl said more is needed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Companies are making progress in recognizing that AI systems may be biased in prejudicial ways. [However,] we need to start talking about the next step: remedy. 
How do people seek remedy if they believe an AI system made a wrong decision?\u0026rdquo; said Riedl.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAssistant Professor \u003Ca href=\u0022http:\/\/jamiemorgenstern.com\/\u0022\u003E\u003Cstrong\u003EJamie Morgenstern\u003C\/strong\u003E\u003C\/a\u003E sees algorithmic bias as an ongoing concern in 2019 and gave banking as an example of an industry that may be in the news for its algorithmic decision-making.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I project that we\u0026rsquo;ll have more high-profile examples of financial systems that use machine learning having worse rates of lending to women, people of color, and other communities historically underrepresented in the \u0026lsquo;standard\u0026rsquo; American economic system,\u0026rdquo; Morgenstern said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/615576\/georgia-tech-researchers-working-improve-fairness-ml-pipeline\u0022 target=\u0022_blank\u0022\u003E[RELATED:\u0026nbsp;Researchers Working To Improve Fairness in the ML Pipeline]\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn recent years corporate responses to cases of bias have been hit or miss, but Assistant Professor \u003Ca href=\u0022http:\/\/www.munmund.net\/\u0022\u003E\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E\u003C\/a\u003E said 2019 may see a shift in how tech companies balance their shareholders\u0026rsquo; interests with the interests of their customers and society.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;[Companies] will be increasingly subject to governmental regulation and will be forced to come up with safeguards to address misuse and abuse of their technologies, and will even consider broader partnerships with their market competitors to achieve this. 
For some corporations, business interests may take a backseat to ethics until they regain customer trust,\u0026rdquo; said De Choudhury.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EWorking toward more transparency\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EOne way companies can regain that trust is through sharing their algorithms with the public, our experts said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Developers tend to walk around feeling objective because \u0026lsquo;it\u0026rsquo;s the algorithm that is determining the answer\u0026rsquo;. Moving forward, I believe that the algorithms will have to be increasingly \u0026lsquo;inspectable\u0026rsquo; and developers will have to explain their answers,\u0026rdquo; said Executive Associate Dean and Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/fac\/Charles.Isbell\/\u0022\u003E\u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. student\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~ypinter3\/\u0022\u003E\u003Cstrong\u003EYuval Pinter\u003C\/strong\u003E\u003C\/a\u003E agreed.
In the coming year, \u0026ldquo;[I] think we will see that researchers are trying to [develop] techniques and tests that can help us to better understand what\u0026rsquo;s going on in the actual wiring of our very fancy machine learning models.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is not only for curiosity but also because legal applications or regulation in various countries are starting to require that algorithmic decision-making programs be able to explain why they are doing what they are doing,\u0026rdquo; said Pinter.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERegents\u0026rsquo; Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/aimosaic\/faculty\/arkin\/\u0022\u003E\u003Cstrong\u003ERon Arkin\u003C\/strong\u003E\u003C\/a\u003E believes that these concerns are becoming more central precisely because artificial intelligence will continue to grow in importance in our everyday lives.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/podcasts\/ep-1-pt-1-whos-behind-wheel\u0022 target=\u0022_blank\u0022\u003E[RELATED: Who\u0026#39;s Behind the Wheel?]\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Despite continued hype and omnipresent doomsayers, panic and fear over the growth of AI and robotics should begin to subside in 2019 as the benefits to people\u0026rsquo;s lives are becoming more apparent to the world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;However, I expect to see lawyers jumping into the fray so we may also see lawsuits determining policy for self-driving cars [and other applications] more so than government regulation or the legal system,\u0026rdquo; said Arkin.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"Georgia Tech experts highlight need to address bias and transparency in ongoing debate about role of AI"}],"field_summary":"","field_summary_sentence":[{"value":"Georgia Tech researchers say ethics 
and transparency are likely top 2019 trends in the burgeoning field of AI."}],"uid":"32045","created_gmt":"2019-01-11 20:36:29","changed_gmt":"2019-01-25 15:27:43","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-01-15T00:00:00-05:00","iso_date":"2019-01-15T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"616435":{"id":"616435","type":"image","title":"GT Computing 2019 AI Predictions","body":null,"created":"1547573803","gmt_created":"2019-01-15 17:36:43","changed":"1547573803","gmt_changed":"2019-01-15 17:36:43","alt":"GT Computing 2019 AI Predictions","file":{"fid":"234636","name":"Predictions rotator_final main.png","image_path":"\/sites\/default\/files\/images\/Predictions%20rotator_final%20main.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Predictions%20rotator_final%20main.png","mime":"image\/png","size":176681,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Predictions%20rotator_final%20main.png?itok=Y0ssml1r"}}},"media_ids":["616435"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"545781","name":"Institute for Data Engineering and Science"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[{"id":"2556","name":"artificial intelligence"},{"id":"9167","name":"machine learning"},{"id":"180204","name":"algorithmic bias"},{"id":"2947","name":"transparency"},{"id":"180205","name":"riedl"},{"id":"180206","name":"hohman"},{"id":"175631","name":"isbell"},{"id":"180207","name":"de choudhury"},{"id":"180208","name":"morgenstern"},{"id":"180209","name":"arkin"},{"id":"180210","name":"2019 trends"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"},{"id":"39521","name":"Robotics"},{"id":"39541","name":"Systems"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Communications Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=2019%20AI%20Predictions\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"615833":{"#nid":"615833","#data":{"type":"news","title":"Seth Hutchinson Named New Executive Director of IRIM","body":[{"value":"\u003Cp\u003EThe Georgia Institute of Technology has selected \u003Cstrong\u003ESeth Hutchinson\u003C\/strong\u003E as the new executive director of the\u0026nbsp;\u003Ca href=\u0022http:\/\/www.robotics.gatech.edu\/\u0022\u003EInstitute for Robotics and Intelligent Machines\u003C\/a\u003E\u0026nbsp;(IRIM).\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~seth\/\u0022\u003EHutchinson\u003C\/a\u003E\u0026nbsp;is a professor and KUKA Chair for Robotics in Georgia Tech\u0026rsquo;s College of Computing and has served as associate director of IRIM.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBefore joining Georgia Tech in January 2018, he was a professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign. Hutchinson holds a bachelor of science, master of science and Ph.D. 
in electrical engineering from Purdue University.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Seth is internationally known for his work in robotics as evidenced by his more than 200 publications, his editor-in-chief role of the\u0026nbsp;\u003Cem\u003EIEEE Transactions on Robotics\u003C\/em\u003E\u0026nbsp;and his recent selection as president-elect of the IEEE Robotics and Automation Society,\u0026rdquo; said Chaouki Abdallah, Georgia Tech\u0026rsquo;s executive vice president for research. \u0026ldquo;I am pleased that he will be the new executive director of Georgia Tech\u0026rsquo;s Institute for Robotics and Intelligent Machines, and I look forward to working with him toward the goal of making Georgia Tech the leader in robotics, autonomy and manufacturing.\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHutchinson\u0026rsquo;s research interests lie in vision-based control, motion planning, planning under uncertainty, pursuit-evasion, localization and mapping, locomotion and bio-inspired robotics. Hutchinson is the coauthor of two books, \u0026ldquo;\u003Cem\u003EPrinciples of Robot Motion - Theory, Algorithms, and Implementations\u003C\/em\u003E,\u0026rdquo; and \u0026ldquo;\u003Cem\u003ERobot Modeling and Control\u003C\/em\u003E.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The robotics research happening here at Georgia Tech is among the best in the world, from actuators to high-level reasoning,\u0026rdquo; he said. 
\u0026ldquo;I honestly cannot think of a place I\u0026rsquo;d rather be right now than here, working with this group of people.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAt Georgia Tech, IRIM serves as an umbrella under which robotics researchers, educators and students from across campus can come together to advance the many high-powered and diverse robotics activities.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIRIM\u0026rsquo;s mission is to create new and exciting opportunities for faculty collaboration; educate the next generation of robotics experts, entrepreneurs, and academic leaders; and partner with industry and government to pursue truly transformative robotics research. IRIM serves more than 90 faculty members, 180 graduate students and 40 robotics labs. The robotics program at Georgia Tech attracts more than $60 million in research annually.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Hutchinson\u00a0is a professor and KUKA Chair for Robotics in Georgia Tech\u2019s College of Computing and has served as associate director of IRIM."}],"uid":"33939","created_gmt":"2019-01-03 17:56:55","changed_gmt":"2019-01-03 17:56:55","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-01-03T00:00:00-05:00","iso_date":"2019-01-03T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"615814":{"id":"615814","type":"image","title":"Seth Hutchinson, executive director of IRIM","body":null,"created":"1546526715","gmt_created":"2019-01-03 14:45:15","changed":"1546526715","gmt_changed":"2019-01-03 14:45:15","alt":"Seth Hutchinson with robotics 
lab","file":{"fid":"234441","name":"seth-hutchinson-9688.jpg","image_path":"\/sites\/default\/files\/images\/seth-hutchinson-9688.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/seth-hutchinson-9688.jpg","mime":"image\/jpeg","size":414202,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/seth-hutchinson-9688.jpg?itok=A2UVVxL3"}}},"media_ids":["615814"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"129","name":"Institute and Campus"},{"id":"132","name":"Institute Leadership"},{"id":"134","name":"Student and Faculty"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"169760","name":"Seth Hutchinson"},{"id":"78271","name":"IRIM"},{"id":"78811","name":"Institute for Robotics and Intelligent Machines"},{"id":"180037","name":"IRIM director"},{"id":"667","name":"robotics"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJohn Toon\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch News\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:jtoon@gatech.edu\u0022\u003Ejtoon@gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"615418":{"#nid":"615418","#data":{"type":"news","title":"Assistant Professor Dhruv Batra Earns Prestigious ECASE-Army Award","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Assistant Professor \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E was recently selected as a 
recipient of the prestigious Early Career Award for Scientists and Engineers (ECASE-Army) by the Army Research Office, providing five years\u0026rsquo; worth of research funding to make artificial intelligence (AI) systems more transparent, explainable, and trustworthy.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award, which provides a total of $1 million over the course of the grant, comes as a result of Batra\u0026rsquo;s selection for a similar early-career award by the Army Research Office Young Investigator Program in 2014.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research Batra\u0026rsquo;s lab will pursue with the funding addresses a fundamental challenge in the development of AI systems \u0026ndash; their \u0026ldquo;black-box\u0026rdquo; nature, the consequent difficulty humans face in identifying why or how AI systems fail, and how to improve upon those technologies. When a self-driving car from a major tech company, for example, suffered its first fatality in 2015, legal and regulatory agencies understandably questioned what went wrong. The challenge at the time was providing a sufficient answer to that question.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Your response can\u0026rsquo;t just be, \u0026lsquo;Well, there was this machine learning box in there, and it just didn\u0026rsquo;t detect the car. We don\u0026rsquo;t know why,\u0026rsquo;\u0026rdquo; said Batra, who is also a member of the \u003Ca href=\u0022http:\/\/ml.gatech.edu\u0022\u003EMachine Learning\u003C\/a\u003E and \u003Ca href=\u0022http:\/\/gvu.gatech.edu\u0022\u003EGVU\u003C\/a\u003E Centers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBatra\u0026rsquo;s research aims to create AI systems that can more readily explain what they do and why. This could come in the form of natural language or visual explanations, both of which \u0026ndash; computer vision and natural language processing \u0026ndash; are central areas of focus in Batra\u0026rsquo;s lab.
The machine could, for example, identify regions in an image that provide support for its predictions, potentially assisting a user\u0026rsquo;s understanding of what the machine can or cannot do.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s an important area of study for a few reasons, Batra said. He classifies AI technology into three levels of maturity:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003ELevel 1 is technology that is in its infancy. It is not near deployment to everyday users, and the consumers of the technology are researchers. The goal for transparency and explanation is to help researchers and developers understand the failure modes and current limitations, and deduce how to improve the technology \u0026ndash; \u0026ldquo;actionable insight,\u0026rdquo; as Batra called it.\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003ELevel 2 is when things are working to a degree, enough so that the technology can and has been deployed.\u003Cbr \/\u003E\r\n\t\u003Cbr \/\u003E\r\n\t\u0026ldquo;The technology may be mature in a narrow range, and you can ship the product,\u0026rdquo; Batra said. \u0026ldquo;Like face detection or fingerprint technology. It\u0026rsquo;s built into products and being used at agencies, airports, or other places.\u0026rdquo;\u003Cbr \/\u003E\r\n\t\u003Cbr \/\u003E\r\n\tIn such cases, you want explanations and interpretability that help build appropriate trust with users. Users can understand when the system reliably works and when it might not work \u0026ndash; face detection in bad lighting, for example \u0026ndash; and make efforts to use it in a more appropriate setting.\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003ELevel 3 is typically a fairly narrow category where the AI is better \u0026ndash; sometimes significantly so \u0026ndash; than the human. Batra used chess-playing and Go-playing bots as an example.
The best chess-playing bots convincingly outperform the best humans and reliably hand a resounding defeat to the average human player.\u003Cbr \/\u003E\r\n\t\u003Cbr \/\u003E\r\n\t\u0026ldquo;We already know bots play much better than humans,\u0026rdquo; he said. \u0026ldquo;In such cases, you don\u0026rsquo;t need to improve the machine and you already trust its skill level. You want the machine to give you explanations not so that you can improve the AI, but so that you can improve yourself.\u0026rdquo;\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EBatra envisions scenarios where the techniques his lab develops could assist at all three levels, but the experiments will take place between Levels 1 and 2. They will work in Visual Question Answering \u0026ndash; in which agents answer natural language questions about visual content \u0026ndash; and other areas of maturity that may reach the product level in five or more years.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe funding will begin for Batra in January. Batra has served as an assistant professor at Georgia Tech since Fall 2016.
\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dbatra\/\u0022\u003EVisit his website for more information about his research.\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The award will provide $1 million worth of funding over the course of the next five years."}],"uid":"33939","created_gmt":"2018-12-14 18:39:05","changed_gmt":"2018-12-14 18:39:05","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-12-14T00:00:00-05:00","iso_date":"2018-12-14T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"586461":{"id":"586461","type":"image","title":"Dhruv Batra","body":null,"created":"1485377710","gmt_created":"2017-01-25 20:55:10","changed":"1485377710","gmt_changed":"2017-01-25 20:55:10","alt":"","file":{"fid":"223509","name":"DhruvBatra.jpg","image_path":"\/sites\/default\/files\/images\/DhruvBatra.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/DhruvBatra.jpg","mime":"image\/jpeg","size":82240,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/DhruvBatra.jpg?itok=D762Jyi-"}}},"media_ids":["586461"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"173615","name":"dhruv batra"},{"id":"179995","name":"ecase-army"},{"id":"1633","name":"PECASE"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"},{"id":"2556","name":"artificial intelligence"},{"id":"173614","name":"visual question answering"},{"id":"179996","name":"VQA"},{"id":"2835","name":"ai"},{"id":"179997","name":"explainable AI"},{"id":"8494","name":"HCI"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"615032":{"#nid":"615032","#data":{"type":"news","title":"Computing Professors Recognized With Prestigious ACM Fellowships","body":[{"value":"\u003Cp\u003ETwo Georgia Tech College of Computing faculty members have been named as Fellows of the Association for Computing Machinery (ACM).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn an announcement made today, Executive Associate Dean \u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E and School of Interactive Computing Professor \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E were named as two of 56 ACM Fellows selected for 2018.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to the ACM news release, \u0026ldquo;the accomplishments of the 2018 ACM Fellows underpin the technologies that define the digital age and greatly impact our professional and personal lives. 
ACM Fellows are composed of an elite group that represents less than 1 percent of the Association\u0026rsquo;s global membership.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIsbell, a Georgia Tech alumnus, was named as an ACM Fellow \u0026ldquo;for contributions to interactive machine learning; and for contributions to increasing access and diversity in computing.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe organization selected Bruckman for her \u0026ldquo;contributions to collaborative computing and foundational work in Internet research ethics.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In society, when we identify our tech leaders, we often think of men and women in industry who have made technologies pervasive while building major corporations,\u0026rdquo; said ACM President Cherri M. Pancake. \u0026ldquo;At the same time, the dedication, collaborative spirit and creativity of the computing professionals who initially conceived and developed these technologies goes unsung. The ACM Fellows program publicly recognizes the people who made key contributions to the technologies we enjoy. Even when their work did not directly result in a specific technology, they have made major theoretical contributions that have advanced the science of computing. 
We are honored to add a new class of Fellows to ACM\u0026rsquo;s ranks and we look forward to the guidance and counsel they will provide to our organization.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUnderscoring ACM\u0026rsquo;s global reach, the 2018 Fellows hail from universities, companies and research centers in Finland, Greece, Israel, Sweden, Switzerland, and the US.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe 2018 Fellows have been cited for numerous contributions in areas including accessibility, augmented reality, algorithmic game theory, data mining, storage, software and the World Wide Web.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EACM will formally recognize its 2018 Fellows at the annual Awards Banquet, to be held in San Francisco on June 15, 2019. Additional information about the 2018 ACM Fellows, as well as previous ACM Fellows, is available through the ACM Fellows site.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"In an announcement made today, Executive Associate Dean Charles Isbell and School of Interactive Computing Professor Amy Bruckman were named as two of 56 ACM Fellows selected for 2018."}],"uid":"33939","created_gmt":"2018-12-05 21:19:31","changed_gmt":"2018-12-05 21:19:31","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-12-05T00:00:00-05:00","iso_date":"2018-12-05T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"615031":{"id":"615031","type":"image","title":"ACM Fellows","body":null,"created":"1544044546","gmt_created":"2018-12-05 21:15:46","changed":"1544044546","gmt_changed":"2018-12-05 21:15:46","alt":"Amy Bruckman and Charles Isbell","file":{"fid":"234189","name":"ACM 
Fellows.jpg","image_path":"\/sites\/default\/files\/images\/ACM%20Fellows.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ACM%20Fellows.jpg","mime":"image\/jpeg","size":52855,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ACM%20Fellows.jpg?itok=UFx9qCLN"}}},"media_ids":["615031"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"8472","name":"amy bruckman"},{"id":"10664","name":"charles isbell"},{"id":"3047","name":"ACM"},{"id":"113911","name":"ACM Fellows"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"208","name":"computing"},{"id":"172908","name":"Association for Computing Machinery"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBen Snedeker\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"613667":{"#nid":"613667","#data":{"type":"news","title":"Georgia Tech Ph.D. Student Wins Best Paper Honorable Mention at VISxAI 2018","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cse.gatech.edu\/\u0022\u003EGeorgia Tech Computational Science and Engineering (CSE)\u003C\/a\u003E Ph.D. student \u003Cstrong\u003EFred Hohman\u003C\/strong\u003E was recently recognized with an honorable mention for best paper at this year\u0026rsquo;s\u0026nbsp;VISxAI workshop. 
The workshop is a part of the\u0026nbsp;\u003Ca href=\u0022http:\/\/ieeevis.org\/year\/2018\/welcome\u0022\u003EIEEE VIS 2018\u003C\/a\u003E conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHohman\u0026rsquo;s \u0026ldquo;explorable\u0026rdquo; article \u003Ca href=\u0022https:\/\/idyll.pub\/post\/dimensionality-reduction-293e465c2a3443e8941b016d\/\u0022\u003EThe Beginner\u0026rsquo;s Guide to Dimensionality Reduction\u003C\/a\u003E was created in collaboration with \u003Cstrong\u003EMatt Conlen\u003C\/strong\u003E of the University of Washington. Using a dataset of artworks from the Metropolitan Museum of Art in New York City, Hohman and Conlen explore the methods that data scientists use to visualize high-dimensional data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVisualizing the myriad connections between all of the different features of each artwork in a high-dimensional graph could provide new insights. However, as Hohman says in the article, humans can\u0026rsquo;t see so many dimensions all at once.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDimensionality reduction algorithms reduce the number of random variables by collecting a set of principal variables that retain the variation present in the data. This allows the data to be presented in fewer dimensions, which can be more easily processed by human viewers. This kind of projection is called an\u0026nbsp;\u003Cem\u003Eembedding\u003C\/em\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe guide teaches users about embeddings and compares some of the most popular dimensionality reduction algorithms used today to create them. The article also contains a list of pros and cons for each of the algorithms to help readers use this technique for their own data.
All of the algorithms mentioned are open-source Python implementations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Explorable and interactive articles are a great medium for teaching concepts that haven\u0026rsquo;t seen much usage and attention in academia yet,\u0026rdquo; said Hohman. \u0026ldquo;It\u0026rsquo;s really great to see recognition for our article, which helps people learn and engage with complicated concepts through interactive visualizations that are easily accessible on the web.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIEEE VIS is the flagship conference on visualization and visual analytics. Hohman was also a panelist at this year\u0026rsquo;s event, and his advisor, CSE Associate Professor \u003Cstrong\u003EPolo Chau\u003C\/strong\u003E, served as a co-organizer of VISxAI. IEEE VIS was held Oct. 21-26 in Berlin, Germany.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information on Georgia Tech\u0026rsquo;s presence at IEEE VIS, explore highlights with the \u003Ca href=\u0022https:\/\/gvu.gatech.edu\/vis-2018\u0022\u003EGVU Center\u0026rsquo;s interactive overview.\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Hohman and Conlen demonstrate how artwork from the Metropolitan Museum of Art can be categorized using machine learning techniques."}],"uid":"34773","created_gmt":"2018-11-01 19:44:23","changed_gmt":"2018-12-03 17:19:59","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-11-01T00:00:00-04:00","iso_date":"2018-11-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"613693":{"id":"613693","type":"image","title":"ML@GT Ph.D.
student Fred Hohman collaborated with Matt Conlen of the University of Washington to create an explorable paper about high-dimensional data visualization.","body":null,"created":"1541105775","gmt_created":"2018-11-01 20:56:15","changed":"1541605649","gmt_changed":"2018-11-07 15:47:29","alt":"","file":{"fid":"233722","name":"me6-1 copy.jpg","image_path":"\/sites\/default\/files\/images\/me6-1%20copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/me6-1%20copy.jpg","mime":"image\/jpeg","size":292177,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/me6-1%20copy.jpg?itok=_0gSQOz4"}}},"media_ids":["613693"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"614861":{"#nid":"614861","#data":{"type":"news","title":"New IC Assistant Professor Matthew Gombolay Takes Flight at Georgia Tech","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Assistant Professor \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/matthew-gombolay\u0022\u003EMatthew Gombolay\u003C\/a\u003E\u003C\/strong\u003E was always interested in space and aviation. 
He had taken some flying lessons as a teen and after college decided that he wanted to finish his pilot certification. He had received some prodding from his then-girlfriend \u0026ndash; now wife \u0026ndash; in the form of a flight lesson in the Washington, D.C. area.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlthough he had taken classes when he was around 15 or 16 years old, he treated it like a brand-new experience.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It kind of was,\u0026rdquo; said Gombolay, who had backed out of his lessons when he was younger after being told he was ready to fly solo about eight hours in. \u0026ldquo;I got a little shy and embarrassed and quiet, but I always wanted to do it.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd he did, receiving his certification after finishing his undergraduate degree. It was everything he had hoped. It was a different experience than his studies and his research, which is mostly an exercise of his mental capabilities. Flying required mental effort, but also physical \u0026ndash; his hands, his feet, coordination of his body. 
It was something that he appreciated.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut it was also something that helped guide him on the path he wanted to follow for his research.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EEarning his wings\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EHis studies lie in a number of areas, namely robotics, artificial intelligence (AI), machine learning (ML), human factors engineering, human-robot interaction, planning and scheduling, queuing theory, real-time systems, and operations research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA lot of that was borne out of a specific experience he had during a flight lesson.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was flying the aircraft, and my instructor told me to plot a diversion to another airport because we were going to pretend that the airport I was headed to had some weather that would prevent me from landing,\u0026rdquo; he explained. \u0026ldquo;That\u0026rsquo;s a lot of work. You have to fly the plane, you have to get out a map and do all the segments you\u0026rsquo;ll take, measure the angles, measure distances, calculate fuel burn and figure out how you\u0026rsquo;ll change your flight plan.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESo, when given the directive to use autopilot while doing the calculations, Gombolay input his altitude and heading and stuck his head into his calculations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt was a mistake.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;She told me I broke the first rule,\u0026rdquo; he said. \u0026ldquo;You have to aviate, navigate, then communicate. I was so desperate to handle the workload that I turned over the first duty to an autopilot and didn\u0026rsquo;t really know how it worked.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If there was Mt. Everest in front of us, it wasn\u0026rsquo;t going to steer away. If there was another plane, it wasn\u0026rsquo;t going to steer away. 
If it was low on fuel, it wasn\u0026rsquo;t going to tell me to turn back.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt was a realization of how quickly and easily humans are willing to trust automated systems that may not be entirely prepared to handle that workload. Your willingness to be vulnerable is a huge choice, he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I can trust something, but that doesn\u0026rsquo;t make it trustworthy,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis realization helped guide him to human factors engineering during his graduate studies at the Massachusetts Institute of Technology, where he earned his Ph.D. in Autonomous Systems in 2017.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EMaking robots personal\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ESince joining Georgia Tech in the beginning of the fall semester, Gombolay has been growing his lab and beginning a handful of new projects that build on some of his past research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne recently-funded project done in collaboration with MIT focuses on how humans make decisions as part of a team \u0026ndash; the strategies, the styles, etc. Using a video game as an example, he explained that some individuals may prefer an aggressive approach versus a defensive.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These different stylistic things emerge naturally in how humans solve problems,\u0026rdquo; Gombolay said. \u0026ldquo;But that diversity isn\u0026rsquo;t very pleasant for machine learning algorithms because the average of two different people is not a third good person. It\u0026rsquo;s just an ugly mess.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHis lab is looking at ways to synthesize policies that can leverage all of the data about styles and strategies and tailor to individual differences. Health care is an example, Gombolay said. 
Consider a physical therapist who wants to teach a robot how to take care of a patient at home. Each therapist has his or her own unique style of stretching, massaging, and strength-training their patients, and each patient has a unique malady, response profile, or anatomy.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Most algorithms today that would be put on a robot to help it learn how to care for a patient would either apply a one-size-fits-all model, which can result in a blend that helps nobody, or train from scratch for each new patient-therapist combination, which would take way too long to be a practical solution.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We want to leverage every robot\u0026rsquo;s collective experience while still being able to tailor the behavior to each individual.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther areas of focus for his lab include manufacturing, health care, and new areas in reinforcement learning. He is currently funding three students, and his lab includes one research scientist.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EA few hobbies\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EWhen he\u0026rsquo;s not doing research \u0026ndash; or maybe flying a plane \u0026ndash; Gombolay is usually taking part in one of his other hobbies, like tennis or building models of Star Wars or Star Trek ships with his LEGOs and MegaBloks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe\u0026rsquo;s also a musician, who started on the violin and piano before adding an alto saxophone to the mix and later a guitar. 
The guitar is his instrument of choice nowadays, and he\u0026rsquo;s spent a lot of time using it in bands in college \u0026ndash; church, a cover band, talent shows and the like.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe\u0026rsquo;s found a couple of people on campus, like fellow IC Professor \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/seth-hutchinson\u0022\u003ESeth Hutchinson\u003C\/a\u003E\u003C\/strong\u003E, who don\u0026rsquo;t mind getting together for a jam session now and again.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs the Star Wars and Star Trek models might suggest, Gombolay has always been fascinated by space and space travel. It\u0026rsquo;s influenced his path in research and, who knows, in another life he might have been an astronaut.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Maybe,\u0026rdquo; he said when asked whether that was ever an ambition. \u0026ldquo;Who knows? Maybe one day I\u0026rsquo;ll be on a rocket to Mars. I\u0026rsquo;ll take my wife and the kid with me. 
She\u0026rsquo;s a physician, so she\u0026rsquo;ll take care of us.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Inspired by an experience in a flying lesson, Georgia Tech\u0027s Matthew Gombolay is researching how to make robotics more personal and trustworthy."}],"uid":"33939","created_gmt":"2018-11-30 21:43:11","changed_gmt":"2018-11-30 21:43:11","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-11-30T00:00:00-05:00","iso_date":"2018-11-30T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"614858":{"id":"614858","type":"image","title":"Matthew Gombolay main","body":null,"created":"1543613417","gmt_created":"2018-11-30 21:30:17","changed":"1543613417","gmt_changed":"2018-11-30 21:30:17","alt":"Matthew Gombolay","file":{"fid":"234132","name":"Main image.jpeg","image_path":"\/sites\/default\/files\/images\/Main%20image.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Main%20image.jpeg","mime":"image\/jpeg","size":152003,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Main%20image.jpeg?itok=ia0LZupe"}}},"media_ids":["614858"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/content\/artificial-intelligence-machine-learning","title":"Artificial Intelligence and Machine Learning"},{"url":"https:\/\/www.ic.gatech.edu\/content\/robotics-computational-perception","title":"Robotics and Computational Perception"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"175375","name":"matthew gombolay"},{"id":"2483","name":"interactive computing"},{"id":"166848","name":"School of Interactive 
Computing"},{"id":"8494","name":"HCI"},{"id":"667","name":"robotics"},{"id":"109","name":"Georgia Tech"},{"id":"78271","name":"IRIM"},{"id":"78811","name":"Institute for Robotics and Intelligent Machines"},{"id":"4137","name":"aeronautics"},{"id":"78851","name":"HRI"},{"id":"78841","name":"human-robot interaction"},{"id":"654","name":"College of Computing"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"614427":{"#nid":"614427","#data":{"type":"news","title":"Georgia Tech Will Show Off Latest Research at AI\u2019s \u2018Hottest\u2019 Conference","body":[{"value":"\u003Cp\u003EIt is uncommon to hear about a machine learning and artificial intelligence (AI) conference selling out like a Taylor Swift concert, but the \u003Ca href=\u0022https:\/\/nips.cc\/Conferences\/2018\u0022\u003ENeural Information Processing Systems (NeurIPS)\u003C\/a\u003E conference did just that.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference sold out in \u003Ca href=\u0022https:\/\/medium.com\/syncedreview\/nips-tickets-sell-out-in-less-than-12-minutes-e3aab37ab36a\u0022\u003Eless than 12 minutes\u003C\/a\u003E for its Dec. 2-8 gathering in Montreal, Quebec. At one of the biggest AI conferences in the world, tech companies like Google, Microsoft, and Facebook come to find new talent, while renowned researchers present their latest work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA large number of Georgia Tech faculty and students will be among the throngs of attendees. 
With 26 papers by more than 23 Georgia Tech authors and several workshops to participate in, the Yellow Jackets are one of the leading contributors to the conference program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EByron Boots\u003C\/strong\u003E and \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E, assistant professors in the Machine Learning Center at Georgia Tech (ML@GT) and the \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing,\u003C\/a\u003E are serving as area chairs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We are thrilled to be a top performing university at a conference of NeurIPS\u0026rsquo; caliber. Our faculty and students continue to push boundaries and revolutionize our field, and it shows at events like this,\u0026rdquo; said \u003Cstrong\u003EIrfan Essa,\u003C\/strong\u003E \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EML@GT\u003C\/a\u003E director.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs NeurIPS has increased in popularity since its first meeting in 1987, the conference receives thousands of submissions each year with a record high of 3,240 submissions in 2017. 
Over the years, the content has shifted from examining biological and artificial neural networks to focusing more on AI, statistics, and machine learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBelow is a list of Georgia Tech\u0026rsquo;s spotlight presentations, posters, and workshops being featured at NeurIPS next month.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESpotlights\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1801.03423.pdf\u0022\u003EA Smoothed Analysis of the Greedy Algorithm for the Linear Contextual Bandit Problem\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ESampath Kannan, Jamie Morgenstern, Aaron Roth, Bo Waggoner, and Steven Wu\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1811.05016.pdf\u0022\u003ELearning Temporal Point Processes via Reinforcement Learning\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EShuang Li, Shuai Xiao, Shixiang Zhu, Nan Du, Yao Xie, Le Song\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1805.10611.pdf\u0022\u003ERobust Hypothesis Testing Using Wasserstein Uncertainty Sets\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ERui Gao, Liyan Xie, Yao Xie, Huan Xu\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1807.07531.pdf\u0022\u003ELimited Memory Kelley\u0026rsquo;s Method Converges for Composite Convex and Submodular Objectives\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ESong Zhou, Swati Gupta, and Madeleine Udell\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.seas.upenn.edu\/~xsi\/data\/nips18.pdf\u0022\u003ELearning Loop Invariants for Program 
Verification\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EXujie Si, Hanjun Dai, Mukund Raghothaman, Mayur Naik, and Le Song\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1807.10455.pdf\u0022\u003EAcceleration through Optimistic No-Regret Dynamics\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EJun-Kun Wang and Jacob Abernethy\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EPosters\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1805.10755.pdf\u0022\u003EDual Policy Iteration\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EWen Sun, Geoff Gordon, Byron Boots, and Drew Bagnell\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1810.13400\u0022\u003EDifferentiable MPC for End-to-End Planning and Control\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EBrandon Amos, Jake Sacks, Ivan Dario Jimenez, Byron Boots, and Zico Kolter\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1809.08820.pdf\u0022\u003EOrthogonally Decoupled Variational Gaussian Processes\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EHugh Samilbeni, Ching-An Cheng, Byron Boots, and Marc Deisenroth\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1810.12369.pdf\u0022\u003ELearning and Inference in Hilbert Space with Quantum Graphical Models\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ESid Srinivasan, Carlton Downey, and Byron Boots\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1811.00103\u0022\u003EThe Price of Fair PCA: One Extra 
Dimension\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ESamira Samadi, Uthaipon Tantipongpipat, Mohit Singh, Jamie Morgenstern, and Santosh Vempala\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1801.03423.pdf\u0022\u003EA Smoothed Analysis of the Greedy Algorithm for the Linear Contextual Bandit Problem\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ESampath Kannan, Jamie Morgenstern, Aaron Roth, Bo Waggoner, and Steven Wu\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1811.05016.pdf\u0022\u003ELearning Temporal Point Processes via Reinforcement Learning\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EShuang Li, Shuai Xiao, Shixiang Zhu, Nan Du, Yao Xie, Le Song\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1805.10611.pdf\u0022\u003ERobust Hypothesis Testing Using Wasserstein Uncertainty Sets\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ERUI GAO, Liyan Xie, Yao Xie, Huan Xu\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1807.07531.pdf\u0022\u003ELimited Memory Kelley\u0026rsquo;s Method Converges for Composite Convex and Submodular Objectives\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ESong Zhou, Swati Gupta, and Madeleine Udell\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1810.11896.pdf\u0022\u003ESmoothed Analysis of Discrete Tensor Decomposition and Assemblies of Neurons\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ENima Anari, Amin Saberi, Wolfgang Maass, Robert Legenstein, Christos Papadimitriou, and Santosh Vempala\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca 
href=\u0022https:\/\/arxiv.org\/abs\/1803.06416\u0022\u003EDifferential Privacy for Growing Databases\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ERachel Cummings, Sara Krehbiel, Kevin Lai, and Uthaipon (Tao) Tantipongpipat.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1808.10056.pdf\u0022\u003EDifferentially Private Change-Point Detection\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ERachel Cummings, Sara Krehbiel, Yajun Mei, Rui Tuo, and Wanrong Zhang\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.cs.rice.edu\/~as143\/Papers\/topkapi.pdf\u0022\u003ETopkapi: Parallel and Fast Sketches for Finding Top-K Frequent Elements\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EAnkush Mandal, He Jiang, Anshumali Shrivastava, and Vivek Sarkar\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1810.03649.pdf\u0022\u003EOvercoming Language Priors in Visual Question Answering with Adversarial Regularization\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ESainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1806.06004\u0022\u003EPartially Supervised Image Captioning\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EPeter Anderson, Stephen Gould, and Mark Johnson\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1805.09298.pdf\u0022\u003ELearning towards Minimum Hyperspherical Energy\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EWeiyang Liu, Rongmei Lin, Zhen Liu, Lixin Liu, Zhiding Yu, Bo Dai, and Le Song\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca 
href=\u0022https:\/\/nips.cc\/Conferences\/2018\/Schedule?showEvent=11921\u0022\u003ECoupled Variational Bayes via Optimization Embedding\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EBo Dai, Hanjun Dai, Niao He, Weiyang Liu, Zhen Liu, Jianshu Chen, Lin Xiao, and Le Song\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.seas.upenn.edu\/~xsi\/data\/nips18.pdf\u0022\u003ELearning Loop Invariants for Program Verification\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EXujie Si, Hanjun Dai, Mukund Raghothaman, Mayur Naik, and Le Song\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/papers.nips.cc\/paper\/7667-cooperative-neural-networks-conn-exploiting-prior-independence-structure-for-improved-classification.pdf\u0022\u003ECooperative Neural Networks (CoNN): Exploiting Prior Independence Structure for Improved Classification\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EHarsh Shrivastava, Eugene Bart, Bob Price, Hanjun Dai, Bo Dai, Srinivas Aluru\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1803.02312.pdf\u0022\u003EDimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EMinshuo Chen, Lin Yang, Mengdi Wang, and Tuo Zhao\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1806.01660.pdf\u0022\u003ETowards Understanding Acceleration Tradeoff between Momentum and Asynchrony in Distributed Nonconvex Stochastic Optimization\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ETianyi Liu, Shiyang Li, Jianping Shi, Enlu Zhou, and Tuo Zhao\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1612.02803.pdf\u0022\u003EThe Physical Systems behind Optimization 
Algorithms\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ELin Yang, Raman Arora, Vladimir Braverman, and Tuo Zhao\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1810.11098.pdf\u0022\u003EProvable Gaussian Embedding with One Observation\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EMing Yu, Zhuoran Yang, Tuo Zhao, Mladen Kolar, and Zhaoran Wang\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1805.09298.pdf\u0022\u003ELearning Towards Minimum Hyperspherical Energy\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EWeiyang Liu, Rongmei Lin, Zhen Liu, Lixin Liu, Zhiding Yu, Bo Dai, and Le Song\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1807.10455.pdf\u0022\u003EAcceleration through Optimistic No-Regret Dynamics\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EJun-Kun Wang and Jacob Abernethy\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1810.09593\u0022\u003EMiME: Multilevel Medical Embedding of Electronic Health Records for Predictive Healthcare\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EEdward Choi, Cao Xiao, Walter F. 
Stewart, and Jimeng Sun\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWorkshops\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EWorkshop on AI in Finance\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ETucker Balch, School of Interactive Computing Professor and Associate Chair, is an invited speaker.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/nips2018vigil.github.io\/\u0022\u003EVisually-Grounded Interaction and Language (ViGIL)\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech organizers include Erik Wijmans, Samyak Datta, Stefan Lee, Peter Anderson, Dhruv Batra, and Devi Parikh.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/sites.google.com\/view\/nips18-ilr\u0022\u003EImitation Learning and its Challenges in Robotics\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EInteractive Computing Ph.D. student Mustafa Mukadam is organizing the workshop.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/blackinai.github.io\/\u0022\u003E2nd Black in AI Workshop\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EApplication of the Hilbert-Schmidt Independence Criterion to Lexical Geographic Variation in Lyon, France\u0026nbsp;by Taha Merghani\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.wordplay2018.com\/\u0022\u003EWordplay: Reinforcement and Language Learning in Text-based Games\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EPlaying Text-Adventure Games with Graph-Based Deep Reinforcement Learning\u0026nbsp;\u003Cbr \/\u003E\r\nPrithviraj Ammanabrolu and Mark O. 
Riedl\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech will present 26 papers at NeurIPS, a premier AI conference happening December 2-8 in Montreal, Quebec."}],"uid":"34773","created_gmt":"2018-11-19 21:10:44","changed_gmt":"2018-11-30 21:02:02","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-11-19T00:00:00-05:00","iso_date":"2018-11-19T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"614435":{"id":"614435","type":"image","title":"NeurIPS 2018 will be held in Montreal, Quebec and is one of the premier AI conferences around the world. Photo Credit: Tourism Quebec","body":null,"created":"1542663792","gmt_created":"2018-11-19 21:43:12","changed":"1542810166","gmt_changed":"2018-11-21 14:22:46","alt":"","file":{"fid":"233926","name":"tourism-montreal-greater-montreal-convention-and-tourism-bureau-gmctb-photo.jpg","image_path":"\/sites\/default\/files\/images\/tourism-montreal-greater-montreal-convention-and-tourism-bureau-gmctb-photo.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/tourism-montreal-greater-montreal-convention-and-tourism-bureau-gmctb-photo.jpg","mime":"image\/jpeg","size":86232,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/tourism-montreal-greater-montreal-convention-and-tourism-bureau-gmctb-photo.jpg?itok=u8WLLEZR"}}},"media_ids":["614435"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"614766":{"#nid":"614766","#data":{"type":"news","title":"Georgia Tech Researchers Helping Develop Game to Improve STEM Learning in Chronically-Ill Children","body":[{"value":"\u003Cp\u003EGeorgia Tech researchers are partnering with a Georgia-based game developer on a $1.5 million \u003Ca href=\u0022https:\/\/www.nih.gov\/\u0022\u003ENational Institutes of Health\u003C\/a\u003E (NIH) Small Business Innovation Research grant to help chronically-ill children maintain their educational development.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith an emphasis on science, technology, engineering, and math (STEM) subjects, researchers from the Schools of \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003EInteractive Computing\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/bme.gatech.edu\/\u0022\u003EBiomedical Engineering\u003C\/a\u003E are teaming with \u003Ca href=\u0022https:\/\/www.th.ru.st\/\u0022\u003EThrust Interactive, Inc.\u003C\/a\u003E, to create digital games that can help these kids that tend to miss a lot of school due to their illnesses.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAssociate Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/betsy-disalvo\u0022\u003E\u003Cstrong\u003EBetsy DiSalvo\u003C\/strong\u003E\u003C\/a\u003E (IC) and Associate Professor \u003Cstrong\u003E\u003Ca href=\u0022http:\/\/www.ien.gatech.edu\/people\/faculty\/wilbur-lam\u0022\u003EWilbur Lam\u003C\/a\u003E\u003C\/strong\u003E (BME) are leading the project, which will span two years under the current terms of the grant. 
Their goal is to take advantage of the time chronically-ill children spend in waiting rooms, having transfusions, or otherwise outside of the classroom.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe digital games are based on physical tabletop games created by members of Lam\u0026rsquo;s lab. Led by Dr. \u003Cstrong\u003EElaissa Hardy\u003C\/strong\u003E\u0026nbsp;(Emory), a team of BME undergraduate students originally created the tabletop games to help kids in the hospital with sickle cell disease engage with STEM subjects.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELam\u0026rsquo;s lab has worked with DiSalvo and Thrust for the past two years to pilot test digital versions of these games. The new NIH grant will be used to develop findings from the pilot testing so the research team can better understand how to create a scalable model that can be used in hospitals across the country.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother challenge the team wants to address is the difficulty children face in discussing their diseases with others. Common illnesses such as diabetes and asthma, as well as those less common like sickle cell and cystic fibrosis, can be challenging topics for children, particularly in their early teen years.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The middle schoolers we interviewed told us it was awkward to talk about their disease,\u0026rdquo; DiSalvo said. \u0026ldquo;Sometimes, they got bullied or had issues finding ways to discuss it with their peers. Previous research has shown that if you can have kids play a game around their disease, they\u0026rsquo;ll engage about it more in conversation with peers and families.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It can diminish the stigma, and it also positions them as experts. 
When children feel like they have expertise, they are usually willing to dive deeper and learn more to maintain their expert position.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA better understanding of their disease at this age is critical for young people beginning to take charge of managing their own care, according to the researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These adolescents are beginning to transition into adulthood, so managing their illness is beginning to become their responsibility,\u0026rdquo; DiSalvo said. \u0026ldquo;Those transitions are difficult because, in doctor visits, parents tend to dominate the conversation while kids sit in the background, not really asking questions or engaging. It\u0026rsquo;s important to change that dynamic at this age.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers are investigating three different approaches to the digital games to determine the best learning outcomes. They will test content using:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EPictures and words\u003C\/li\u003E\r\n\t\u003Cli\u003EPictures and audio\u003C\/li\u003E\r\n\t\u003Cli\u003EPictures, words, and audio\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EFollow-up comprehension tests will help determine which approach leads to the best results. Those tests will take up the first year of the project, with the second year focused on testing the application in live hospital settings.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We want it to be so fun and engaging that they don\u0026rsquo;t think of it as an educational game,\u0026rdquo; said Sarah Boyd, a Thrust Interactive team member who will work on design.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s fun, and they\u0026rsquo;re learning. There are existing approaches relating to education of disease, but they aren\u0026rsquo;t as engaging. 
We want a fun and engaging game first, but then they\u0026rsquo;re going to be learning about their health as they engage.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrust Interactive has enlisted help from \u003Cstrong\u003EPaul Jenkins\u003C\/strong\u003E, a comic book writer and video game creator who has been involved with \u003Cem\u003ETeenage Mutant Ninja Turtles\u003C\/em\u003E, a number of Marvel Comics titles, and video games like \u003Cem\u003EGod of War\u003C\/em\u003E and \u003Cem\u003EThe Darkness\u003C\/em\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"With an emphasis on STEM subjects, researchers from the Schools of Interactive Computing and Biomedical Engineering are teaming with Thrust Interactive, Inc., to create digital games that can help these kids learn."}],"uid":"33939","created_gmt":"2018-11-29 16:46:54","changed_gmt":"2018-11-29 16:46:54","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-11-29T00:00:00-05:00","iso_date":"2018-11-29T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"614765":{"id":"614765","type":"image","title":"Video game on tablet STOCK","body":null,"created":"1543509687","gmt_created":"2018-11-29 16:41:27","changed":"1543509687","gmt_changed":"2018-11-29 16:41:27","alt":"Mom and daughter look at a tablet together sitting on the 
couch.","file":{"fid":"234068","name":"pexels-photo-1310121.jpeg","image_path":"\/sites\/default\/files\/images\/pexels-photo-1310121.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/pexels-photo-1310121.jpeg","mime":"image\/jpeg","size":91294,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/pexels-photo-1310121.jpeg?itok=OeAI6cgO"}}},"media_ids":["614765"],"related_links":[{"url":"https:\/\/spark.adobe.com\/page\/a7Lw2tHg90iZz\/","title":"Computer Science Education Week at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"166848","name":"School of Interactive Computing"},{"id":"176756","name":"School of Biomedical Engineering"},{"id":"11961","name":"betsy disalvo"},{"id":"14681","name":"Wilbur Lam"},{"id":"179817","name":"STEM learning"},{"id":"177206","name":"CSEd"},{"id":"1051","name":"Computer Science"},{"id":"11355","name":"computer science education"},{"id":"179818","name":"CSed week"},{"id":"2449","name":"video games"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"614367":{"#nid":"614367","#data":{"type":"news","title":"Designing a Better Future: IC Ph.D. 
Student Ari Schlesinger Keeps Tech Focus on Equity, Inclusion","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EAri Schlesinger\u003C\/strong\u003E was spending time at \u003Ca href=\u0022https:\/\/www.microsoft.com\/en-us\/research\/\u0022\u003EMicrosoft Research\u003C\/a\u003E (MSR) in Cambridge, United Kingdom, shortly after a Microsoft AI chatbot made headlines for devolving into a racist, sexist mess within 24 hours of launch in 2016.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter the incident, an influx of think pieces about the chatbot, named \u0026ldquo;Tay,\u0026rdquo; attempted to explain that racism was a design issue. If designed better, they contended, chatbots wouldn\u0026rsquo;t encounter these problems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchlesinger and her MSR collaborators \u003Ca href=\u0022https:\/\/static1.squarespace.com\/static\/5a8b405a18b27d5478196dca\/t\/5a8b690d24a694d7072d25a1\/1519085853799\/chi18-schlesinger-LetsTalkAboutRace.pdf\u0022\u003Ewrote a piece in response\u003C\/a\u003E, contending that it\u0026rsquo;s not just a design flaw, but a problem with how tech firms and, more broadly, designers think about these issues in general.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We wanted to point out research opportunities to figure out ways that we can do better at considering issues like race and identity when designing systems to avoid creating something like a chatbot that reproduces the types of problems that Tay produced,\u0026rdquo; she said. \u0026ldquo;It\u0026rsquo;s important that we really identify the central issue that causes these problems. It\u0026rsquo;s hard to address a problem if you can\u0026rsquo;t name it.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese questions are central to the research she is conducting at Georgia Tech, where she is now a Ph.D. 
student advised by Professors \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/beki-grinter\u0022\u003EBeki Grinter\u003C\/a\u003E\u003C\/strong\u003E and \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/keith-edwards\u0022\u003EKeith Edwards\u003C\/a\u003E\u003C\/strong\u003E in the School of Interactive Computing. Recently, she was a finalist for the \u003Ca href=\u0022https:\/\/gvu.gatech.edu\/gvu-graduate-student-awards-program-2018\u0022\u003EFoley Scholarship\u003C\/a\u003E, where she was recognized for her research into ways enterprises can operationalize strategies to support software development with fairness in mind.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EUnderstand the social impact\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EIt wasn\u0026rsquo;t a straightforward path to studying equity, inclusion, and fairness in computer science (CS), however.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2012, during Schlesinger\u0026rsquo;s second year pursuing a CS major through Harvey Mudd at Pitzer College, she had a realization. CS degrees, she noticed, were not focusing on the vast social impacts that they were producing. She began to worry that an awareness of this social change might be missing in many CS educational environments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;About a year and a half into my degree, I was just like \u0026ndash; these machines, these programs, they\u0026rsquo;re ubiquitous,\u0026rdquo; she said. \u0026ldquo;They\u0026rsquo;re in everything. They\u0026rsquo;re changing the world, and we\u0026rsquo;re not talking about that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt was this realization that led her to course correct during her undergraduate degree. 
She redefined her major at Pitzer College, adjusted the trajectory of her career to pursue research full-time, and homed in on an area she says is vital to introducing mechanisms in enterprise, education, and beyond that protect against bias and exclusion.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the benefits of being at Pitzer College for her undergraduate degree was that Schlesinger was given the opportunity to define her own major. Her interests at the intersection of computer science, humanities, and social sciences led to a degree she designed, called \u0026ldquo;technology and social change.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe social impacts at the heart of technology and CS are central to her interests, and she found that CS education was one of the few spaces she had experienced in computing that was really thinking about social impact. Upon graduation, she took a position at Harvey Mudd College running a \u003Ca href=\u0022https:\/\/www.nsf.gov\/\u0022\u003ENational Science Foundation\u003C\/a\u003E grant in CS education called \u0026ldquo;\u003Ca href=\u0022http:\/\/csteachingtips.org\/\u0022\u003ECS Teaching Tips\u003C\/a\u003E.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile there was a semblance of an ethics requirement in most CS degrees, whether or not it was a priority was unclear, and that was what concerned Schlesinger.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Who is teaching it? How is it defined? What\u0026rsquo;s being covered? Often what you learn about ethics and these social concerns in CS depends on who you know and what you\u0026rsquo;re exposed to,\u0026rdquo; she explained. 
\u0026ldquo;Sometimes in academia, I think we have these siloing problems, where one discipline does this and another does that and it\u0026rsquo;s very hard to move between the two.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s important, she said, that CS departments have someone within them who brings all of these disparate fields together, introducing people to literature and ideas they may not otherwise see in their respective disciplines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt was this focus that drew her to Georgia Tech, specifically as it pertained to advisors Grinter and Edwards. She was looking for graduate advisors who could get excited about this idea of investigating and implementing equity and inclusion within things like programming languages or artificial intelligence. Thinking more broadly, she knew that this wasn\u0026rsquo;t just a problem within CS education, but within technology as a whole.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The pot is full,\u0026rdquo; she said. \u0026ldquo;Those questions of who gets an advantage or not when we are designing software or when we build computing systems. Technical systems have the opportunity to minimize expansion of harm, but they also have the opportunity to further discriminate. What can we do to stop hurting each other?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003E\u0026lsquo;The next step depends on you\u0026rsquo;\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EHer future work at Georgia Tech will follow a similar path, examining some of these issues of equity and bias in online communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are rampant issues of harassment and discrimination in more traditional online communities. 
More specifically, there are issues of diversity and inclusion within open-source communities, where programmers interact and work on a tech product that might be widely adopted and will ultimately reflect some piece of those interactions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Online communities seem to be places where many people of color, women, people with various marginalized identities are harassed,\u0026rdquo; Schlesinger said. \u0026ldquo;That happens in the tech workplace, and it happens in these open-source spaces. Part of our work will look at this distilled problem space and ask questions about what is the connection between inclusion, discrimination and online communities. Are there ways these spaces are designed that inhibit good behaviors or promote bad?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOf course, her next step in this research is only one approach, and she said that it\u0026rsquo;s important to note there are many paths to pursue. Asked what the next step in this space should be, Schlesinger turned it around.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The answer is that there is a clear step for everybody and the world would be a better place if we took that next step, but the next step depends on you,\u0026rdquo; she said. \u0026ldquo;Who you are, what you\u0026rsquo;re doing, where you work, what you think about. There is something to do, but what that is depends on you.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Questions of who is advantaged when designing software are central to tech development. 
Ari Schlesinger is shining a spotlight on those issues."}],"uid":"33939","created_gmt":"2018-11-16 23:00:36","changed_gmt":"2018-11-16 23:00:36","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-11-16T00:00:00-05:00","iso_date":"2018-11-16T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"614366":{"id":"614366","type":"image","title":"Ari Schlesinger","body":null,"created":"1542408354","gmt_created":"2018-11-16 22:45:54","changed":"1542408354","gmt_changed":"2018-11-16 22:45:54","alt":"Ari Schlesinger","file":{"fid":"233894","name":"Ari Schlesinger.JPG","image_path":"\/sites\/default\/files\/images\/Ari%20Schlesinger.JPG","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Ari%20Schlesinger.JPG","mime":"image\/jpeg","size":217701,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Ari%20Schlesinger.JPG?itok=FEW2Zg9s"}}},"media_ids":["614366"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"170073","name":"Ari Schlesinger"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"179742","name":"technology and social change"},{"id":"179743","name":"equity and computing"},{"id":"306","name":"equity"},{"id":"208","name":"computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"613426":{"#nid":"613426","#data":{"type":"news","title":"IC Professors John Stasko and Gregory Abowd Earn Test of Time Awards","body":[{"value":"\u003Cp\u003EA pair of professors in the School of Interactive Computing were recognized with test of time awards at two conferences this month, demonstrating the lasting impact of their research in their respective fields.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EProfessor \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E\u0026rsquo;s work with former Ph.D. student \u003Cstrong\u003EShwetak Patel\u003C\/strong\u003E and postdoctoral student and research scientist \u003Cstrong\u003EMatt Reynolds\u003C\/strong\u003E was recognized at UbiComp 2018 in Singapore earlier this month. The paper was presented at the Pervasive 2008 Conference and was titled \u003Cem\u003E\u003Ca href=\u0022https:\/\/pdfs.semanticscholar.org\/9132\/a2f02b3a0a4e928285975ec9789b1210c63c.pdf\u0022\u003EDetecting Human Movement by Differential Air Pressure Sensing in HVAC System Ductwork: An Exploration in Infrastructure Mediated Sensing\u003C\/a\u003E\u003C\/em\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper presented an approach to detect movement and room transition throughout an entire house through sensing at only one point in the home. At the time, it was a new class of human activity monitoring they called \u0026ldquo;infrastructure mediated sensing,\u0026rdquo; and it detected things like disruptions in airflow caused by human movement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis approach presents a cost-effective alternative to installing motion sensors throughout an entire home.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis is the second straight such 10-year recognition Abowd, Patel and Reynolds have received at UbiComp. 
Their paper \u003Cem\u003E\u003Ca href=\u0022https:\/\/homes.cs.washington.edu\/~shwetak\/papers\/ubicomp2007_flick.pdf\u0022\u003EAt the Flick of a Switch: Detecting and Classifying Unique Electrical Events on the Residential Power Line\u003C\/a\u003E\u003C\/em\u003E, written with \u003Cstrong\u003EJulie Kientz\u003C\/strong\u003E and \u003Cstrong\u003ETom Robertson\u003C\/strong\u003E, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/596144\/ic-faculty-alumni-awarded-10-year-impact-award-ubicomp-2017\u0022\u003Ewas recognized at UbiComp 2017\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EProfessor \u003Cstrong\u003EJohn Stasko\u003C\/strong\u003E received a similar designation this year at IEEE VIS 2018 for research he performed while on sabbatical at Microsoft Research in Fall 2007. Stasko, along with other members of his team, proposed two alternative trend visualizations that use static depictions of trends: one which shows traces of all trends overlaid simultaneously in one display and a second that uses a small multiples display to show the trend traces side-by-side.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper, titled \u003Cem\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~john.stasko\/papers\/infovis08-anim.pdf\u0022\u003EEffectiveness of Animation in Trend Visualization\u003C\/a\u003E\u003C\/em\u003E, evaluates the visualizations and indicates that trend animation is challenging to use and, despite being engaging for participants, it leads to errors and is least effective for analysis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper was presented at InfoVis in 2008. Like Abowd, Stasko is receiving a 10-year legacy award at IEEE VIS for the second straight year. 
His work with co-authors \u003Cstrong\u003ECarsten G\u0026ouml;rg\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003EZhicheng Liu\u003C\/strong\u003E, and\u0026nbsp;\u003Cstrong\u003EKanupriya Singhal\u003C\/strong\u003E, titled \u003Cem\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~stasko\/papers\/vast07-jigsaw.pdf\u0022\u003EJigsaw: Supporting Investigative Analysis through Interactive Visualization\u003C\/a\u003E\u003C\/em\u003E, was \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/596952\/ic-researchers-earn-test-time-award-vast-2007-paper\u0022\u003Erecognized last year\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Stasko received a test of time designation for a paper at InfoVis 2008, and Abowd one for a paper at UbiComp 2008."}],"uid":"33939","created_gmt":"2018-10-29 17:13:15","changed_gmt":"2018-10-29 17:13:15","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-10-29T00:00:00-04:00","iso_date":"2018-10-29T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"613425":{"id":"613425","type":"image","title":"Test of Time","body":null,"created":"1540833148","gmt_created":"2018-10-29 17:12:28","changed":"1540833148","gmt_changed":"2018-10-29 17:12:28","alt":"Clock","file":{"fid":"233534","name":"time-371226_960_720.jpg","image_path":"\/sites\/default\/files\/images\/time-371226_960_720.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/time-371226_960_720.jpg","mime":"image\/jpeg","size":70728,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/time-371226_960_720.jpg?itok=tZkephGS"}}},"media_ids":["613425"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[{"id":"11632","name":"john stasko"},{"id":"11002","name":"Gregory Abowd"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"7730","name":"infovis"},{"id":"4923","name":"Ubicomp"},{"id":"170453","name":"Test of Time Award"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"613299":{"#nid":"613299","#data":{"type":"news","title":"Ant\u00f3n Named as Technologist Advisor to U.S. National Security Court","body":[{"value":"\u003Cp\u003EAnnie I. Ant\u0026oacute;n, a professor in Georgia Tech\u0026rsquo;s School of Interactive Computing, has been named a technologist advisor to the U.S. Foreign Intelligence Surveillance Court (FISC).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStarting this month, Ant\u0026oacute;n will assist the court in a part-time role. She is the only academic among the three technologists.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe FISC may receive assistance from an \u0026ldquo;amicus curiae\u0026rdquo; (friend of the court), who has expertise in privacy and civil liberties, intelligence collection, communications technology or other relevant areas.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am honored to be asked to assist with foreign intelligence cases that involve national security, cybersecurity and privacy,\u0026rdquo; Ant\u0026oacute;n said. 
\u0026ldquo;Technologists play a vital role in helping the courts understand how complex systems operate in practice, in order to assure that systems comply with law.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnt\u0026oacute;n, a Georgia Tech graduate, returned to serve as chair of the School of Interactive Computing from 2012 to 2017.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2016, she was one of 12 members of the President\u0026rsquo;s Commission on Enhancing National Cybersecurity.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":" Starting this month, Annie Ant\u00f3n will assist the U.S. Foreign Intelligence Surveillance Court in a part-time role. She is the only academic among the three technologists. "}],"uid":"33939","created_gmt":"2018-10-25 17:17:48","changed_gmt":"2018-10-25 17:17:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-10-25T00:00:00-04:00","iso_date":"2018-10-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"522611":{"id":"522611","type":"image","title":"Annie Ant\u00f3n photo","body":null,"created":"1460134800","gmt_created":"2016-04-08 17:00:00","changed":"1480708522","gmt_changed":"2016-12-02 19:55:22","alt":"","file":{"fid":"205377","name":"annie-anton1.jpg","image_path":"\/sites\/default\/files\/images\/annie-anton1_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/annie-anton1_0.jpg","mime":"image\/jpeg","size":2994372,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/annie-anton1_0.jpg?itok=LgQLUFdO"}}},"media_ids":["522611"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"27641","name":"annie anton"},{"id":"109","name":"Georgia 
Tech"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"179497","name":"u.s. foreign intelligence surveillance court"},{"id":"10231","name":"Washington D.C."}],"core_research_areas":[{"id":"145171","name":"Cybersecurity"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ELaura Diamond\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:laura.diamond@gatech.edu\u0022\u003Elaura.diamond@gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"613031":{"#nid":"613031","#data":{"type":"news","title":"Georgia Tech research shapes data literacy and usability at IEEE VIS 2018","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EDigital data is a growing\u0026nbsp;type of currency, offering\u0026nbsp;insights for transforming businesses and organizations, allowing\u0026nbsp;better decision making, and answering\u0026nbsp;questions people didn\u0026rsquo;t even know they had. Data transactions are as common as convenience store purchases, yet the costs of those transactions are very different.\u0026nbsp;\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInformation visualization researchers at Georgia Tech are developing ways people can better understand the world\u0026rsquo;s data and how to interpret its meaning through techniques that can surface key insights and make the data meaningful to users.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech faculty and graduate students will present their latest research in information visualization and visual analytics, including 14 papers, at the annual IEEE Visualization (\u003Ca href=\u0022http:\/\/ieeevis.org\/year\/2018\/welcome\u0022\u003EIEEE VIS\u003C\/a\u003E) Conference in Berlin, Germany, Oct. 
21-26.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Ca href=\u0022https:\/\/gvu.gatech.edu\/vis-2018\u0022\u003E\u003Cstrong\u003EExplore Research Highlights and Data Graphics\u003C\/strong\u003E\u003C\/a\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOf the 15 researchers, 11 are from the School of Interactive Computing and four represent the School of Computational Science and Engineering in the College of Computing. The faculty authors \u0026ndash;\u0026nbsp;\u003Cstrong\u003ERahul\u0026nbsp;Basole\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003EPolo Chau\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003EAlex Endert\u003C\/strong\u003E, and\u0026nbsp;\u003Cstrong\u003EJohn Stasko\u003C\/strong\u003E\u0026nbsp;\u0026ndash;\u0026nbsp;are members of the VIS Lab and GVU Center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing Professor John Stasko, along with collaborators from Microsoft Research, will receive a Test of Time award for their 2007 paper\u0026nbsp;\u003Ca href=\u0022https:\/\/ieeexplore.ieee.org\/document\/4658146\u0022\u003E\u003Cem\u003EEffectiveness of Animation in Trend Visualization\u003C\/em\u003E\u003C\/a\u003E. It is Stasko\u0026rsquo;s second straight year receiving such a designation at IEEE VIS. 
Read about\u0026nbsp;\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/news\/596952\/ic-researchers-earn-test-time-award-vast-2007-paper\u0022\u003Elast year\u0026#39;s award\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIEEE VIS is the largest conference on scientific visualization, information visualization, and visual analytics.\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech faculty and graduate students will present their latest research in information visualization and visual analytics, including 14 papers, at the annual IEEE Visualization (\u003Ca href=\u0022http:\/\/ieeevis.org\/year\/2018\/welcome\u0022\u003EIEEE VIS\u003C\/a\u003E) Conference in Berlin, Germany, Oct. 21-26.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech faculty and graduate students will present their latest research in information visualization and visual analytics, including 14 papers, at the annual IEEE Visualization (IEEE VIS) Conference in Berlin, Germany, Oct. 
21-26."}],"uid":"27592","created_gmt":"2018-10-19 19:18:37","changed_gmt":"2018-10-19 19:33:16","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-10-19T00:00:00-04:00","iso_date":"2018-10-19T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"613034":{"id":"613034","type":"image","title":"Georgia Tech Visualization Lab","body":null,"created":"1539976795","gmt_created":"2018-10-19 19:19:55","changed":"1539977395","gmt_changed":"2018-10-19 19:29:55","alt":"","file":{"fid":"233391","name":"vis_2018_header_image.jpg","image_path":"\/sites\/default\/files\/images\/vis_2018_header_image.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/vis_2018_header_image.jpg","mime":"image\/jpeg","size":600037,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/vis_2018_header_image.jpg?itok=tQ4xPtDh"}},"613033":{"id":"613033","type":"image","title":"Georgia Tech faculty at VIS 2018","body":null,"created":"1539976748","gmt_created":"2018-10-19 19:19:08","changed":"1539977421","gmt_changed":"2018-10-19 19:30:21","alt":"","file":{"fid":"233390","name":"vis 2018 social promo.png","image_path":"\/sites\/default\/files\/images\/vis%202018%20social%20promo.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/vis%202018%20social%20promo.png","mime":"image\/png","size":99228,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/vis%202018%20social%20promo.png?itok=jxJBYNrK"}}},"media_ids":["613034","613033"],"related_links":[{"url":"https:\/\/public.tableau.com\/views\/vis2018_faculty\/Dashboard2?:embed=y\u0026:display_count=yes\u0026:showVizHome=no#2","title":"Faculty at VIS 
2018"},{"url":"https:\/\/public.tableau.com\/views\/vis2018_papers\/Dashboard1?:embed=y\u0026:embed_code_version=3\u0026:loadOrderID=1\u0026:display_count=yes\u0026publish=yes\u0026:showVizHome=no","title":"Read Technical Papers"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nCommunications Manager, GVU Center\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"612675":{"#nid":"612675","#data":{"type":"news","title":"Constellations Center Commits to Teaching 200 Students Computer Science in 2018-19 Academic Year","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EThe Constellations Center for Equity in Computing\u003C\/strong\u003E at Georgia Tech was one of 294 organizations to \u003Ca href=\u0022http:\/\/summit.csforall.org\/searchCommitments\u0022\u003Emake a commitment\u003C\/a\u003E to improving the computer science education landscape at the 2018 \u003Ca href=\u0022http:\/\/summit.csforall.org\/home\u0022\u003ECSforALL Summit\u003C\/a\u003E, Oct. 8-11. 
The center committed to \u0026ldquo;instilling a sequence of high-level CS courses in secondary schools with low-income communities.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EConstellations fellows will teach the AP CS principles course in seven public high schools in Atlanta Public Schools (APS) during the 2018-19 academic year, educating nearly 200 high school students. The end result of the engagement will be a hybrid model of instructional and online learning providing a pathway to post-secondary computer science and STEM studies. These courses include AP CS Principles, Georgia Tech\u0026rsquo;s Introduction to Computing Using Python, and AP CS A.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our fellows began teaching AP CS in August and we have already seen so much growth from our students. Many of them have never been trusted with a computer, much less exposed to computer science, and it has been amazing to see them start to realize their potential in this subject and how far they can go in life,\u0026rdquo; said \u003Cstrong\u003ELien Diaz\u003C\/strong\u003E, Constellations Director of Educational Innovation and Leadership.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDiaz \u003Ca href=\u0022https:\/\/twitter.com\/GT_CCEC\/status\/1049753216391495680\u0022\u003Emoderated a panel\u003C\/a\u003E about engaging underrepresented youth in computer science at the summit.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommitment goals at the event ranged from increasing rigor and equity in computing to creating opportunities for youth, supporting local change, and growing the computer science education movement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe event included talks from guests such as \u0026ldquo;NCIS: New Orleans\u0026rdquo; actor, \u003Cstrong\u003EDaryl \u0026ldquo;Chill\u0026rdquo; Mitchell\u003C\/strong\u003E, who \u003Ca href=\u0022https:\/\/twitter.com\/GT_CCEC\/status\/1049655202754781184\u0022\u003Ereminded the attendees\u003C\/a\u003E that \u0026ldquo;there is 
nothing a kid can\u0026rsquo;t do if given the opportunity.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EReCAPTCHA creator and \u003Cstrong\u003EDuolingo\u003C\/strong\u003E founder, \u003Cstrong\u003ELuis von Ahn\u003C\/strong\u003E, \u003Ca href=\u0022https:\/\/twitter.com\/GT_CCEC\/status\/1049716915751530501\u0022\u003Einspired the crowd\u003C\/a\u003E with lessons learned from his time creating technology solutions, and \u003Cstrong\u003EDeon Gordon\u003C\/strong\u003E of \u003Cstrong\u003ETech Birmingham\u003C\/strong\u003E encouraged the audience to care not only about teaching children technical skills, but to \u003Ca href=\u0022https:\/\/twitter.com\/GT_CCEC\/status\/1049686852188409856\u0022\u003Ecare for them as a person\u003C\/a\u003E. The \u003Cstrong\u003EDetroit Arts and Sciences Academy\u003C\/strong\u003E chorus wowed attendees with their \u003Ca href=\u0022https:\/\/twitter.com\/GT_CCEC\/status\/1049646504745598976\u0022\u003Ecomputer science remix\u003C\/a\u003E from \u0026ldquo;Frozen.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe summit took place in Detroit, Mich. at Wayne State University. For an interactive visualization of the 227 commitments made, please visit \u003Ca href=\u0022http:\/\/summit.csforall.org\/visualizationView\u0022\u003Ehere.\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Constellations Center attends CSforALL Summit and pledges to teach 200 students computer science in Atlanta Public Schools during the 2018-19 academic year. 
"}],"uid":"34773","created_gmt":"2018-10-12 13:38:26","changed_gmt":"2018-10-12 16:20:57","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-10-12T00:00:00-04:00","iso_date":"2018-10-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"612682":{"id":"612682","type":"image","title":"Lien Diaz moderates a panel at the CSforALL Summit held in Detroit Oct. 8 - 11. ","body":null,"created":"1539355904","gmt_created":"2018-10-12 14:51:44","changed":"1539355904","gmt_changed":"2018-10-12 14:51:44","alt":"","file":{"fid":"233231","name":"10791647824_IMG_0796 copy.jpg","image_path":"\/sites\/default\/files\/images\/10791647824_IMG_0796%20copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/10791647824_IMG_0796%20copy.jpg","mime":"image\/jpeg","size":408574,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/10791647824_IMG_0796%20copy.jpg?itok=YpYedcbH"}}},"media_ids":["612682"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"606703","name":"Constellations Center"},{"id":"1299","name":"GVU Center"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"151","name":"Policy, Social Sciences, and Liberal Arts"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"612183":{"#nid":"612183","#data":{"type":"news","title":"Georgia Tech Researchers Develop AI That Can Create Entirely New 
Games","body":[{"value":"\u003Ch3\u003E\u003Cem\u003EUsing a method dubbed \u0026#39;conceptual expansion,\u0026#39; the system studies old games to create unique mechanics and designs\u003C\/em\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EThe first machine learning-based automated game design tool from a team of researchers at Georgia Tech could empower anyone to make their own games. Utilizing what the researchers dub \u0026ldquo;conceptual expansion,\u0026rdquo; the method recombines machine-learned models of games into new, playable games.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrior automated game design approaches have used hand-authored or crowd-sourced knowledge, which requires a human author to write instructions for the system to produce games. This approach, however, limits the scope and applications of such systems, according to the Georgia Tech team.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EConceptual expansion, on the other hand, takes in an arbitrary number of games \u0026ndash; Super Mario Bros., Kirby\u0026rsquo;s Adventure, and Mega Man, for example \u0026ndash; and then outputs original games with unique mechanics and level designs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;By having this done using machine learning, it doesn\u0026rsquo;t require me or anyone else to author additional content,\u0026rdquo; said \u003Cstrong\u003EMatthew Guzdial\u003C\/strong\u003E, a Ph.D. student in Georgia Tech\u0026rsquo;s School of Interactive Computing (IC) and the lead on the project. \u0026ldquo;I can just give it a new game to learn on, and it will immediately change its output based on what it sees. 
If I want a game like Pong and a game like Tetris, I show two videos to the system and, bam, here are 30 games or however many it comes up with.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis research builds on \u003Ca href=\u0022https:\/\/gvu.gatech.edu\/ai-uses-less-two-minutes-videogame-footage-recreate-game-engine\u0022\u003Eprevious work\u003C\/a\u003E by the team, which includes Guzdial\u0026rsquo;s adviser, IC Associate Professor \u003Cstrong\u003EMark Riedl\u003C\/strong\u003E. That work attempted to empower artificial intelligence (AI) with creativity through the use of video games. Past iterations have looked at learning level design for a game \u0026ndash; Super Mario Bros., for example \u0026ndash; but this led to output levels very similar to those already in the game.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn an alternative method, the researchers drew on an approach called conceptual blending that allowed them to create entirely new types of levels, like underwater castles, which aren\u0026rsquo;t present in the original franchise.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe latest work has used that foundation to produce entirely new content, though challenges still exist, Guzdial said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;What we have to do now is define some way for the AI to know what is good and what is bad,\u0026rdquo; he said. \u0026ldquo;That\u0026rsquo;s the trick. We haven\u0026rsquo;t quite figured it out yet. We\u0026rsquo;ve gotten some bad stuff out, too.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESome examples: An enemy was invented that couldn\u0026rsquo;t move and couldn\u0026rsquo;t die, and a power-up that moved further away any time a player was within a certain distance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;So, those aren\u0026rsquo;t great,\u0026rdquo; Guzdial said. \u0026ldquo;There\u0026rsquo;s no intentionality there. 
It\u0026rsquo;s frustrating, but it\u0026rsquo;s interesting to see how it builds these new relationships. At the end of the day, right now, we are looking for something new and something playable. Have I ever seen this before, and can I actually play this game?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe goal, Guzdial said, is not to replace human game designers but to provide non-game designers with the ability to produce original content on their own. He\u0026rsquo;s often heard others describe games as being a combination of multiple games they\u0026rsquo;ve played before.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Why not empower them with the ability to create on their own?\u0026rdquo; he said. \u0026ldquo;I have no illusions about whether experts can make a better game than this engine, obviously. But can we let people who don\u0026rsquo;t know how to make games become developers on their own?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis research is published in a paper titled \u003Cem\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1809.02232.pdf\u0022\u003EAutomated Game Design via Conceptual Expansion\u003C\/a\u003E\u003C\/em\u003E, which will be presented at the \u003Ca href=\u0022https:\/\/sites.google.com\/ncsu.edu\/aiide-2018\/home?authuser=0\u0022\u003E14\u003Csup\u003Eth\u003C\/sup\u003E AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment\u003C\/a\u003E on Nov. 
13-17 in Edmonton, Alberta, Canada.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Conceptual expansion takes in an arbitrary number of games and then outputs original games with unique mechanics and level designs."}],"uid":"33939","created_gmt":"2018-10-01 20:25:49","changed_gmt":"2018-10-04 12:20:43","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-10-01T00:00:00-04:00","iso_date":"2018-10-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"595803":{"id":"595803","type":"image","title":"Super Mario Bros.","body":null,"created":"1505147296","gmt_created":"2017-09-11 16:28:16","changed":"1505147296","gmt_changed":"2017-09-11 16:28:16","alt":"Super Mario Brothers","file":{"fid":"227053","name":"mario2-cloned_engine.gif","image_path":"\/sites\/default\/files\/images\/mario2-cloned_engine.gif","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/mario2-cloned_engine.gif","mime":"image\/gif","size":4784356,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/mario2-cloned_engine.gif?itok=UwWF9XB6"}}},"media_ids":["595803"],"related_links":[{"url":"https:\/\/public.tableau.com\/views\/Firstmachine-learning-basedautogamedesigner\/Dashboard1?:embed=y\u0026:display_count=yes\u0026:showVizHome=no","title":"Guzdial\u0027s Gaming Research Evolution"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"2835","name":"ai"},{"id":"2556","name":"artificial intelligence"},{"id":"2449","name":"video games"},{"id":"146631","name":"Matthew Guzdial"},{"id":"66281","name":"Mark Riedl"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of 
Computing"},{"id":"179255","name":"automated game design"},{"id":"179256","name":"super mario brothers"},{"id":"179257","name":"kirby\u0027s adventure"},{"id":"179258","name":"mega man"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"611757":{"#nid":"611757","#data":{"type":"news","title":"Good Vibrations: Passive Haptic Learning Could Be a Key to Rehabilitation","body":[{"value":"\u003Ch3\u003E\u003Cem\u003EIt has shown positive results in spinal injury recovery. Can it do the same for stroke patients?\u003C\/em\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EIt was around 2008, and \u003Cstrong\u003EKevin Huang\u003C\/strong\u003E was a master\u0026rsquo;s student at Georgia Tech under the advisement of School of Interactive Computing (IC) Professor \u003Cstrong\u003EThad Starner\u003C\/strong\u003E. Huang, an intrepid student, had big ideas within the field of haptics and wanted to pursue the creation of a full-bodied exoskeleton.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Of course, that costs about a million dollars,\u0026rdquo; Starner, an associate professor at the time, said with a laugh. \u0026ldquo;So, I said, \u0026lsquo;Here\u0026rsquo;s an idea. 
Why don\u0026rsquo;t you try making this glove where you put vibrating motors above each finger and see if you can teach people how to play piano passively?\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStarner was suggesting a new approach that later became known as passive haptic learning. Users would wear the gloves while doing other activities, like reading email or driving, and have the device tap each finger in the appropriate sequence for a piano melody over and over again. The hope was that the repetition would give the wearer the motor memory to later take off the gloves and play the song on the piano, perhaps even a song they had never heard before.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHuang liked the idea and ran with it. Starner, of course, thought the idea had merit or never would have suggested it. But even he was surprised with Huang\u0026rsquo;s results. The glove worked. Users, as was later demonstrated during a live segment on CNN, could quickly learn a tune in a short span of time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I don\u0026rsquo;t believe this,\u0026rdquo; said Starner, recalling his initial reaction. \u0026ldquo;I had never heard of anything like this before in my life. We went and did a 16-person user study, and it came back with even better results. I figured we had something real there, and it was time to dig a little deeper.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhat Starner didn\u0026rsquo;t know was that the initial whim was laying the foundation for an entirely new field of study, a field that has demonstrated positive results in sensation-based learning for music, Morse code, and even Braille.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Ca href=\u0022https:\/\/www.freethink.com\/shows\/superhuman\/season-3\/these-gloves-could-offer-rapid-recovery-from-brain-injuries\u0022\u003E\u003Cstrong\u003EVIDEO: These Gloves Can Teach You How to Play Piano. 
And Maybe Heal Your Brain (Freethink\u0026#39;s Superhuman)\u003C\/strong\u003E\u003C\/a\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ECurrent IC Ph.D. student \u003Cstrong\u003ECaitlyn Seim\u003C\/strong\u003E and her team of researchers have taken this initial discovery to the next level. She defined this new field of study when she unlocked even more important secrets within the real-world impact of passive haptic learning. If it can improve dexterity in healthy hands, could it do the same for those with limitations? Could it, in fact, lead to rehabilitation for someone who has suffered a traumatic spinal injury? Could it even aid in recovery from a stroke?\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EThe elephant in the room\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003ESeim was an undergraduate in electrical engineering at Georgia Tech in 2013 around the time Starner and former Ph.D. student \u003Cstrong\u003ETanya Estes\u003C\/strong\u003E were beginning to understand some of the ramifications of their method. She had done some work with IC Professors \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E and \u003Cstrong\u003EJim Rehg\u003C\/strong\u003E, and Starner approached her about working with him in this field of passive haptic learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEstes and Starner had recently partnered with the Shepherd Center for spinal cord and brain injury rehabilitation to test whether the seemingly random stimulation of the fingers, as demonstrated in the piano project, could lead to increased sensation and dexterity in the hands for injury patients.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt was a logical next step in the research, Starner said, and the results were encouraging. In the study, participants who were injured more than a year prior wore the Mobile Music Touch glove that led to learned skills on a piano. 
They participated in simple piano lessons, and evaluations indicated statistically significant improvement among the experimental group.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey were beginning to scratch the surface of what passive haptic rehabilitation was able to achieve. But, Seim said, there has always been one elephant in the room for anyone in the rehab space.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Stroke,\u0026rdquo; Seim said. \u0026ldquo;It\u0026rsquo;s the clear elephant in the room. It\u0026rsquo;s the No. 1 cause of long-term disability in the United States and a leading cause globally.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENot only is it a huge financial burden, but patients also have precious few options for recovery. Exercise-based therapy such as constraint-induced movement is the state of the art. For immobile hands, Botox injections are also common.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;But that\u0026rsquo;s only temporary,\u0026rdquo; Starner said. \u0026ldquo;It\u0026rsquo;s not retraining the body. It\u0026rsquo;s for relief, not getting your hand back.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe exercise-based therapy can help, but it is painful, expensive, and only available to about 50 percent of patients who meet the baseline dexterity level to begin treatment. The other half have been rendered too disabled to be eligible, so compensatory strategies like spousal assistance are encouraged.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EA new option?\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EWith the positive results in the partial spinal cord injuries study, the thought was that perhaps this stimulation-based method could have a similarly positive impact for stroke patients, as well. 
Like in the previous study, patients wear the gloves, this time for three hours each day for two months.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim, who Starner said recruited her own subjects via news groups and mailing lists, takes measurements weekly. Volunteer clinicians also take measurements at the beginning, middle, and end of the cycle.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile the team is currently preparing a publication on their work and is not yet ready to release results, Seim said the findings have been encouraging.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s exciting,\u0026rdquo; Seim said of the study. \u0026ldquo;We aren\u0026rsquo;t looking for complete recovery, but if we can actually get patients to regain control of their fingers or hands, they can do so much more for themselves.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis method has already shown plenty of promise. If it impacts stroke recovery, that\u0026rsquo;s already a significant portion of the population. And perhaps passive haptic learning is a key that could unlock even more avenues of study.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I think there\u0026rsquo;s just enormous potential in the area of haptics, wearables, and tapping into this area of passive stimulation,\u0026rdquo; Seim said. \u0026ldquo;As I finish my Ph.D. and see the research landscape, I see that we are uncovering a new paradigm beyond just the awesome applications like learning a melody on the piano or having a great system to teach Braille. These are great applications in themselves, but this is a whole new cognitive approach, and it\u0026rsquo;s very exciting.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Studies have shown that passive haptic learning can help patients suffering from spinal injury. 
Can it also be an option in stroke recovery?"}],"uid":"33939","created_gmt":"2018-09-20 19:17:31","changed_gmt":"2018-09-21 20:21:27","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-09-20T00:00:00-04:00","iso_date":"2018-09-20T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"611755":{"id":"611755","type":"image","title":"Caitlyn Seim - PHL","body":null,"created":"1537470856","gmt_created":"2018-09-20 19:14:16","changed":"1537470856","gmt_changed":"2018-09-20 19:14:16","alt":"Caitlyn Seim showing haptic glove","file":{"fid":"232896","name":"Seim Banner.jpg","image_path":"\/sites\/default\/files\/images\/Seim%20Banner.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Seim%20Banner.jpg","mime":"image\/jpeg","size":170103,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Seim%20Banner.jpg?itok=QblfJAZi"}}},"media_ids":["611755"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"104221","name":"passive haptic learning"},{"id":"179164","name":"spinal injury recovery"},{"id":"179165","name":"stroke recovery"},{"id":"115211","name":"wearable tech"},{"id":"132141","name":"wearables"},{"id":"10353","name":"wearable computing"},{"id":"1944","name":"Thad Starner"},{"id":"170072","name":"Caitlyn Seim"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"611374":{"#nid":"611374","#data":{"type":"news","title":"School of Interactive Computing Launching Podcast to Address the \u0027Big Issues\u0027 in Computing","body":[{"value":"\u003Cp\u003EInteraction, as one might guess, is a key tenet of what the \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E tries to achieve. That could be interaction between people and technology, interaction between researchers and their peer collaborators, or interaction between researchers and the public to achieve an open dialogue over the big issues facing computing today.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo achieve those goals, the School is embarking on an exciting new project, beginning Sept. 18, with the launch of \u003Cstrong\u003EThe Interaction Hour\u003C\/strong\u003E, a podcast hosted by Chair \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/ayanna-howard\u0022\u003E\u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E\u003C\/a\u003E, featuring guest experts expounding on a range of important topics, and crafted by you, the listener.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you think about computing and where it\u0026rsquo;s going, it\u0026rsquo;s really about the intersection between the human experience and computing, which is really what the School of Interactive Computing is all about,\u0026rdquo; Howard said. \u0026ldquo;How do we ensure that our computing technology addresses the needs of real people in society, and not just the lab?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe podcast will focus on a range of topics that affect people in the real world. 
Initial episodes focus on ethics in artificial intelligence \u0026ndash; from self-driving cars to lethal autonomous weapons \u0026ndash; with Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/ronald-arkin\u0022\u003E\u003Cstrong\u003ERon Arkin\u003C\/strong\u003E\u003C\/a\u003E, a new approach to security and privacy called \u0026ldquo;social cybersecurity\u0026rdquo; with Assistant Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/sauvik-das\u0022\u003E\u003Cstrong\u003ESauvik Das\u003C\/strong\u003E\u003C\/a\u003E, and an important look at how social media can be used as a tool to detect changes in mental health with Assistant Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/munmun-dechoudhury\u0022\u003E\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe initial episodes are available online on \u003Ca href=\u0022https:\/\/itunes.apple.com\/us\/podcast\/the-interaction-hour\/id1435564422?mt=2\u0022\u003EiTunes\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/open.spotify.com\/show\/4UZ9Hlniz3FvG8uUGdJOm1?si=FeiB6Dq-QlSx8IzsebDDsQ\u0022\u003ESpotify\u003C\/a\u003E, and \u003Ca href=\u0022https:\/\/www.spreaker.com\/show\/the-interaction-hour\u0022\u003ESpreaker\u003C\/a\u003E, and will be shared on our social media channels \u003Ca href=\u0022http:\/\/www.twitter.com\/ICatGT\u0022\u003ETwitter\u003C\/a\u003E and \u003Ca href=\u0022http:\/\/www.facebook.com\/ICatGT\u0022\u003EFacebook\u003C\/a\u003E over the coming months. Beyond that, however, the podcast will look to the audience to help guide the discussions being had in this podcast.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We really want to hear directly from the public in person, in our classrooms, on the street, or on social media,\u0026rdquo; Howard said. 
\u0026ldquo;We want you to tell us what you want to know about computing and society, and we will find an expert for you to address that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe unique aspect of the School is the wide range of research areas of the IC faculty. Topics from virtual reality to health care, ethics to cybersecurity, information visualization and wearable technology, education, robotics, artificial intelligence, and so much more are all within the realm of what IC researchers do.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAsked whether the School could find an expert for almost any range of computing topic, Howard didn\u0026rsquo;t hesitate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One hundred percent,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYou can subscribe to the podcast at any of the three locations below, with more options to come in the future:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/itunes.apple.com\/us\/podcast\/the-interaction-hour\/id1435564422?mt=2\u0022\u003EiTunes\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.spreaker.com\/show\/the-interaction-hour\u0022\u003ESpreaker\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/open.spotify.com\/show\/4UZ9Hlniz3FvG8uUGdJOm1?si=FeiB6Dq-QlSx8IzsebDDsQ\u0022\u003ESpotify\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EA podcasts page devoted to our production will be launched on the School website next Tuesday.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The podcast, called the Interaction Hour, is launching Tuesday, Sept. 
18 and will be available on iTunes, Spotify, and Spreaker."}],"uid":"33939","created_gmt":"2018-09-12 15:57:55","changed_gmt":"2018-09-12 15:57:55","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-09-12T00:00:00-04:00","iso_date":"2018-09-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"611373":{"id":"611373","type":"image","title":"Interaction Hour","body":null,"created":"1536767585","gmt_created":"2018-09-12 15:53:05","changed":"1536767585","gmt_changed":"2018-09-12 15:53:05","alt":"The Interaction Hour","file":{"fid":"232747","name":"Podcast Banner Image 2.jpg","image_path":"\/sites\/default\/files\/images\/Podcast%20Banner%20Image%202.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Podcast%20Banner%20Image%202.jpg","mime":"image\/jpeg","size":208315,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Podcast%20Banner%20Image%202.jpg?itok=ZcSS853W"}}},"media_ids":["611373"],"related_links":[{"url":"http:\/\/www.twitter.com\/icatgt","title":"IC on Twitter"},{"url":"http:\/\/www.facebook.com\/icatgt","title":"IC on Facebook"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"179079","name":"The interaction hour"},{"id":"166848","name":"School of Interactive Computing"},{"id":"208","name":"computing"},{"id":"623","name":"Technology"},{"id":"179080","name":"ethics in ai"},{"id":"2835","name":"ai"},{"id":"2556","name":"artificial intelligence"},{"id":"167543","name":"social media"},{"id":"167731","name":"social computing"},{"id":"177228","name":"social cybersecurity"},{"id":"1404","name":"Cybersecurity"},{"id":"179081","name":"college fo computing"},{"id":"825","name":"Ayanna Howard"},{"id":"175376","name":"sauvik 
das"},{"id":"89321","name":"Munmun De Choudhury"},{"id":"14444","name":"ron arkin"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"610978":{"#nid":"610978","#data":{"type":"news","title":"Georgia Tech to Present Nine Poster Papers at ECCV 2018","body":[{"value":"\u003Cp\u003ENext week, a group of Georgia Tech students and faculty will travel to Munich, Germany to attend the \u003Ca href=\u0022https:\/\/eccv2018.org\/\u0022\u003EEuropean Conference on Computer Vision (ECCV) 2018\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMore than 700 organizations from industry, academia, and government are represented at the 2018 conference, which is held every two years. Georgia Tech will present eight papers during poster sessions at the premier event and, it is among\u0026nbsp;the top 3 percent of participating institutions based on accepted research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlong with presenting several papers, Georgia Tech faculty members have also participated in organizing ECCV 2018. \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E, \u003Cstrong\u003EIrfan Essa\u003C\/strong\u003E, \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E, and \u003Cstrong\u003EFuxin Li\u003C\/strong\u003E served as area chairs for the event.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;ECCV is an exciting conference to participate in. 
There\u0026rsquo;s a lot of good work that gets presented from top computer vision labs in the world, and it is great that Georgia Tech is one of them! It is a great venue to share our latest ideas and hear what others in the research community are thinking about these days,\u0026rdquo; said \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E, assistant professor in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing.\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech organized the first \u003Ca href=\u0022https:\/\/visualdialog.org\/challenge\/2018\u0022\u003EVisual Dialog Challenge\u003C\/a\u003E, designed to find methods for artificial intelligence agents to hold a meaningful dialog with humans in natural, conversational language about visual content. Winners will be announced at the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference takes place Sept. 
8 through 14 in the heart of Munich at the Gasteig Cultural Center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo see an interactive visualization of the entire ECCV 2018 program, please click \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/ECCV2018-MainProgram\/Dashboard1?:embed=y\u0026amp;:display_count=yes\u0026amp;:showVizHome=no\u0022\u003Ehere.\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor an interactive visualization of ECCV 2018 by institutions with accepted research, please click \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/ECCV2018-Top3\/Dashboard2?:embed=y\u0026amp;:display_count=yes\u0026amp;:showVizHome=no\u0022\u003Ehere.\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAn interactive visualization of ECCV 2018 by people and institutions can be viewed \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/ECCV2018-MainProgram\/Dashboard1?:embed=y\u0026amp;:display_count=yes\u0026amp;:showVizHome=no\u0022\u003Ehere.\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBelow are the titles of Georgia Tech\u0026rsquo;s research being presented this week.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGeorgia Tech at ECCV 2018\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1804.04259.pdf\u0022\u003ELearning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Zhaoyang Lv*, GEORGIA TECH; Kihwan Kim, NVIDIA; Alejandro Troccoli, NVIDIA; Deqing Sun, NVIDIA; Kautz Jan, NVIDIA; James Rehg, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERead our blog post about this paper on the ML@GT blog \u003Ca href=\u0022https:\/\/mlatgt.blog\/2018\/09\/06\/learning-rigidity-and-scene-flow-estimation\/\u0022\u003Ehere.\u003C\/a\u003E \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca 
href=\u0022https:\/\/web.engr.oregonstate.edu\/~lif\/1925.pdf\u0022\u003EMulti-object Tracking with Neural Gating using bilinear LSTMs\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Chanho Kim*, Georgia Tech; Fuxin Li, Oregon State University; James Rehg, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/papers\/Yin_Li_In_the_Eye_ECCV_2018_paper.pdf\u0022\u003EIn the Eye of Beholder: Joint Learning of Gaze and Actions in First Person Vision\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYin Li*, CMU; Miao Liu, Georgia Tech; James Rehg, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1808.02861.pdf\u0022\u003EChoose Your Neuron: Incorporating Domain Knowledge through Neuron Importance\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Ramprasaath Ramasamy Selvaraju*, Georgia Tech; Prithvijit Chattopadhyay, Georgia Institute of Technology; Mohamed Elhoseiny, Facebook; Tilak Sharma, Facebook; Dhruv Batra, Georgia Tech \u0026amp; Facebook AI Research; Devi Parikh, Georgia Tech \u0026amp; Facebook AI Research; Stefan Lee, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERead our blog post about this paper on the ML@GT blog \u003Ca href=\u0022https:\/\/mlatgt.blog\/2018\/09\/05\/choose-your-neuron-incorporating-domain-knowledge-through-neuron-importance\/\u0022\u003Ehere.\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/users.ece.cmu.edu\/~skottur\/papers\/corefnmn_eccv18.pdf\u0022\u003EVisual Coreference Resolution in Visual Dialog using Neural Module Networks\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Satwik Kottur*, Carnegie Mellon University; Jos\u0026eacute; M. F. 
Moura, Carnegie Mellon University; Devi Parikh, Georgia Tech \u0026amp; Facebook AI Research; Dhruv Batra, Georgia Tech \u0026amp; Facebook AI Research; Marcus Rohrbach, Facebook AI Research\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1808.00191.pdf\u0022\u003EGraph R-CNN for Scene Graph Generation\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Jianwei Yang*, Georgia Institute of Technology; Jiasen Lu, Georgia Institute of Technology; Stefan Lee, Georgia Institute of Technology; Dhruv Batra, Georgia Tech \u0026amp; Facebook AI Research; Devi Parikh, Georgia Tech \u0026amp; Facebook AI Research\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERead our blog post about this paper on the ML@GT \u003Ca href=\u0022https:\/\/mlatgt.blog\/2018\/09\/04\/what-is-graph-r-cnn\/\u0022\u003Eblog here.\u003C\/a\u003E \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/wyliu.com\/papers\/LiuECCV18.pdf\u0022\u003ESEAL: A Framework Towards Simultaneous Edge Alignment and Learning\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Zhiding Yu*, NVIDIA; Weiyang Liu, Georgia Tech; Yang Zou, Carnegie Mellon University; Chen Feng, Mitsubishi Electric Research Laboratories (MERL); Srikumar Ramalingam, University of Utah; B. V. K. 
Vijaya Kumar, CMU, USA; Kautz Jan, NVIDIA\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/www.eye.gatech.edu\/swapnet\/paper.pdf\u0022\u003ESwapNet: Image Based Garment Transfer\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy Amit Raj, Georgia Tech; Patsorn Sangkloy, Georgia Tech; Huiwen Chang, Princeton; James Hays, Georgia Tech; Duygu Ceylan, Adobe; and Jingwan Lu, Adobe\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/papers\/Eunji_Chong_Connecting_Gaze_Scene_ECCV_2018_paper.pdf\u0022\u003E\u003Cstrong\u003EConnecting Gaze, Scene, and Attention: Generalized Attention Estimation via Joint Modeling of Gaze and Scene Saliency\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEunji Chong, Nataniel Ruiz, Yongxin Wang, Yun Zhang, Agata Rozga, James M. Rehg, Georgia Tech\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech faculty and students will travel to Munich, Germany to present their research at the European Conference on Computer Vision (ECCV)."}],"uid":"34773","created_gmt":"2018-09-06 16:16:33","changed_gmt":"2018-09-07 17:56:07","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-09-06T00:00:00-04:00","iso_date":"2018-09-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"610984":{"id":"610984","type":"image","title":"ECCV 2018 will be held in Munich, Germany","body":null,"created":"1536253387","gmt_created":"2018-09-06 17:03:07","changed":"1536253387","gmt_changed":"2018-09-06 17:03:07","alt":"","file":{"fid":"232621","name":"Munich_skyline_1-1 
copy.jpg","image_path":"\/sites\/default\/files\/images\/Munich_skyline_1-1%20copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Munich_skyline_1-1%20copy.jpg","mime":"image\/jpeg","size":667391,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Munich_skyline_1-1%20copy.jpg?itok=B_ELUeZu"}}},"media_ids":["610984"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"610957":{"#nid":"610957","#data":{"type":"news","title":"What If Robots Could Learn Skills from Scratch?","body":[{"value":"\u003Cp\u003EAny machine can learn to move with enough engineering, according to \u003Cstrong\u003EKaren Liu, \u003C\/strong\u003Ebut imagine\u003Cstrong\u003E \u003C\/strong\u003Ewhat could happen if machines were able to evolve and learn new motions over time with very little instruction, just like a human child does.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELiu, an associate professor in the \u003Cstrong\u003ESchool of Interactive Computing\u003C\/strong\u003E and member of the \u003Cstrong\u003EMachine Learning Center at Georgia Tech,\u003C\/strong\u003E conducts research on simulating and controlling human and animal movements in the digital world with virtual 
\u0026ldquo;agents\u0026rdquo; or using actual robots in the lab.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECreating moving agents in a digital landscape has been around for many years, but Liu and her team are teaching agents to move by using artificial intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn previous iterations, robots and agents have been taught using reinforcement learning (RL), which requires extensive coding and algorithmic development for each movement, no matter how big or small.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn contrast to the common approach of mimicking motion trajectories, Liu\u0026rsquo;s lab wanted to create a virtual agent that learns how to walk on its own.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERecent advances in deep RL, which combines RL with deep learning, have demonstrated that it is possible to use a \u0026ldquo;minimalist\u0026rdquo; approach to learn locomotion, but the resulting motion appears unnatural.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELiu\u0026rsquo;s team proposed training the agent with curriculum learning and adjustable physical aid to create more natural animal locomotion within the minimalist learning approach.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurriculum learning is, as it sounds, very similar to how a person goes through their educational process. 
An agent is given a simpler task at the beginning of the learning process, and once it masters the skill, it is able to progress to the next lesson.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the challenges researchers face is making sure the agent\u0026rsquo;s motion looks natural.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Without a motion trajectory to mimic, most locomotion produced by deep RL methods is too energetic or asymmetrical,\u0026rdquo; said Liu.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo help combat these issues, Liu and her team have introduced a virtual spring that provides physical aid to the agent during the training process.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor instance, if the agent needs to walk forward, the spring helps to propel it forward. If it is about to fall, the spring pushes it back up. Because the spring is a physical force, its stiffness can easily be adjusted, making the lesson more or less difficult. As the agent learns the skill, the spring is adjusted before eventually being taken out completely.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor Liu, creating generative models for natural animal motion has always been a fascinating research area. \u0026ldquo;We have been trying to mimic the kinematics and the dynamic characteristics of real animal movements. 
Thanks to the recent development in deep reinforcement learning, for the first time, we are able to also mimic \u0026lsquo;how\u0026rsquo; real animals acquire motion skills.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKaren Liu and co-authors Wenhao Yu and Greg Turk recently presented their paper, \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1801.08093\u0022\u003E\u0026ldquo;Learning Symmetric and Low Energy Locomotion,\u0026rdquo;\u003C\/a\u003E at \u003Ca href=\u0022https:\/\/s2018.siggraph.org\/attend\/vancouver\/\u0022\u003ESIGGRAPH 2018\u003C\/a\u003E in Vancouver, B.C., Canada.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A Georgia Tech lab is working to teach robots new skills with minimal data."}],"uid":"34773","created_gmt":"2018-09-05 21:04:06","changed_gmt":"2018-09-07 15:42:45","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-09-05T00:00:00-04:00","iso_date":"2018-09-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"610956":{"id":"610956","type":"image","title":"What If Robots Could Learn Skills from Scratch?","body":null,"created":"1536181222","gmt_created":"2018-09-05 21:00:22","changed":"1536181222","gmt_changed":"2018-09-05 21:00:22","alt":"","file":{"fid":"232612","name":"Screen Shot 2018-09-05 at 11.25.27 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202018-09-05%20at%2011.25.27%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202018-09-05%20at%2011.25.27%20AM.png","mime":"image\/png","size":517127,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202018-09-05%20at%2011.25.27%20AM.png?itok=9ASjP8m5"}}},"media_ids":["610956"],"groups":[{"id":"47223","name":"College of 
Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"611005":{"#nid":"611005","#data":{"type":"news","title":"IC Researchers Utilizing OMSCS as Test Bed for Wearable Tech in Online Learning","body":[{"value":"\u003Cp\u003EA team of researchers in the School of Interactive Computing (IC) and the Online Master of Science in Computer Science (OMSCS) program will investigate the feasibility of using wearable technologies and other types of sensing data to provide context in online learning environments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis is the first funded project that uses OMSCS as a test bed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUnder a $300,000 grant from the National Science Foundation, IC Assistant Professor \u003Cstrong\u003ELauren Wilcox\u003C\/strong\u003E, along with co-principal investigators \u003Cstrong\u003EBetsy DiSalvo\u003C\/strong\u003E (IC), \u003Cstrong\u003EThomas Ploetz\u003C\/strong\u003E (IC), and \u003Cstrong\u003EDavid Joyner\u003C\/strong\u003E (OMSCS), will set up an infrastructure for using wearable technology and interaction analytics to capture students\u0026rsquo; experiences with online courses. 
They will also investigate which personal interactive computing technologies are effective in capturing and modeling context, and what correlations exist between wearable data, analytics from online behavior, self-reports of stress and anxiety and learning outcomes. The initial aim is to determine if the use of wearable technologies could better inform online course delivery, and improve retention and learning outcomes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When we are instructing online courses, we lose an important view into the student experience,\u0026rdquo; Wilcox said. \u0026ldquo;Some students might be paying attention while others aren\u0026rsquo;t. Some might be paying attention, but they still aren\u0026rsquo;t learning. We want to better understand these scenarios and use our knowledge of them to inform better online learning experiences.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Analyzing data from wearables worn by students during online course instruction could help us understand and recognize these scenarios,\u0026rdquo; Ploetz added.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESuch a study may not be possible at the vast majority of research institutions, but the presence of the OMSCS program at Georgia Tech affords a unique opportunity. 
Because of the high volume of participation and enrollment, not to mention the number of quality, dedicated professors, the researchers expect to establish reliable conclusions and create a foundation for future research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We think the OMSCS program is a strong test bed for this research because students are motivated to succeed and course completion rates are very high, and yet the course content and assessments remain extremely rigorous,\u0026rdquo; said Joyner, the associate director of student experience in the College of Computing and a longtime lecturer for OMSCS.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These students experience the same stress, engagement, discouragement, and triumph as traditional students, but online instructors cannot see these states. Wearable technologies may help identify when these states occur and whether they correlate to desirable learning outcomes.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project is a two-year study during which the researchers will establish ground truth on student success and satisfaction. Are they generally happy? Do they disengage or pay greater attention to a specific lecture? What events trigger indicators of stress or anxiety, and at what point is that detrimental to learning?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The idea is not to provide data on individuals to instructors,\u0026rdquo; Wilcox said, addressing concerns of student privacy. \u0026ldquo;First, we hope to see whether we can collect these data points, understand what they might mean for learning, and then provide anonymous aggregated feedback to instructors. 
It\u0026rsquo;s also about how we can help adapt these learning experiences to the individual students.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDown the road, it could also be an important test bed for things like test anxiety or understanding what a flow state \u0026ndash; or, colloquially, being \u0026ldquo;in the zone\u0026rdquo; \u0026ndash; looks like and what features of a lecture lend themselves to it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One of the most exciting aspects of deploying this type of research in OMSCS is the potential scale for future research,\u0026rdquo; DiSalvo said. \u0026ldquo;This grant is laying the groundwork for future research on designing learning, building upon theories of learning with design-based research both at a scale and with detail of individual behavior and feedback that we have not had access to in the past.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile the initial applications of this approach are in online courses, access to this type of data could be used to design many new learning environments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;With just-in-time feedback to the students, we could provide customized learning that really moves away from the traditional class structure,\u0026rdquo; DiSalvo said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBeyond learning, with more and more aspects of daily life going online, Wilcox said that she could also see implications of the findings from this study influencing the design of other online environments, such as job training.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The ultimate goal is to create online learning environments that promote positive human interactions and consider human health and wellness an integral part of the design,\u0026rdquo; she said.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The research project is under a two-year, $300,000 grant from 
the National Science Foundation for faculty Lauren Wilcox, Betsy DiSalvo, Thomas Ploetz, and David Joyner."}],"uid":"33939","created_gmt":"2018-09-06 18:52:47","changed_gmt":"2018-09-06 18:52:47","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-09-06T00:00:00-04:00","iso_date":"2018-09-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"611004":{"id":"611004","type":"image","title":"Online learning stock","body":null,"created":"1536259875","gmt_created":"2018-09-06 18:51:15","changed":"1536259875","gmt_changed":"2018-09-06 18:51:15","alt":"Fingers typing on a laptop keyboard","file":{"fid":"232624","name":"online learning.jpg","image_path":"\/sites\/default\/files\/images\/online%20learning.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/online%20learning.jpg","mime":"image\/jpeg","size":68702,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/online%20learning.jpg?itok=CYZYPb3r"}}},"media_ids":["611004"],"related_links":[{"url":"http:\/\/www.omscs.gatech.edu","title":"Online Master of Science Computer Science"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"1051","name":"Computer Science"},{"id":"208","name":"computing"},{"id":"2483","name":"interactive computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"},{"id":"69631","name":"Online Master of Science in Computer Science"},{"id":"121521","name":"OMSCS"},{"id":"14511","name":"online learning"},{"id":"109121","name":"Lauren Wilcox"},{"id":"11961","name":"betsy disalvo"},{"id":"176045","name":"thomas ploetz"},{"id":"145291","name":"David Joyner"},{"id":"77691","name":"wearable 
technology"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"610021":{"#nid":"610021","#data":{"type":"news","title":"Oh, the Places They\u0027ll Go: Professor Gregory Abowd Looks Back on 30 Ph.D. Graduates","body":[{"value":"\u003Cp\u003EIf you go outside School of Interactive Computing Professor \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E\u0026rsquo;s office, you might find a rather odd collection of keepsakes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA wooden toilet seat sits on a table in the front right corner of his lab space, a metal toilet paper holder not far away. A bobblehead sits behind that, unmistakably the bearded figure of the longtime College of Computing faculty member resting his left foot on the top of a miniature toilet. The smallest of these trinkets is simply a carving of a toilet, and behind it all, yes, the blue plastic door of a portable toilet you might see at a concert or other outdoor venue.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEach item was given to Abowd by his students, curious selections without the proper context. They might indicate underlying animosity if not for the bright smile they bring to Abowd\u0026rsquo;s face as he describes the origin of each.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne man\u0026rsquo;s chamber pot, he might say, is another man\u0026rsquo;s reminder of where he\u0026rsquo;s been and where his students have gone. 
Earlier this year, he graduated his 30\u003Csup\u003Eth\u003C\/sup\u003E Ph.D. student and, among his impressive list of accomplishments, that\u0026rsquo;s the one he cites as the most important.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EA unique tradition\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EThe story goes like this:\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen Abowd was a graduate student at the University of Oxford, his supervisor described his thesis research as \u0026ldquo;a beautiful porcelain pot\u0026rdquo; that he had forgotten to \u0026ndash; we\u0026rsquo;ll just say, \u0026ldquo;fill.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These rather disparaging words have since had a profoundly positive influence on me,\u0026rdquo; stated a posted explanation of the bathroom fixtures around his lab space. \u0026ldquo;Shortly after successfully defending my thesis, a good friend, who knew how deflating my supervisor\u0026rsquo;s words were, gave me a little figurine of a toilet and asked me to keep it nearby in order to preserve my humility throughout my career.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESince then, it has become tradition to continue to keep Abowd humble throughout each accomplishment of his career \u0026ndash; being awarded tenure, a promotion to full professor, distinguished professor, regents\u0026rsquo; professor, and J.Z. Liang Chair.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe remembers each of his 30 Ph.D. grads like family. Comparing them to his own 11 brothers and sisters, whom he can understandably recite in order, he shows an uncanny ability to do the same with his students. Prompted with an easy one \u0026ndash; his first Ph.D. graduate \u0026ndash; Abowd didn\u0026rsquo;t hesitate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Oh, \u003Cstrong\u003EKurt Stirewalt\u003C\/strong\u003E,\u0026rdquo; Abowd immediately said, as if stunned to receive such a softball question. 
\u0026ldquo;He went to Michigan State, but he\u0026rsquo;s no longer there.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIndeed, Stirewalt graduated in 1997, earned an NSF CAREER Award in 2000, and reached the level of associate professor with tenure at Michigan State. Since then, he has gone on to become vice president of application architecture at LogicBox.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOk, a tougher one then: Ph.D. graduate No. 17. A slight hesitation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That would be \u003Cstrong\u003ETracy Westeyn\u003C\/strong\u003E, right?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERight. He advised her with Professor \u003Cstrong\u003EThad Starner\u003C\/strong\u003E through her graduation in 2010. She has since gone on to a career in Washington, D.C.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne more: No. 9. A long pause.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Would that be Lonnie?\u0026rdquo; Abowd asked. He\u0026rsquo;s one off. \u003Cstrong\u003ELonnie Harvel\u003C\/strong\u003E was No. 8 and graduated in 2005. \u0026ldquo;So, the one right after. That would be \u003Cstrong\u003EGiovanni\u003C\/strong\u003E (\u003Cstrong\u003EIachello\u003C\/strong\u003E).\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERight again. Iachello graduated in 2006 and is now the head of international and new markets at LinkedIn.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EWhere they\u0026rsquo;ve gone\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EThe rest of \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/Abowds30PhDs\/Dashboard1?:embed=y\u0026amp;:display_count=yes\u0026amp;publish=yes\u0026amp;:showVizHome=no\u0022\u003EAbowd\u0026#39;s academic family\u003C\/a\u003E\u003C\/strong\u003E\u0026nbsp;is an equally impressive reminder of the quality of the students that have come through his lab. One former student is at Google, another at Samsung Electronics. 
There\u0026rsquo;s one at Amazon Lab126 and one at Intel Labs. One has an ever-expanding list of patents, and a host of others have gone on to become academic faculty around the world \u0026ndash; Sweden, India, Korea, Texas, New York, Washington, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbowd hesitated when asked whether 30 Ph.D. graduates was a big number, then put it into context.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s a big number when you think that most of them have gone on to academic positions,\u0026rdquo; Abowd said. \u0026ldquo;I wouldn\u0026rsquo;t want to say that\u0026rsquo;s the best option for everyone, but it is something I\u0026rsquo;m proud of.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe impact is exponential. Many of his graduate students have gone on to have graduate students of their own. While his 30 students\u0026rsquo; destinations are relatively easy to map, the task gets significantly more challenging with second and third generations that number in the hundreds.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey get together as often as possible, usually at conferences like the ACM CHI Conference on Human Factors in Computing Systems or Ubicomp. At CHI 2018, Abowd estimated about 50 or 60 former students, including masters and undergraduates, attended their get-together. Abowd has connected with these students in a profound way \u0026ndash; from one-on-one mentorship to \u0026ldquo;making music\u0026rdquo; together in a band (okay, they were just pretending).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s the proudest academic accomplishment for me,\u0026rdquo; Abowd said of his students. \u0026ldquo;The students and the quality of the students. We are teachers first, and you develop very close long-term relationships with them. 
Almost all of them, I am still in contact with, so it really is like your own children.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Gregory Abowd graduated his 30th Ph.D. student in the spring. They have each gone on to impressive careers in academia and industry."}],"uid":"33939","created_gmt":"2018-08-20 17:56:53","changed_gmt":"2018-08-24 21:05:38","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-08-20T00:00:00-04:00","iso_date":"2018-08-20T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"610020":{"id":"610020","type":"image","title":"Gregory Abowd 30 PhDs","body":null,"created":"1534787486","gmt_created":"2018-08-20 17:51:26","changed":"1534787486","gmt_changed":"2018-08-20 17:51:26","alt":"Gregory Abowd stands at his office door","file":{"fid":"232306","name":"Abowd Main.jpg","image_path":"\/sites\/default\/files\/images\/Abowd%20Main.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Abowd%20Main.jpg","mime":"image\/jpeg","size":134951,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Abowd%20Main.jpg?itok=PR5Rj6i2"}},"610350":{"id":"610350","type":"image","title":"Where Are They Now? 
","body":null,"created":"1535144557","gmt_created":"2018-08-24 21:02:37","changed":"1535144632","gmt_changed":"2018-08-24 21:03:52","alt":"","file":{"fid":"232415","name":"Abowd\u0027s 30 PhDs_Mercury.png","image_path":"\/sites\/default\/files\/images\/Abowd%27s%2030%20PhDs_Mercury.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Abowd%27s%2030%20PhDs_Mercury.png","mime":"image\/png","size":614465,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Abowd%27s%2030%20PhDs_Mercury.png?itok=aU-lW1AB"}}},"media_ids":["610020","610350"],"related_links":[{"url":"https:\/\/public.tableau.com\/views\/Abowds30PhDs\/Dashboard1?:embed=y\u0026:display_count=yes\u0026publish=yes\u0026:showVizHome=no","title":"Data Interactive - Abowd\u0027s Graduates Across the World"},{"url":"http:\/\/ubicomp.cc.gatech.edu\/index.html","title":"The Georgia Tech Ubicomp Group"},{"url":"https:\/\/www.ic.gatech.edu\/academics\/phd-programs","title":"School of Interactive Computing Ph.D. 
Programs"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"11002","name":"Gregory Abowd"},{"id":"115361","name":"Ph.D"},{"id":"10353","name":"wearable computing"},{"id":"178784","name":"Kurt Stirewalt"},{"id":"178785","name":"Tracy Westeyn"},{"id":"1944","name":"Thad Starner"},{"id":"178786","name":"Lonnie Harvel"},{"id":"178787","name":"Giovanni Iachello"},{"id":"178788","name":"The School of Interactive Computing"},{"id":"654","name":"College of Computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"610180":{"#nid":"610180","#data":{"type":"news","title":"IC\u2019s John Stasko Recognized as Regents Professor By University System of Georgia","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing\u0026rsquo;s (IC) \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/john-stasko\u0022\u003E\u003Cstrong\u003EJohn Stasko\u003C\/strong\u003E\u003C\/a\u003E was appointed Regents Professor, the highest academic and research recognition bestowed by the Board of Regents of the University System of Georgia, earlier this month.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStasko was one of four Georgia Tech faculty members to earn the title in the latest announcement and the only from the College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I feel very 
honored and grateful to receive this title,\u0026rdquo; Stasko said. \u0026ldquo;For a professor to have their research and teaching recognized in this way is the highest compliment. It would not have been possible without my students and faculty colleagues here over the past 30 years. I\u0026rsquo;d like to thank them for their contributions and their friendship.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStasko joined the Georgia Tech faculty in 1989. His primary research area has been in human-computer interaction, with a focus on information and visual analytics. He is the director of the \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/gvu\/ii\/\u0022\u003EInformation Interfaces Research Group\u003C\/a\u003E at Georgia Tech, whose mission is to help people take advantage of information to enrich their lives. He was \u003Ca href=\u0022http:\/\/www.chi.gatech.edu\/2016\/chi-academy-2016\/\u0022\u003Enamed to the prestigious CHI Academy in 2016\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech Provost \u003Cstrong\u003ERafael L. Bras\u003C\/strong\u003E offered his congratulations to all of the new designees.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Congratulations to my esteemed colleagues on this new distinction,\u0026rdquo; said Bras, who serves as executive vice president for Academic Affairs and the K. Harrison Brown Family Chair. \u0026ldquo;The world\u0026rsquo;s best and brightest scholars and researchers can be found at Georgia Tech, and this recognition is evidence of their relentless pursuit of excellence in teaching, research, and scholarship.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIC Chair \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/ayanna-howard\u0022\u003E\u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E\u003C\/a\u003E also offered a note of support.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;John has long been an admired leader within the School of Interactive Computing,\u0026rdquo; she said. 
\u0026ldquo;In a school full of the best and brightest minds in computing, he has distinguished himself through his devotion to research and education. We couldn\u0026rsquo;t be prouder of him for this well-deserved distinction.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEach year, college deans may nominate two academic faculty members for the Regents Professor title and one research faculty member for the Regents Researcher title. The other three new Regents Professors are \u003Cstrong\u003EAjay Kohli\u003C\/strong\u003E (Professor, Scheller College of Business), \u003Cstrong\u003ETimothy Lieuwen\u003C\/strong\u003E (Professor, School of Aerospace Engineering), and \u003Cstrong\u003ECatherine L. Ross\u003C\/strong\u003E (Professor, School of City and Regional Planning). The Regents Researcher is \u003Cstrong\u003EMichael O. Rodgers\u003C\/strong\u003E (Principal Research Scientist, School of Civil and Environmental Engineering).\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Stasko was one of four Georgia Tech faculty members to earn the title in the latest announcement and the only from the College of Computing."}],"uid":"33939","created_gmt":"2018-08-22 17:37:16","changed_gmt":"2018-08-22 17:37:16","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-08-22T00:00:00-04:00","iso_date":"2018-08-22T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"394731":{"id":"394731","type":"image","title":"John Stasko","body":null,"created":"1449246346","gmt_created":"2015-12-04 16:25:46","changed":"1475895089","gmt_changed":"2016-10-08 02:51:29","alt":"John 
Stasko","file":{"fid":"75643","name":"stasko14.jpg","image_path":"\/sites\/default\/files\/images\/stasko14.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/stasko14.jpg","mime":"image\/jpeg","size":61355,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/stasko14.jpg?itok=7W7zKdFy"}}},"media_ids":["394731"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"11632","name":"john stasko"},{"id":"19401","name":"Regents Professors"},{"id":"1966","name":"usg"},{"id":"171841","name":"University System of Georgia Board of Regents"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"},{"id":"825","name":"Ayanna Howard"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"609771":{"#nid":"609771","#data":{"type":"news","title":"Irfan Essa Gives Invited Talk on Computational Video for Sports","body":[{"value":"\u003Cp\u003EFrom the introduction of the \u0026ldquo;1\u003Csup\u003Est\u003C\/sup\u003E and Ten\u0026rdquo; line in NFL broadcasts in 1998 to the use of the Hawk-Eye system for line calls in tennis and cricket, sports viewers now expect to see graphics on their screens to help explain the action.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EIrfan Essa\u003C\/strong\u003E, 
director of the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E, recently gave an invited talk, \u003Cem\u003EComputational Video for Sports: Challenges for Large-Scale Video Analysis\u003C\/em\u003E, detailing why technology areas such as computer vision and augmented reality are so prevalent in sports broadcasts. During his talk, which took place at the \u003Ca href=\u0022http:\/\/www.vap.aau.dk\/cvsports\/\u0022\u003E4th International Workshop on Computer Vision in Sports (CVsports)\u003C\/a\u003E\u0026nbsp;in June, he also discussed the challenges that computer vision scientists face when creating technology to improve the sports industry.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne such challenge involves using computer vision and machine learning to create models that can identify common sports scenes and help write news captions. But what humans easily interpret is often not so simple for machines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen analyzing a photo of Georgia Tech\u0026rsquo;s head football coach Paul Johnson getting a celebratory Gatorade bath on the field, a computer algorithm misidentified a camera in the scene as a hair dryer, and the resulting caption read \u0026ldquo;Man taking shower while others watch.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EImproving computer vision so that it can better account for context, and building better training models, are among the next steps in the field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EScientists are also working on how to make this type of technology available for use on lower-quality video. 
Current computer vision techniques work well for broadcast-quality video in part because of the detail available in high-definition, but researchers would like to make the techniques accurate and cost-effective for lower-quality video so that high school sports can also take advantage of their benefits.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy 2019, video content is expected to account for 80 percent of the world\u0026rsquo;s total web content. Understanding context, analyzing lower-quality video content, and establishing better metrics to analyze the data are all key next steps for this segment of computer science, according to Essa.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe workshop took place as part of the CVF\/IEEE \u003Cstrong\u003E\u003Ca href=\u0022http:\/\/cvpr2018.thecvf.com\/\u0022\u003EComputer Vision and Pattern Recognition (CVPR)\u003C\/a\u003E\u003C\/strong\u003E conference in Salt Lake City, Utah. Georgia Tech researchers and alumni presented work in computer vision and were among 6,000 attendees at the conference. 
Throughout the week, \u003Ca href=\u0022http:\/\/ml.gatech.edu\/hg\/item\/607130\u0022\u003Eattendees presented their latest research papers\u003C\/a\u003E through oral presentations, spotlights, poster sessions, and workshops.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Machine Learning Director, Irfan Essa, discussed how computer vision technology is being used in the sports industry at the 4th International Workshop on Computer Vision in Sports (CVsports)."}],"uid":"34773","created_gmt":"2018-08-15 14:22:20","changed_gmt":"2018-08-16 15:38:02","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-08-15T00:00:00-04:00","iso_date":"2018-08-15T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"609738":{"id":"609738","type":"image","title":"IN OR OUT: Hawk-Eye is a vision processing technology that tracks balls with millimeter accuracy to give viewers an up-close view of the action. 
Photo credit: Wikimedia Commons","body":null,"created":"1534268749","gmt_created":"2018-08-14 17:45:49","changed":"1534343189","gmt_changed":"2018-08-15 14:26:29","alt":"","file":{"fid":"232205","name":"The_decision_of_In_or_Out_with_the_help_of_Technology_at_Wimbledon.jpg","image_path":"\/sites\/default\/files\/images\/The_decision_of_In_or_Out_with_the_help_of_Technology_at_Wimbledon.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/The_decision_of_In_or_Out_with_the_help_of_Technology_at_Wimbledon.jpg","mime":"image\/jpeg","size":874538,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/The_decision_of_In_or_Out_with_the_help_of_Technology_at_Wimbledon.jpg?itok=iBhNuB7G"}}},"media_ids":["609738"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"143","name":"Digital Media and Entertainment"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallison.blinder@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"609490":{"#nid":"609490","#data":{"type":"news","title":"School of IC Chair Ayanna Howard Named CMD-IT 2018 Richard A. 
Tapia Award Winner","body":[{"value":"\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/www.cmd-it.org\/\u0022\u003ECenter for Minorities and People with Disabilities in Information Technology\u003C\/a\u003E (CMD-IT) announced School of Interactive Computing Chair \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/ayanna-howard\u0022\u003E\u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E\u003C\/a\u003E as the winner of the Richard A. Tapia Achievement Award for Scientific Scholarship, Civic Science and Diversifying Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Ayanna Howard has been a leading innovator and researcher in the fields of robotics, computer vision, and artificial intelligence,\u0026rdquo; said Valerie Taylor, CMD-IT CEO and President. \u0026ldquo;Applications of her work have included the development of assistive robots in the home, therapy gaming apps, and remote exploration of extreme environments.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Throughout her career she has focused on bringing girls, underrepresented minorities, and people with disabilities into computing through programs related to robotics. Ayanna\u0026rsquo;s focus on engaging people with disabilities resulted in the creation of Zyrobotics, LLC., which provides inclusive mobile technologies that make learning accessible.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Richard A. Tapia Award is awarded annually to an individual who demonstrates significant research leadership and strong commitment and contributions to diversifying computing. It will be presented at the \u003Ca href=\u0022http:\/\/www.tapiaconference.org\/\u0022\u003E2018 ACM Richard Tapia Celebration of Diversity in Computing Conference\u003C\/a\u003E. Themed \u0026ldquo;Diversity: Roots of Innovation,\u0026rdquo; the Tapia Conference will be held Sept. 
19-22, in Orlando, Florida. The conference brings together students, faculty, researchers, and professionals from all backgrounds and ethnicities to promote and celebrate diversity in computing. The Tapia Conference is sponsored by the Association for Computing Machinery (ACM) and presented by CMD-IT.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe vision of CMD-IT is to contribute to the national need for an effective workforce in computing and IT through inclusive programs and initiatives focused on minorities and people with disabilities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;CMD-IT is focused on improving engagement among diverse communities in computing,\u0026rdquo; Howard said. \u0026ldquo;This is something I have long considered among my missions as a researcher and an educator. To be recognized by such a wonderful organization is truly an honor.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information and to register for the Tapia Conference, visit \u003Ca href=\u0022http:\/\/www.tapiaconference.org\u0022\u003Ewww.tapiaconference.org\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Richard A. 
Tapia Award is awarded annually to an individual who demonstrates significant research leadership and strong commitment and contributions to diversifying computing."}],"uid":"33939","created_gmt":"2018-08-08 21:11:08","changed_gmt":"2018-08-08 21:11:08","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-08-08T00:00:00-04:00","iso_date":"2018-08-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"599486":{"id":"599486","type":"image","title":"Ayanna Howard headshot","body":null,"created":"1512405411","gmt_created":"2017-12-04 16:36:51","changed":"1512405411","gmt_changed":"2017-12-04 16:36:51","alt":"Ayanna Howard","file":{"fid":"228550","name":"Howard 2.jpg","image_path":"\/sites\/default\/files\/images\/Howard%202.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Howard%202.jpg","mime":"image\/jpeg","size":327205,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Howard%202.jpg?itok=r92ozUio"}}},"media_ids":["599486"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"825","name":"Ayanna Howard"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"},{"id":"433","name":"IC"},{"id":"2435","name":"ECE"},{"id":"66891","name":"Georgia Tech School of Electrical and Computer Engineering"},{"id":"109","name":"Georgia Tech"},{"id":"505","name":"gatech"},{"id":"5022","name":"Richard Tapia"},{"id":"175368","name":"Tapia Celebration of Diversity in 
Computing"},{"id":"170724","name":"TAPIA"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"608094":{"#nid":"608094","#data":{"type":"news","title":"Fifth Summer of Civic Data Science Program Presents Community-Focused Solutions","body":[{"value":"\u003Cp\u003EStudents presented data-oriented solutions for civic problems, from public health to environmentalism, at the \u003Ca href=\u0022http:\/\/civicdatascience.gatech.edu\/\u0022\u003ECivic Data Science\u003C\/a\u003E (CDS) finale on July 19.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe 10-week summer program brings college students from across the country to Georgia Tech to use data science research and applications for direct civic and social impact. The National Science Foundation\u0026ndash;funded program is now in its fifth year, previously under the name Data Science for Social Good.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis year\u0026rsquo;s projects addressed gentrification, sustainable transportation, pest control, and environmental monitoring. 
Each project pairs with local organizations, such as the City of Atlanta and neighborhood planning units, to ensure the work can help the community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Community organizations help make sure the kind of problems we\u0026rsquo;re working on are grounded in reality,\u0026rdquo; said School of Computer Science (SCS) Professor and program co-director \u003Ca href=\u0022https:\/\/www.scs.gatech.edu\/people\/11077\/ellen-zeguras\u0022\u003E\u003Cstrong\u003EEllen Zegura\u003C\/strong\u003E\u003C\/a\u003E. \u0026ldquo;It\u0026rsquo;s public problem solving.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt also lets students see how their data skills can be used outside the classroom.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The CDS program is unique in that it provides the perfect balance between research and application,\u0026rdquo; said \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/michael-koohang-5611ba14a\/\u0022\u003E\u003Cstrong\u003EMichael Koohang\u003C\/strong\u003E\u003C\/a\u003E, a rising fourth-year student at Middle Georgia State University. 
\u0026ldquo;While we conducted formal research during the program, we were also applying our discoveries to tangible pieces of work that had almost immediate impact on local communities.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor many students, who often come from smaller liberal arts colleges, this was their first opportunity in an environment as large and well-resourced as Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I knew I was interested in attending graduate school before I came to Tech, but working full-time in a lab, with a mentor and team, has given me invaluable insight about the day-to-day of research,\u0026rdquo; said Wellesley College rising third-year student \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/annabel-rothschild-488327124\/\u0022\u003E\u003Cstrong\u003EAnnabel Rothschild\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETech\u0026rsquo;s focus on interdisciplinary research also showed students the potential fields they could go into.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Prior to coming to CDS, I had a lot of difficulty trying to figure out how to combine both my majors, statistical and data sciences and government, into something that excited me,\u0026rdquo; said \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/arielle-dror-8a488215b\/\u0022\u003E\u003Cstrong\u003EArielle Dror\u003C\/strong\u003E\u003C\/a\u003E, a rising third-year student at Smith College. \u0026ldquo;Spending time in a bigger university than my own home institution showed me the exciting world of interdisciplinary research.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZegura co-runs the program with \u003Ca href=\u0022https:\/\/www.lmc.gatech.edu\/\u0022\u003ELiterature, Media, and Communication\u003C\/a\u003E (LMC) Associate Professor \u003Ca href=\u0022https:\/\/ledantec.net\/\u0022\u003E\u003Cstrong\u003EChristopher Le Dantec\u003C\/strong\u003E\u003C\/a\u003E. 
They also draw on the support of faculty mentors: \u003Ca href=\u0022https:\/\/spp.gatech.edu\/\u0022\u003ESchool of Public Policy\u003C\/a\u003E Assistant Professor \u003Ca href=\u0022https:\/\/spp.gatech.edu\/people\/person\/omar-isaac-asensio\u0022\u003E\u003Cstrong\u003EOmar Asensio\u003C\/strong\u003E\u003C\/a\u003E, LMC Associate Professor \u003Ca href=\u0022https:\/\/www.iac.gatech.edu\/people\/faculty\/disalvo\u0022\u003E\u003Cstrong\u003ECarl DiSalvo\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E,\u003C\/strong\u003E LMC Assistant Professor \u003Ca href=\u0022https:\/\/www.iac.gatech.edu\/people\/faculty\/loukissas\u0022\u003E\u003Cstrong\u003EYanni Loukissas\u003C\/strong\u003E\u003C\/a\u003E\u003Cem\u003E, \u003C\/em\u003ESCS research scientist \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/amanda-meng\u0022\u003E\u003Cstrong\u003EAmanda Meng\u003C\/strong\u003E\u003C\/a\u003E\u003Cem\u003E, \u003C\/em\u003Eand \u003Ca href=\u0022https:\/\/www.ce.gatech.edu\/\u0022\u003ESchool of Civil and Environmental Engineering\u003C\/a\u003E Associate Professor\u003Cem\u003E \u003C\/em\u003E\u003Ca href=\u0022https:\/\/ce.gatech.edu\/people\/faculty\/5861\/overview\u0022\u003E\u003Cstrong\u003EKari Watkins\u003C\/strong\u003E\u003C\/a\u003E\u003Cem\u003E.\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe four projects included:\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u003Cstrong\u003EProject:\u003C\/strong\u003E Rat Watch by \u003Cstrong\u003EWinne Luo\u003C\/strong\u003E and Michael Koohang\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMentors: \u003C\/strong\u003ECarl DiSalvo, Amanda Meng, Ellen Zegura\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EProblem:\u003C\/strong\u003E Rats are everywhere, but no reliable public data is kept because rat control is outside city jurisdiction and only homeowners can report rats. 
Without oversight, rats can increase disease, asthma, and stress, and cause infrastructure damage.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESolution: \u003C\/strong\u003ELuo and Koohang created an SMS chat bot where residents could report rat sightings via text. They created an interactive map with this data that lets users toggle between layers of code violations to see where rats are. The map can help city officials direct mitigation efforts and provide citizens with a tool to spur the government into action.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u003Cstrong\u003EProject:\u003C\/strong\u003E Atlanta Map Room by Annabel Rothschild and \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/muniba-khan-bb1883b4\/\u0022\u003E\u003Cstrong\u003EMuniba Khan\u003C\/strong\u003E\u003C\/a\u003E\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMentors:\u003C\/strong\u003E Yanni Loukissas\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EProblem: \u003C\/strong\u003EMaps often depict an idealized environment created by the population in power and not everyone\u0026rsquo;s reality. This team wanted to document and reflect upon the connections and disjunctions between civic data and lived experience in Atlanta.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESolution\u003C\/strong\u003E: They created the Atlanta Map Room in the Technology Square Research Building, where everyone can collaborate on large-scale, interpretive maps. An app lets users select an area of the city to focus on and project it onto a sheet of paper; users can then write their experiences on the map and bring their narrative back to the data. The map allows users to critique data and recognize that it may not always tell the full story.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u003Cstrong\u003EProject: \u003C\/strong\u003EPopular Sentiment of U.S. 
Electric Vehicle Drivers by Arielle Dror, \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/emerson-wenzel\/\u0022\u003E\u003Cstrong\u003EEmerson Wenzel\u003C\/strong\u003E\u003C\/a\u003E, and \u003Cstrong\u003EKevin Alvarez\u003C\/strong\u003E\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMentors: \u003C\/strong\u003EOmar Asensio\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EProblem:\u003C\/strong\u003E Although electric vehicles make up just 2 percent of car sales today, they are projected to be 55 percent in 2050. Despite this boom, charging station experiences are less than accessible.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESolution\u003C\/strong\u003E: This group pulled data from the app Plugshare, where users rate electric vehicle charging stations, to determine how well the current electric vehicle infrastructure serves drivers. They used machine learning to automatically classify all the reviews as having a negative or positive sentiment. Overall, roughly 40 percent of drivers have a poor experience at charging stations, a problem that needs to be fixed as the market expands.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u003Cstrong\u003EProject:\u003C\/strong\u003E Seeing Like A Bike by Nic Alton and \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/saumik-narayanan-533b1b132\/\u0022\u003E\u003Cstrong\u003ESaumik Narayanan\u003C\/strong\u003E\u003C\/a\u003E\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMentors: \u003C\/strong\u003EChristopher Le Dantec and Kari Watkins\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EProblem: \u003C\/strong\u003ETraffic in Atlanta grows worse every year, but better bike infrastructure can alleviate congestion. 
Yet the heaviest trafficked routes often have higher pollution, which adversely affects cyclists\u0026rsquo; health.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESolution: \u003C\/strong\u003EStudents attached low-cost air quality sensors to bikes and ran a series of calibration tests against high-precision sensing equipment. The data will enable a large-scale deployment of bikes to collect air quality data from around the city, determining \u0026nbsp;which routes are too unhealthy for cyclists.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Students worked on environmental, public health, and civic-minded projects for the fifth annual Civic Data Science program."}],"uid":"34541","created_gmt":"2018-07-26 14:28:23","changed_gmt":"2018-07-31 14:30:00","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-07-26T00:00:00-04:00","iso_date":"2018-07-26T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"608096":{"id":"608096","type":"image","title":"CDS group","body":null,"created":"1532615454","gmt_created":"2018-07-26 14:30:54","changed":"1532615454","gmt_changed":"2018-07-26 14:30:54","alt":"CDS group photo","file":{"fid":"231942","name":"29744050898_e54ef5b162_k.jpg","image_path":"\/sites\/default\/files\/images\/29744050898_e54ef5b162_k.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/29744050898_e54ef5b162_k.jpg","mime":"image\/jpeg","size":858101,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/29744050898_e54ef5b162_k.jpg?itok=bgNBBUKG"}}},"media_ids":["608096"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50875","name":"School of Computer Science"},{"id":"1299","name":"GVU Center"},{"id":"545781","name":"Institute for Data Engineering and 
Science"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"},{"id":"39511","name":"Public Service, Leadership, and Policy"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:tess.malone@cc.gatech.edu\u0022\u003Etess.malone@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["tess.malone@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"607617":{"#nid":"607617","#data":{"type":"news","title":"Georgia Tech Solves \u0027Texture Fill\u0027 Problem with Machine Learning","body":[{"value":"\u003Cp\u003EA new machine learning technique developed at Georgia Tech may soon give budding fashionistas and other designers the freedom to create realistic, high-resolution visual content without relying on complicated 3-D rendering programs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/texturegan.eye.gatech.edu\/\u0022\u003ETextureGAN\u003C\/a\u003E is the first deep image synthesis method that can realistically spread multiple textures across an object. With this new approach, users drag one or more texture patches onto a sketch \u0026mdash; say of a handbag or a skirt \u0026mdash;\u0026nbsp;and the network texturizes the sketch to accurately account for 3-D surfaces and lighting.\u003C\/p\u003E\r\n\r\n\u003Ch5\u003E\u003Ca href=\u0022https:\/\/youtu.be\/bCBDPfWdpDc\u0022 target=\u0022_blank\u0022\u003E[VIDEO: See\u0026nbsp;TextureGAN\u0026nbsp;in action]\u003C\/a\u003E\u003C\/h5\u003E\r\n\r\n\u003Cp\u003EPrior to this work, producing realistic images of this kind could be tedious and time-consuming, particularly for those with limited experience. 
And, according to the researchers, existing machine learning-based methods are not particularly good at generating high-resolution texture details.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EUsing a neural network to improve results\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The \u0026lsquo;texture fill\u0026rsquo; operation is difficult for a deep network to learn because it not only has to propagate the color, but also has to learn how to synthesize the structure of texture across 3-D shapes,\u0026rdquo; said \u003Cstrong\u003EWenqi Xian\u003C\/strong\u003E, computer science (CS) major and co-lead developer.\u003C\/p\u003E\r\n\r\n\u003Ch5\u003E\u003Ca href=\u0022https:\/\/youtu.be\/XWr0Fg5XbPs?t=1h32m44s\u0022 target=\u0022_blank\u0022\u003E[VIDEO:\u0026nbsp;Wenqi\u0026nbsp;Xian presents TextureGAN at CVPR\u0026nbsp;2018]\u003C\/a\u003E\u003C\/h5\u003E\r\n\r\n\u003Cp\u003EThe researchers initially trained a type of neural network called a conditional generative adversarial network (GAN) on sketches and textures extracted from thousands of ground-truth photographs. In this approach,\u0026nbsp;a generator neural network creates images that a discriminator neural network then evaluates for accuracy. The goal is for both to get increasingly better at their respective tasks, which leads to more realistic outputs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo ensure that the results look as realistic as possible, researchers fine-tuned the new system to minimize pixel-to-pixel style differences between generated images and training data. But the results were not quite what the team had expected.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EProducing more realistic images\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We realized that we needed a stronger constraint to preserve high-level texture in our outputs,\u0026rdquo; said Georgia Tech CS Ph.D. student \u003Cstrong\u003EPatsorn Sangkloy\u003C\/strong\u003E. 
\u0026ldquo;That\u0026rsquo;s when we developed an additional discriminator network that we trained on a separate texture dataset. Its only job is to be presented with two samples and ask \u0026lsquo;are these the same or not?\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith its sole focus on a single question, this type of discriminator is much harder to fool. This, in turn, leads the generator to produce images that are not only realistic, but also true to the texture patch the user placed onto the sketch.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work was presented in June at the conference on\u0026nbsp;\u003Ca href=\u0022http:\/\/cvpr2018.thecvf.com\/\u0022 target=\u0022_blank\u0022\u003EComputer Vision and Pattern Recognition (CVPR) 2018\u003C\/a\u003E held in Salt Lake City and is funded through National Science Foundation award 1561968. \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Associate Professor \u003Cstrong\u003EJames Hays\u003C\/strong\u003E advises Xian and Sangkloy. 
Georgia Tech is collaborating on this research with Adobe Research, University of California at Berkeley, and Argo AI.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A new technique allows users to spread textures across sketches of objects to create high resolution images."}],"uid":"32045","created_gmt":"2018-07-10 17:52:19","changed_gmt":"2018-07-12 14:29:39","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-07-10T00:00:00-04:00","iso_date":"2018-07-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"607631":{"id":"607631","type":"image","title":"Georgia Tech Using Machine Learning to Solve Texture Fill Problem","body":null,"created":"1531254179","gmt_created":"2018-07-10 20:22:59","changed":"1531254179","gmt_changed":"2018-07-10 20:22:59","alt":"Georgia Tech Using Machine Learning to Solve Texture Fill Problem","file":{"fid":"231786","name":"zebra-texture-11297063007KgE.jpg","image_path":"\/sites\/default\/files\/images\/zebra-texture-11297063007KgE.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/zebra-texture-11297063007KgE.jpg","mime":"image\/jpeg","size":462426,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/zebra-texture-11297063007KgE.jpg?itok=KwMlptQX"}}},"media_ids":["607631"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[{"id":"9167","name":"machine learning"},{"id":"178516","name":"texture GAN"},{"id":"178517","name":"neural network"},{"id":"109581","name":"deep learning"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and 
Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Communications Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=Texture%20Patch\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"607634":{"#nid":"607634","#data":{"type":"news","title":"App Developed by College of Computing Undergrads is a One-Stop Shop to Report Human Trafficking","body":[{"value":"\u003Cp\u003EAn updated mobile application designed by undergraduates in Georgia Tech\u0026#39;s College of Computing on behalf of\u0026nbsp;\u003Ca href=\u0022https:\/\/airlineamb.org\/\u0022\u003EAirline Ambassadors International\u003C\/a\u003E could drastically reduce human trafficking through airlines by giving flight attendants necessary tools to effectively pinpoint threats and tip off authorities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFirst developed in 2015, the application, called TIP Line, received a needed update from students working in Georgia Tech\u0026rsquo;s CS Junior Design class, making reporting to authorities faster and more reliable by bringing trained users directly into contact with local law enforcement at the destination airport rather than relying on largely unreliable national hotlines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETIP Line leverages trained airline professionals who have graduated from the AAI C-TIP (Counter-Trafficking in Persons) training class and been given a registration key to use the app, ensuring that law enforcement will take any tip from the app seriously.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInstead of making tips to one of over 190 global national hotlines, many of which only function during local work hours and also suffer 
from a high rate of false reporting, TIP Line\u0026rsquo;s trained reporters are automatically brought into contact with the correct authorities, many of whom have also taken the peer-to-peer training class with the airline personnel.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETIP Line challenges the state of the art in reporting human trafficking by air because of its peer-to-peer and time-sensitive nature, as well as its ability to provide a data-rich format that allows video, photo, voice, and text to be anonymously transmitted to assigned law enforcement in real time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to the \u003Ca href=\u0022http:\/\/www.ilo.org\/global\/lang--en\/index.htm\u0022\u003EInternational Labor Organization\u003C\/a\u003E, forced labor and human trafficking are estimated to generate more than $150 billion annually and claim 40.3 million victims worldwide. In Atlanta alone, the sex trade is thought to generate $290 million annually. In Dallas, the total is over $350 million.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Human trafficking is one of the fastest-growing activities in transnational crime,\u0026rdquo; said \u003Cstrong\u003EWilliam Cheng\u003C\/strong\u003E, one of the Georgia Tech students who worked on the app. \u0026ldquo;However, it has a weakness. When a human trafficker is transporting a victim in the air, the favored method of transport, they become vulnerable because they are in a public location surrounded by airport security and an unwilling victim. With this vulnerability in mind, our team aims to drastically reduce trafficking by giving flight attendants the proper tools to recognize and report.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe app allows a user to choose who to contact with the information. A geo-location function can help decide which phone number is appropriate, or users can select a destination airport to find the best contact in the app\u0026rsquo;s database. 
If a different authority is required, users can scroll through a list of available numbers also stored in the database.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There is a growing global trend in the airline industry for reporting to appropriate airport police and not national tip lines,\u0026rdquo; said \u003Cstrong\u003EDavid Rivard\u003C\/strong\u003E, a member of the AAI board and the organization\u0026rsquo;s liaison with the Georgia Tech team. \u0026ldquo;Interpol, for example, has a new program called \u003Cem\u003EAIRCop\u003C\/em\u003E that makes available to signatory airports its 24-7 crime database for identification of perpetrators for all things trafficking.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThen, those reporting are given the option to provide a description, video, audio, or photo as evidence to the local authority, giving them additional information to discern the threat and how to apprehend the perpetrator and victim.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Perhaps most importantly, an app reporting solution creates a concerned citizens network, which is most important for combatting crime networks,\u0026rdquo; Rivard said. 
\u0026ldquo;TIP Line Version 2.0 (the current version) collects data and can distill it into actionable intelligence.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurrently, this version of the app is being used by over 7,000 trainees \u0026ndash; airline flight crews, airport staff, and others \u0026ndash; who can monitor over 168,000,000 passengers each year. It is also a model for other transport services, such as Uber, that seek to add similar features to their applications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe TIP team, as the Georgia Tech students are affectionately called, aims to present the app to Interpol in hopes of further integrating it with enforcement agencies and, eventually, taking it beyond just human trafficking.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In terms of reporting, especially in the time-critical air transport environment, we can no longer afford to live in the telephone age,\u0026rdquo; Rivard said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Georgia Tech students \u0026ndash; Cheng, \u003Cstrong\u003EKenta Kawaguchi\u003C\/strong\u003E, \u003Cstrong\u003EKyle Al-Shafei\u003C\/strong\u003E, \u003Cstrong\u003EMicah Jo\u003C\/strong\u003E, and \u003Cstrong\u003EHeather Schirra\u003C\/strong\u003E \u0026ndash; were connected with the program through School of Interactive Computing Professor Emeritus \u003Cstrong\u003EJim Foley\u003C\/strong\u003E, who heard through a colleague that AAI was utilizing a rudimentary first version of the app that needed improvements. 
Cheng and Schirra are currently continuing work on the app.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETIP Line is available to trained users on iPhone and Android.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The app was developed in tandem with Airline Ambassadors International and is available to airline professionals trained in identifying trafficking."}],"uid":"33939","created_gmt":"2018-07-11 14:35:45","changed_gmt":"2018-07-11 14:35:45","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-07-11T00:00:00-04:00","iso_date":"2018-07-11T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"607633":{"id":"607633","type":"image","title":"AAI Tip Line","body":null,"created":"1531319522","gmt_created":"2018-07-11 14:32:02","changed":"1531319522","gmt_changed":"2018-07-11 14:32:02","alt":"AAI TIP Line App","file":{"fid":"231787","name":"Screen Shot 2018-07-11 at 10.30.54 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202018-07-11%20at%2010.30.54%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202018-07-11%20at%2010.30.54%20AM.png","mime":"image\/png","size":35614,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202018-07-11%20at%2010.30.54%20AM.png?itok=6z_sJiRA"}}},"media_ids":["607633"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"654","name":"College of Computing"},{"id":"78531","name":"Jim Foley"},{"id":"178520","name":"Airline Ambassadors International"},{"id":"178521","name":"TIP Line"},{"id":"62081","name":"human 
trafficking"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"},{"id":"71901","name":"Society and Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"607374":{"#nid":"607374","#data":{"type":"news","title":"Georgia Tech Ranks in the Top 20 for Most Accepted Papers at ICML 2018","body":[{"value":"\u003Cp\u003EWith more than\u0026nbsp;300 universities and companies represented, the Georgia Institute of Technology\u0026nbsp;ranks #14 for the number of accepted papers at the\u0026nbsp;\u003Ca href=\u0022https:\/\/icml.cc\/\u0022\u003EInternational Conference on Machine Learning (ICML)\u003C\/a\u003E\u0026nbsp;in Stockholm, Sweden, July 10-15.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EICML is the leading international machine learning conference and is supported by the\u0026nbsp;\u003Ca href=\u0022http:\/\/www.machinelearning.org\/\u0022\u003EInternational Machine Learning Society (IMLS)\u003C\/a\u003E. 
This year marks the 35th year for the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThroughout the five-day conference, more than seven Georgia Tech faculty members and eight students will present 11 papers through oral presentations, poster sessions, tutorials, invited talks, and workshops.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe group will have representatives from the School of Interactive Computing, the School of Computational Science and Engineering, the Machine Learning Center at Georgia Tech, and the GVU Center. Associate Director of Georgia Tech\u0026rsquo;s Machine Learning Center, \u003Cstrong\u003ELe Song\u003C\/strong\u003E, leads the pack contributing to six of the 11 papers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026#39;s great to see that Georgia Tech continues to make great scientific contributions to the international machine learning community, especially in the area of deep learning over graphs and reinforcement learning,\u0026rdquo; said Song.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference takes place in conjunction with the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.ijcai-18.org\/\u0022\u003EInternational Joint Conference on Artificial Intelligence (IJCAI)\u003C\/a\u003E, the\u0026nbsp;\u003Ca href=\u0022http:\/\/celweb.vuse.vanderbilt.edu\/aamas18\/\u0022\u003EInternational Conference on Autonomous Agents and Multiagent Systems (AAMAS)\u003C\/a\u003E, and the\u0026nbsp;\u003Ca href=\u0022http:\/\/iccbr18.com\/\u0022\u003EInternational Conference on Case-Based\u0026nbsp;Reasoning\u0026nbsp;(ICCBR).\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBelow are the titles and abstracts from each of Georgia Tech\u0026rsquo;s papers. 
An\u0026nbsp;\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/GeorgiaTechICML2018\/Dashboard1?:embed=y\u0026amp;:display_count=yes\u0026amp;:showVizHome=no\u0022\u003Einteractive data graphic\u003C\/a\u003E is available and allows users to\u0026nbsp;explore the papers, authors, and presentation schedule. Coverage will\u0026nbsp;be available during the conference on Twitter at @mlatgt and Instagram at @mlatgeorgiatech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGeorgia Tech at ICML 2018\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1708.04783.pdf\u0022\u003ENon-convex Conditional Gradient Sliding\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EChao Qu (Technion) \u0026middot; Yan Li (Georgia Institute of Technology) \u0026middot; Huan Xu (Georgia Tech)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbstract: We investigate a projection free method, namely conditional gradient sliding on batched, stochastic and finite-sum non-convex problem. CGS is a smart combination of Nesterov\u0026#39;s accelerated gradient method and Frank-Wolfe (FW) method, and outperforms FW in the convex setting by saving gradient computations. However, the study of CGS in the non-convex setting is limited. 
In this paper, we propose the non-convex conditional gradient sliding (NCGS) which surpasses the non-convex Frank-Wolfe method in batched, stochastic and finite-sum setting.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1802.07814.pdf\u0022\u003ELearning to Explain: An Information-Theoretic Perspective on Model Interpretation\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EJianbo Chen (University of California, Berkeley) \u0026middot; Le Song (Georgia Institute of Technology) \u0026middot; Martin Wainwright (University of California at Berkeley) \u0026middot; Michael Jordan (UC Berkeley)\u0026nbsp;\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbstract: We introduce instance wise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. 
We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1710.01410.pdf\u0022\u003ELearning Registered Point Processes from Idiosyncratic Observations\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EHongteng Xu (InfiniaML, Inc) \u0026middot; Lawrence Carin (Duke) \u0026middot; Hongyuan Zha (Georgia Institute of Technology)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbstract: A parametric point process model is developed, with modeling based on the assumption that sequential observations often share latent phenomena, while also possessing idiosyncratic effects. An alternating optimization method is proposed to learn a \u0026quot;registered\u0026quot; point process that accounts for shared structure, as well as \u0026quot;warping\u0026quot; functions that characterize idiosyncratic aspects of each observed sequence. Under reasonable constraints, in each iteration we update the sample-specific warping functions by solving a set of constrained nonlinear programming problems in parallel, and update the model by maximum likelihood estimation. The justifiability, complexity and robustness of the proposed method are investigated in detail, and the influence of sequence stitching on the learning results is examined empirically. 
Experiments on both synthetic and real-world data demonstrate that the method yields explainable point process models, achieving encouraging results compared to state-of-the-art methods.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1710.10568.pdf\u0022\u003EStochastic Training of Graph Convolutional Networks\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EJianfei Chen (Tsinghua University) \u0026middot; Jun Zhu (Tsinghua University) \u0026middot; Le Song (Georgia Institute of Technology)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbstract: Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes the representation of a node recursively from its neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts on reducing the receptive field size by subsampling neighbors do not have a convergence guarantee, and their receptive field size per node is still in the order of hundreds. In this paper, we develop control variate-based algorithms, which allow sampling an arbitrarily small neighbor size. Furthermore, we prove new theoretical guarantee for our algorithms to converge to a local optimum of GCN. Empirical results show that our algorithms enjoy a similar convergence with the exact algorithm using only two neighbors per node. 
The runtime of our algorithms on a large Reddit dataset is only one seventh of previous neighbor sampling algorithms.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/openreview.net\/pdf?id=SyPMT6gAb\u0022\u003EVariance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EHang Wu (Georgia Institute of Technology) \u0026middot; May Wang (Georgia Institute of Technology)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;Abstract: Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts. One of the major challenges of off-policy learning is to derive counterfactual estimators that also have low variance and thus low generalization error.\u003Cbr \/\u003E\r\nIn this work, inspired by learning bounds for importance sampling problems, we present a new counterfactual learning principle for off-policy learning with bandit feedbacks. Our method regularizes the generalization error by minimizing the distribution divergence between the logging policy and the new policy, and removes the need for iterating through all training samples to compute sample variance regularization in prior work. 
With neural network policies, our end-to-end training algorithms using variational divergence minimization showed significant improvement over conventional baseline algorithms and are also consistent with our theoretical results.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1802.03493.pdf\u0022\u003EMore Robust Doubly Robust Off-policy Evaluation\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EMehrdad Farajtabar (Georgia Tech) \u0026middot; Yinlam Chow (DeepMind) \u0026middot; Mohammad Ghavamzadeh (Google DeepMind and INRIA)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbstract: We study the problem of off-policy evaluation (OPE) in reinforcement learning (RL), where the goal is to estimate the performance of a policy from the data generated by another policy(ies). In particular, we focus on the doubly robust (DR) estimators that consist of an importance sampling (IS) component and a performance model, and utilize the low (or zero) bias of IS and low variance of the model at the same time. Although the accuracy of the model has a huge impact on the overall performance of DR, most of the work on using the DR estimators in OPE has been focused on improving the IS part, and not much on how to learn the model. In this paper, we propose alternative DR estimators, called more robust doubly robust (MRDR), that learn the model parameter by minimizing the variance of the DR estimator. We first present a formulation for learning the DR model in RL. We then derive formulas for the variance of the DR estimator in both contextual bandits and RL, such that their gradients with reference to the model parameters can be estimated from the samples, and propose methods to efficiently minimize the variance. We prove that the MRDR estimators are strongly consistent and asymptotically optimal. 
Finally, we evaluate MRDR in bandits and RL benchmark problems, and compare its performance with the existing methods.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1806.02371.pdf\u0022\u003EAdversarial Attack on Graph Structured Data\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EHanjun Dai (Georgia Tech) \u0026middot; Hui Li (Ant Financial Services Group) \u0026middot; Tian Tian () \u0026middot; Xin Huang (Ant Financial) \u0026middot; Lin Wang () \u0026middot; Jun Zhu (Tsinghua University) \u0026middot; Le Song (Georgia Institute of Technology)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbstract: Deep learning on graph structures has shown exciting results in various applications. However, little attention has been paid to the robustness of such models, in contrast to numerous research works for image or text adversarial attack and defense. In this paper, we focus on the adversarial attacks that fool the model by modifying the combinatorial structure of data. We first propose a reinforcement learning based attack method that learns the generalizable attack policy, while only requiring prediction labels from the target classifier. Also, variants of genetic algorithms and gradient methods are presented in the scenario where prediction confidence or gradients are available. We use both synthetic and real-world data to show that, a family of Graph Neural Network models is vulnerable to these attacks, in both graph-level and node-level classification tasks. 
We also show such attacks can be used to diagnose the learned classifiers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u0026nbsp;\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1806.02934.pdf\u0022\u003ELearn from Your Neighbor: Learning Multi-modal Mappings from Sparse Annotations\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EAshwin Kalyan (Georgia Tech) \u0026middot; Stefan Lee (Georgia Institute of Technology) \u0026middot; Anitha Kannan (Curai) \u0026middot; Dhruv Batra (Georgia Institute of Technology \/ Facebook AI Research)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;Abstract: Many structured prediction problems (particularly in vision and language domains) are ambiguous, with multiple outputs being \u0026lsquo;correct\u0026rsquo; for an input \u0026ndash; e.g. there are many ways of describing an image, multiple ways of translating a sentence; however, exhaustively annotating the applicability of all possible outputs is intractable due to exponentially large output spaces (e.g. all English sentences). In practice, these problems are cast as multi-class prediction, with the likelihood of only a sparse set of annotations being maximized \u0026ndash; unfortunately penalizing for placing beliefs on plausible but unannotated outputs. We make and test the following hypothesis \u0026ndash; for a given input, the annotations of its neighbors may serve as an additional supervisory signal. Specifically, we propose an objective that transfers supervision from neighboring examples. We first study the properties of our developed method in a controlled toy setup before reporting results on multi-label classification and two image-grounded sequence modeling tasks \u0026ndash; captioning and question generation. 
We evaluate using standard task-specific metrics and measures of output diversity, finding consistent improvements over standard maximum likelihood training and other baselines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u0026nbsp;\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1710.07742.pdf\u0022\u003ETowards Black-box Iterative Machine Teaching\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EWeiyang Liu (Georgia Tech) \u0026middot; Bo Dai (Georgia Institute of Technology) \u0026middot; Xingguo Li (University of Minnesota) \u0026middot; Zhen Liu (Georgia Tech) \u0026middot; James Rehg (Georgia Tech) \u0026middot; Le Song (Georgia Institute of Technology)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbstract: In this paper, we make an important step towards the black-box machine teaching by considering the cross-space machine teaching, where the teacher and the learner use different feature representations and the teacher can not fully observe the learner\u0026#39;s model. In such scenario, we study how the teacher is still able to teach the learner to achieve faster convergence rate than the traditional passive learning. We propose an active teacher model that can actively query the learner (i.e., make the learner take exams) for estimating the learner\u0026#39;s status and provably guide the learner to achieve faster convergence. The sample complexities for both teaching and query are provided. 
In the experiments, we compare the proposed active teacher with the omniscient teacher and verify the effectiveness of the active teacher model.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1704.01665.pdf\u0022\u003ELearning Steady-States of Iterative Algorithms over Graphs\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EHanjun Dai (Georgia Tech) \u0026middot; Zornitsa Kozareva () \u0026middot; Bo Dai (Georgia Institute of Technology) \u0026middot; Alex Smola (Amazon) \u0026middot; Le Song (Georgia Institute of Technology)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbstract: The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph-embedding network capturing the current state of the solution. 
We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1712.10285.pdf\u0022\u003ESBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EBo Dai (Georgia Institute of Technology) \u0026middot; Albert Shaw (Georgia Tech) \u0026middot; Lihong Li (Microsoft Research) \u0026middot; Lin Xiao (Microsoft Research) \u0026middot; Niao He (UIUC) \u0026middot; Zhen Liu (Georgia Tech) \u0026middot; Jianshu Chen (Microsoft Research) \u0026middot; Le Song (Georgia Institute of Technology)\u003C\/em\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbstract: When function approximation is used, solving the Bellman optimality equation with stability guarantees has remained a major open problem in reinforcement learning for decades. The fundamental difficulty is that the Bellman operator may become an expansion in general, resulting in oscillating and even divergent behavior of popular algorithms like Q-learning. In this paper, we revisit the Bellman equation, and reformulate it into a novel primal-dual optimization problem using Nesterov\u0026#39;s smoothing technique and the Legendre-Fenchel transformation. We then develop a new algorithm, called Smoothed Bellman Error Embedding, to solve this optimization problem where any differentiable function class may be used. We provide what we believe to be the first convergence guarantee for general nonlinear function approximation, and analyze the algorithm\u0026#39;s sample complexity. 
Empirically, our algorithm compares favorably to state-of-the-art baselines in several benchmark control problems.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech Presents 11 Papers at ICML 2018"}],"uid":"34773","created_gmt":"2018-06-28 16:21:14","changed_gmt":"2018-06-29 16:06:30","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-06-28T00:00:00-04:00","iso_date":"2018-06-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"607375":{"id":"607375","type":"image","title":"ICML 2018","body":null,"created":"1530202921","gmt_created":"2018-06-28 16:22:01","changed":"1530202921","gmt_changed":"2018-06-28 16:22:01","alt":"","file":{"fid":"231679","name":"icml.jpg","image_path":"\/sites\/default\/files\/images\/icml.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/icml.jpg","mime":"image\/jpeg","size":431371,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/icml.jpg?itok=EnZmp09R"}},"607413":{"id":"607413","type":"image","title":"Explore GT@ICML 2018","body":null,"created":"1530286568","gmt_created":"2018-06-29 15:36:08","changed":"1530286568","gmt_changed":"2018-06-29 15:36:08","alt":"","file":{"fid":"231693","name":"GT_ICML2018_viz.gif","image_path":"\/sites\/default\/files\/images\/GT_ICML2018_viz.gif","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/GT_ICML2018_viz.gif","mime":"image\/gif","size":288286,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/GT_ICML2018_viz.gif?itok=sWOdhYS5"}}},"media_ids":["607375","607413"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"37041","name":"Computational Science and Engineering"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and 
Engineering"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[{"id":"9167","name":"machine learning"},{"id":"654","name":"College of Computing"},{"id":"6381","name":"Conferences"}],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie Blinder\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallison.blinder@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["allison.blinder@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"607130":{"#nid":"607130","#data":{"type":"news","title":"Georgia Tech Presenting 13 Papers at Premier Computer Vision Conference CVPR","body":[{"value":"\u003Cp\u003EA host of Georgia Tech students and faculty will travel to Salt Lake City, Utah, this week to attend the conference on \u003Ca href=\u0022http:\/\/cvpr2018.thecvf.com\/\u0022\u003EComputer Vision and Pattern Recognition\u003C\/a\u003E (CVPR) 2018.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECVPR is the premier annual computer vision event and comprises a main conference and several co-located workshops and short courses. 
As in years past, faculty and students in the \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) and associated research units \u0026ndash; the \u003Ca href=\u0022http:\/\/ml.gatech.edu\u0022\u003ECenter for Machine Learning\u003C\/a\u003E, the \u003Ca href=\u0022http:\/\/gvu.gatech.edu\u0022\u003EGVU Center\u003C\/a\u003E, and the \u003Ca href=\u0022http:\/\/robotics.gatech.edu\u0022\u003EInstitute for Robotics and Intelligent Machines\u003C\/a\u003E \u0026ndash; will participate at all levels of the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;CVPR is the top event in computer vision, and Georgia Tech has long had a substantial presence at the conference,\u0026rdquo; said \u003Cstrong\u003EIrfan Essa\u003C\/strong\u003E, IC professor and director of the Center for Machine Learning. \u0026ldquo;This year, we have a number of faculty and student researchers participating in the technical program and we\u0026rsquo;re excited to share our research with the rest of the community.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMore than 10 faculty members and many more student researchers will represent Georgia Tech at the five-day event, sharing 13 papers in oral, spotlight, poster, and demo presentations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference will take place June 18-22, with the main technical program set to begin on June 19. Essa will give a workshop talk at the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBelow are titles and abstracts of Georgia Tech\u0026rsquo;s research being presented this week. 
The visualization below shows all of Georgia Tech\u0026rsquo;s research, as well as dates, times, and locations for the associated talks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGeorgia Tech at CVPR 2018\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/fredhohman.com\/papers\/18-interactive-cvpr.pdf\u0022\u003EInteractive Classification for Deep Learning Interpretation\u003C\/a\u003E \u003C\/strong\u003E(Angel Cabrera, Fred Hohman, Jason Lin, Polo Chau)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: We present an interactive system enabling users to manipulate images to explore the robustness and sensitivity of deep learning image classifiers. Using modern web technologies to run in-browser inference, users can remove image features using inpainting algorithms and obtain new classifications in real time, which allows them to ask a variety of \u0026ldquo;what if\u0026rdquo; questions by experimentally modifying images and seeing how the model reacts. Our system allows users to compare and contrast what image regions humans and machine learning models use for classification, revealing a wide range of surprising results ranging from spectacular failures (e.g., a water bottle image becomes a concert when removing a person) to impressive resilience (e.g., a baseball player image remains correctly classified even without a glove or base).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_cvpr_2018\/papers\/Kundu_3D-RCNN_Instance-Level_3D_CVPR_2018_paper.pdf\u0022\u003E3D-RCNN: Instance-Level 3D Object Reconstruction via Render-and-Compare\u003C\/a\u003E \u003C\/strong\u003E(Abhijit Kundu, Yin Li, Jim Rehg)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: We present a fast inverse-graphics framework for instance-level 3D scene understanding. 
We train a deep convolutional network that learns to map image regions to the full 3D shape and pose of all object instances in the image. Our method produces a compact 3D representation of the scene, which can be readily used for applications like autonomous driving. Many traditional 2D vision outputs, like instance segmentations and depth-maps, can be obtained by simply rendering our output 3D scene model. We exploit class-specific shape priors by learning a low dimensional shape-space from collections of CAD models. We present novel representations of shape and pose, that strive towards better 3D equivariance and generalization. In order to exploit rich supervisory signals in the form of 2D annotations like segmentation, we propose a differentiable Render-and-Compare loss that allows 3D shape and pose to be learned with 2D supervision. We evaluate our method on the challenging real-world datasets of Pascal3D+ and KITTI, where we achieve state-of-the-art results.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1711.11543.pdf\u0022\u003EEmbodied Question Answering\u003C\/a\u003E \u003C\/strong\u003E(Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, Dhruv Batra)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: We present a new AI task \u0026ndash; Embodied Question Answering (EmbodiedQA) \u0026ndash; where an agent is spawned at a random location in a 3D environment and asked a question (\u0026lsquo;What color is the car?\u0026rsquo;). In order to answer, the agent must first intelligently navigate to explore the environment, gather information through first-person (egocentric) vision, and then answer the question (\u0026lsquo;orange\u0026rsquo;). This challenging task requires a range of AI skills \u0026ndash; active perception, language understanding, goal-driven navigation, commonsense reasoning, and grounding of language into actions. 
In this work, we develop the environments, end-to-end-trained reinforcement learning agents, and evaluation protocols for EmbodiedQA.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1711.06330.pdf\u0022\u003EAttend and Interact: Higher-Order Object Interactions for Video Understanding\u003C\/a\u003E \u003C\/strong\u003E(Chih-Yao Ma, Asim Kadav, Iain Melvin, Zsolt Kira, Ghassan AlRegib, Hans Peter Graf)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: Human actions often involve complex interactions across several inter-related objects in the scene. However, existing approaches to fine-grained video understanding or visual relationship detection often rely on single object representation or pairwise object relationships. Furthermore, learning interactions across multiple objects in hundreds of frames for video is computationally infeasible and performance may suffer since a large combinatorial space has to be modeled. In this paper, we propose to efficiently learn higher-order interactions between arbitrary subgroups of objects for fine-grained video understanding. We demonstrate that modeling object interactions significantly improves accuracy for both action recognition and video captioning, while saving more than 3-times the computation over traditional pairwise relationships. The proposed method is validated on two large-scale datasets: Kinetics and ActivityNet Captions. Our SINet and SINet-Caption achieve state-of-the-art performances on both datasets even though the videos are sampled at a maximum of 1 FPS. 
To the best of our knowledge, this is the first work modeling object interactions on open domain large-scale video datasets, and we additionally model higher-order object interactions which improves the performance with low computational costs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1712.00377.pdf\u0022\u003EDon\u0026rsquo;t Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering\u003C\/a\u003E \u003C\/strong\u003E(Aishwarya Agrawal, Dhruv Batra, Devi Parikh, Aniruddha Kembhavi)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: A number of studies have found that today\u0026rsquo;s Visual Question Answering (VQA) models are heavily driven by superficial correlations in the training data and lack sufficient image grounding. To encourage development of models geared toward the latter, we propose a new setting for VQA where for every question type, train and test sets have different prior distributions of answers. Specifically, we present new splits of the VQA v1 and VQA v2 datasets, which we call Visual Question Answering under Changing Priors (VQA-CP v1 and VQA-CP v2, respectively). First, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting. Second, we propose a novel Grounded Visual Question Answering model (GVQA) that contains inductive biases and restrictions in the architecture specifically designed to prevent the model from \u0026lsquo;cheating\u0026rsquo; by primarily relying on priors in the training data. Specifically, GVQA explicitly disentangles the recognition of visual concepts present in the image from the identification of plausible answer space for a given question, enabling the model to more robustly generalize across different distributions of answers. GVQA is built off an existing VQA model \u0026ndash; Stacked Attention Networks (SAN). 
Our experiments demonstrate that GVQA significantly outperforms SAN on both VQA-CP v1 and VQA-CP v2 datasets. Interestingly, it also outperforms more powerful VQA models such as Multimodal Compact Bilinear Pooling (MCB) in several cases. GVQA offers strengths complementary to SAN when trained and evaluated on the original VQA v1 and VQA v2 datasets. Finally, GVQA is more transparent and interpretable than existing VQA models.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1711.06368.pdf\u0022\u003EMobile Video Object Detection With Temporally-Aware Feature Maps\u003C\/a\u003E \u003C\/strong\u003E(Mason Liu, Menglong Zhu)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: This paper introduces an online model for object detection in videos designed to run in real-time on low-powered mobile and embedded devices. Our approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interweaved recurrent-convolutional architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. Our network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing detection methods in video, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the Imagenet VID 2015 dataset. 
Our model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1804.08071.pdf\u0022\u003EDecoupled Networks\u003C\/a\u003E \u003C\/strong\u003E(Weiyang Liu, Zhen Liu, Zhiding Yu, Bo Dai, Rongmei Lin, Yisen Wang, Jim Rehg, Le Song)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: Inner product-based convolution has been a central component of convolutional neural networks (CNNs) and the key to learning visual representations. Inspired by the observation that CNN-learned features are naturally decoupled with the norm of features corresponding to the intra-class variation and the angle corresponding to the semantic difference, we propose a generic decoupled learning framework which models the intra-class variation and semantic difference independently. Specifically, we first reparametrize the inner product to a decoupled form and then generalize it to the decoupled convolution operator which serves as the building block of our decoupled networks. We present several effective instances of the decoupled convolution operator. Each decoupled operator is well motivated and has an intuitive geometric interpretation. Based on these decoupled operators, we further propose to directly learn the operator from data. 
Extensive experiments show that such decoupled reparameterization renders significant performance gain with easier convergence and stronger robustness.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1712.03342.pdf\u0022\u003EGeometry-Aware Learning of Maps for Camera Localization\u003C\/a\u003E \u003C\/strong\u003E(Samarth Brahmbhatt, Jinwei Gu, Kihwan Kim, James Hays, Jan Kautz)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: Maps are a key component in image-based camera localization and visual SLAM systems: they are used to establish geometric constraints between images, correct drift in relative pose estimation, and relocalize cameras after lost tracking. The exact definitions of maps, however, are often application-specific and hand-crafted for different scenarios (e.g. 3D landmarks, lines, planes, bags of visual words). We propose to represent maps as a deep neural net called MapNet, which enables learning a data-driven map representation. Unlike prior work on learning maps, MapNet exploits cheap and ubiquitous sensory inputs like visual odometry and GPS in addition to images and fuses them together for camera localization. Geometric constraints expressed by these inputs, which have traditionally been used in bundle adjustment or pose-graph optimization, are formulated as loss terms in MapNet training and also used during inference. In addition to directly improving localization accuracy, this allows us to update the MapNet (i.e., maps) in a self-supervised manner using additional unlabeled video sequences from the scene. We also propose a novel parameterization for camera rotation which is better suited for deep-learning based camera pose regression. 
Experimental results on both the indoor 7-Scenes dataset and the outdoor Oxford RobotCar dataset show significant performance improvement over prior work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1804.00092.pdf\u0022\u003EIterative Learning With Open-Set Noisy Labels\u003C\/a\u003E\u003C\/strong\u003E (Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: Large-scale datasets possessing clean label annotations are crucial for training Convolutional Neural Networks (CNNs). However, labeling large-scale data can be very costly and error-prone, and even high-quality datasets are likely to contain noisy (incorrect) labels. Existing works usually employ a closed-set assumption, whereby the samples associated with noisy labels possess a true class contained within the set of known classes in the training data. However, such an assumption is too restrictive for many applications, since samples associated with noisy labels might in fact possess a true class that is not present in the training data. We refer to this more complex scenario as the open-set noisy label problem and show that it is nontrivial to make accurate predictions. To address this problem, we propose a novel iterative learning framework for training CNNs on datasets with open-set noisy labels. Our approach detects noisy labels and learns deep discriminative features in an iterative fashion. To benefit from the noisy label detection, we design a Siamese network to encourage clean labels and noisy labels to be dissimilar. A reweighting module is also applied to simultaneously emphasize the learning from clean labels and reduce the effect caused by noisy labels. 
Experiments on CIFAR-10, ImageNet and real-world noisy (web-search) datasets demonstrate that our proposed model can robustly train CNNs in the presence of a high proportion of open-set as well as closed-set noisy labels.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1803.09845.pdf\u0022\u003ENeural Baby Talk\u003C\/a\u003E \u003C\/strong\u003E(Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: We introduce a novel framework for image captioning that can produce natural language explicitly grounded in entities that object detectors find in the image. Our approach reconciles classical slot filling approaches (that are generally better grounded in images) with modern neural captioning approaches (that are generally more natural sounding and accurate). Our approach first generates a sentence \u0026lsquo;template\u0026rsquo; with slot locations explicitly tied to specific image regions. These slots are then filled in by visual concepts identified in the regions by object detectors. The entire architecture (sentence template generation and slot filling with object detectors) is end-to-end differentiable. We verify the effectiveness of our proposed model on different image captioning tasks. On standard image captioning and novel object captioning, our model reaches state-of-the-art on both COCO and Flickr30k datasets. 
We also demonstrate that our model has unique advantages when the train and test distributions of scene compositions \u0026ndash; and hence language priors of associated captions \u0026ndash; are different.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1706.02823.pdf\u0022\u003ETextureGAN: Controlling Deep Image Synthesis With Texture Patches\u003C\/a\u003E \u003C\/strong\u003E(Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu, Chen Fang, Fisher Yu, James Hays)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: In this paper, we investigate deep image synthesis guided by sketch, color, and texture. Previous image synthesis methods can be controlled by sketch and color strokes but we are the first to examine texture control. We allow a user to place a texture patch on a sketch at arbitrary locations and scales to control the desired output texture. Our generative network learns to synthesize objects consistent with these texture suggestions. To achieve this, we develop a local texture loss in addition to adversarial and content loss to train the generative network. We conduct experiments using sketches generated from real images and textures sampled from a separate texture database and results show that our proposed algorithm is able to generate plausible images that are faithful to user controls. Ablation studies show that our proposed pipeline can generate more realistic images than adapting existing methods directly.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1801.02753.pdf\u0022\u003ESketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis\u003C\/a\u003E\u003C\/strong\u003E (Wengling Chen, James Hays)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: Synthesizing realistic images from human drawn sketches is a challenging problem in computer graphics and vision. 
Existing approaches either need exact edge maps or rely on retrieval of existing photographs. In this work, we propose a novel Generative Adversarial Network (GAN) approach that synthesizes plausible images from 50 categories including motorcycles, horses and couches. We demonstrate a data augmentation technique for sketches which is fully automatic, and we show that the augmented data is helpful to our task. We introduce a new network building block suitable for both the generator and discriminator which improves the information flow by injecting the input image at multiple scales. Compared to state-of-the-art image translation methods, our approach generates more realistic images and achieves significantly higher Inception Scores.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1711.06798.pdf\u0022\u003EMorphNet: Fast \u0026amp; Simple Resource-Constrained Structure Learning of Deep Networks\u003C\/a\u003E\u003C\/strong\u003E (Ariel Gordon, Elad Eban, Ofir Nachum, Bo Chen, Hao Wu, Tien-Ju Yang, Edward Choi)\u003C\/p\u003E\r\n\r\n\u003Cp\u003EABSTRACT: We present MorphNet, an approach to automate the design of neural network structures. MorphNet iteratively shrinks and expands a network, shrinking via a resource-weighted sparsifying regularizer on activations and expanding via a uniform multiplicative factor on all layers. In contrast to previous approaches, our method is scalable to large networks, adaptable to specific resource constraints (e.g. the number of floating-point operations per inference), and capable of increasing the network\u0026rsquo;s performance. 
When applied to standard network architectures on a wide variety of datasets, our approach discovers novel structures in each domain, obtaining higher performance while respecting the resource constraint.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETitle photo credit: Steve Greenwood\u003C\/strong\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"More than 10 faculty members and many more students will be present at the five-day event in Salt Lake City."}],"uid":"33939","created_gmt":"2018-06-18 15:16:51","changed_gmt":"2018-06-18 21:51:53","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-06-18T00:00:00-04:00","iso_date":"2018-06-18T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"607128":{"id":"607128","type":"image","title":"CVPR logo","body":null,"created":"1529333993","gmt_created":"2018-06-18 14:59:53","changed":"1529333993","gmt_changed":"2018-06-18 14:59:53","alt":"Georgia Tech @ CVPR 2018","file":{"fid":"231584","name":"Cityscapes SunsetSkyline_Steve_Greenwood_crop2_text.png","image_path":"\/sites\/default\/files\/images\/Cityscapes%20SunsetSkyline_Steve_Greenwood_crop2_text.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Cityscapes%20SunsetSkyline_Steve_Greenwood_crop2_text.png","mime":"image\/png","size":663108,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Cityscapes%20SunsetSkyline_Steve_Greenwood_crop2_text.png?itok=N2hgcXgb"}}},"media_ids":["607128"],"related_links":[{"url":"http:\/\/gvu.gatech.edu\/georgia-tech-cvpr-2018","title":"Georgia Tech at CVPR 2018"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of 
Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"607017":{"#nid":"607017","#data":{"type":"news","title":"Second-Year Stefamikha Suwisar Takes Top Prize in IC T-Shirt Design Contest","body":[{"value":"\u003Cp\u003EAfter a close vote via social media and a web-based survey, the School of Interactive Computing (IC) has identified its t-shirt design contest winner.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESecond-year Industrial Design major \u003Cstrong\u003EStefamikha Suwisar\u003C\/strong\u003E overcame three other finalists in a closely-contested vote for the victory. Her design, which features a lightbulb and brain combination at the center of a number of examples of computer science in use, demonstrated her idea of the intersection between human and machine.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;My design illustrates how ideas emerge from the brain to constantly fit the research opportunities within IC,\u0026rdquo; she said. \u0026ldquo;It depicts the eight threads for the future of computer science education in the United States: devices, info internetworks, intelligence, media, modelling and simulation, people, systems and architecture, and theory.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESuwisar said that she has always been interested in art and science. Her major, Industrial Design, combines both. 
It doesn\u0026rsquo;t only focus on the appearance of a product, but also how it functions, is manufactured, and ultimately the value and experience it provides for users.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the future, I aspire to help people and ultimately make life better through my designs,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe other finalists for the contest were computational media undergraduate student John Britti, computer science undergraduate student Brian Cochran, and GVU Center research technologist Tim Trent.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe School appreciates all the fantastic submissions and the wonderful voter turnout. Stay tuned for information on how to procure a t-shirt after the finished product has been produced.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Suwisar\u0027s design focused on the interaction between human and machine, and depicted eight threads of CS education."}],"uid":"33939","created_gmt":"2018-06-13 16:20:05","changed_gmt":"2018-06-13 16:20:05","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-06-13T00:00:00-04:00","iso_date":"2018-06-13T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"607016":{"id":"607016","type":"image","title":"IC T-Shirt Contest Winner","body":null,"created":"1528906490","gmt_created":"2018-06-13 16:14:50","changed":"1528906490","gmt_changed":"2018-06-13 16:14:50","alt":"","file":{"fid":"231536","name":"Screen Shot 2018-06-13 at 12.13.46 
PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202018-06-13%20at%2012.13.46%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202018-06-13%20at%2012.13.46%20PM.png","mime":"image\/png","size":160352,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202018-06-13%20at%2012.13.46%20PM.png?itok=rHXO1lcc"}}},"media_ids":["607016"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"178289","name":"stefamikha suwisar"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"606946":{"#nid":"606946","#data":{"type":"news","title":"Marissa Gonzales Using Own Educational Experience as Inspiration for Research","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EMarissa Gonzales\u003C\/strong\u003E\u0026rsquo; educational experience is not an uncommon one.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe School of Interactive Computing Ph.D. 
student grew up in California, where she attended an exclusive high school populated, more or less, by students of privilege who had the time and resources to engage with educators, devote themselves to their studies, and ultimately come out of high school with the skills necessary for academic success.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut Gonzales wasn\u0026rsquo;t like most of her classmates. It took a lot of effort on both her and her mom\u0026rsquo;s part to make her time in high school a success.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEach day, Gonzales woke up at 4:30 a.m. so that her mom could take her to school in time to make it to work early in the morning. She made it to the high school about an hour and a half early every day and then, after school, went straight to her job at a t-shirt printing shop. There, Gonzales worked evenings to earn money to help supplement her family\u0026rsquo;s income, a practice she continued when she attended the University of California, Irvine. She paid her own student loans and sent money home to help her family make ends meet.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool, she said, was like an obstacle.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As much as I loved it, it was limiting,\u0026rdquo; she said. \u0026ldquo;I didn\u0026rsquo;t have a computer. It became a restriction. There was no way for me to catch up because I was already so far behind on access.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGonzales\u0026rsquo; story can be told by millions of other Americans. For most, that\u0026rsquo;s where the story ends \u0026ndash; an educational deficit never closed because of a lack of access and resources. For her, though, that experience has served as inspiration for her research into the benefits and pitfalls of online educational environments. Gonzales believes that online learning has potential to reach students who, like herself, had limited access to one of the fundamental components of education: time. 
Online learning has opened up opportunities for students to learn asynchronously from one another, allowing them to participate in courses as their schedules allow for it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDegrees like the Georgia Tech \u003Ca href=\u0022http:\/\/omscs.gatech.edu\/\u0022\u003EOnline Master of Science in Computer Science\u003C\/a\u003E (OMSCS) program, which has made enormous breakthroughs and turned online learning on its head, aim to broaden access to these types of students. Many in Gonzales\u0026rsquo; position are unable to achieve similar results in traditional education. In theory, by making quality education available online, online learning could provide the same opportunity to those underserved communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut what are the properties of a flourishing online classroom? Why do they work? Who do they work for? And, ultimately, how can the academic community design environments that provide access to quality higher education for all?\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EMarissa, meet Jill\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EGonzales came to Georgia Tech in 2016 to pursue her Ph.D. after graduating from Irvine with a degree in informatics, concentrating on human-computer interaction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen she arrived, she approached Professor \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E, who had just achieved \u003Ca href=\u0022https:\/\/www.chronicle.com\/article\/When-the-Teaching-Assistant-Is\/238114\u0022\u003Einternational attention for Jill Watson\u003C\/a\u003E, an artificially intelligent teaching assistant that answered students\u0026rsquo; questions in the online section of his Knowledge-Based AI class.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I literally just took (Goel) aside and said, \u0026lsquo;Look, I\u0026rsquo;m interested in your work,\u0026rsquo;\u0026rdquo; Gonzales said. 
\u0026ldquo;\u0026lsquo;I really like this concept going with the virtual TA, and I\u0026rsquo;d like to help.\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInitially, she saw the opportunity as one to evaluate the system. How did it affect the students? Was it helping them become more engaged or improving overall grades? As she began to dig into the project, though, she realized that there was an opportunity and a need for more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When we learn, there\u0026rsquo;s a lot of factors that affect how we learn or affect our feeling about learning, about the classroom, the teacher, the material,\u0026rdquo; Gonzales explained. \u0026ldquo;How much do we value the experience and how much does that value impact our overall performance? Do we feel like we\u0026rsquo;re getting something out of it? Are we learning to use specific strategies for academic improvement and reflecting on our performance?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These are all things that go on in residential classrooms. What about online classrooms, where the sense of a learning community is perhaps obscured, and where students aren\u0026rsquo;t just working with the teaching staff, but with intelligent agents?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EUnderstanding Design Implications for Online Systems\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EAs she dug, Gonzales concluded that she needed to evaluate more than just how AIs could ease the load on teaching staff, making them more available to provide additional in-depth assistance to students online. 
Instead, she needed to take a more holistic view about the students\u0026rsquo; online experience.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESince she began, Gonzales has performed evaluations after each semester for both the residential and online sections of the \u003Ca href=\u0022https:\/\/www.omscs.gatech.edu\/cs-7637-knowledge-based-artificial-intelligence-cognitive-systems\u0022\u003EKnowledge-Based AI class\u003C\/a\u003E in which Jill Watson and other AIs are used.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe goal is to gain a more complete understanding of the online educational experience and how the design and implementation of these AI assistants, among other design decisions in online learning environments, can help or hurt the process of offering quality education online. Online learning, Gonzales said, isn\u0026rsquo;t going anywhere anytime soon. As OMSCS has shown, a quality education can be achieved beyond just a residential program. But it is important that researchers get in front of potential future challenges as online opportunities become more common.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We have to get in front of it and understand what works and what doesn\u0026rsquo;t work before the demand becomes too great to keep up,\u0026rdquo; she said. 
\u0026ldquo;Ultimately, online learning should broaden access to more populations, but it\u0026rsquo;s important that we design and implement programs that provide a complete educational experience.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn her two years at Georgia Tech, Gonzales has been \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/news\/591828\/ic-phd-student-marissa-gonzales-receives-goizueta-foundation-fellowship\u0022\u003Eawarded the Goizueta Foundation Fellowship\u003C\/a\u003E, which is designed to help attract and promote doctoral students of Hispanic\/Latino origin, and the \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/news\/601381\/georgia-tech-focus-intel-diversity-fellowship-helping-ics-marissa-gonzales-study-online\u0022\u003EIntel Diversity Fellowship\u003C\/a\u003E from the Georgia Tech Focus Program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe aims to pick her dissertation topic in the next few months and is on track to complete her degree in 2021.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A challenging educational experience as a teen has helped inform and drive Gonzales\u0027 current research in online educational environments."}],"uid":"33939","created_gmt":"2018-06-12 15:59:50","changed_gmt":"2018-06-12 15:59:50","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-06-12T00:00:00-04:00","iso_date":"2018-06-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"606945":{"id":"606945","type":"image","title":"Marissa Gonzales","body":null,"created":"1528818866","gmt_created":"2018-06-12 15:54:26","changed":"1528818866","gmt_changed":"2018-06-12 15:54:26","alt":"Marissa Gonzales","file":{"fid":"231509","name":"Marissa Gonzales 
rotator.jpg","image_path":"\/sites\/default\/files\/images\/Marissa%20Gonzales%20rotator.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Marissa%20Gonzales%20rotator.jpg","mime":"image\/jpeg","size":119951,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Marissa%20Gonzales%20rotator.jpg?itok=xoWXx39q"}}},"media_ids":["606945"],"related_links":[{"url":"http:\/\/omscs.gatech.edu","title":"Georgia Tech Online Master of Science in Computer Science"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"66244","name":"C21U"},{"id":"1305","name":"Georgia Tech Academic Advising Network (GTAAN)"},{"id":"431631","name":"OMS"}],"categories":[],"keywords":[{"id":"176882","name":"marissa gonzales"},{"id":"112431","name":"ashok goel"},{"id":"169183","name":"Jill Watson"},{"id":"121521","name":"OMSCS"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"606473":{"#nid":"606473","#data":{"type":"news","title":"Colleagues Celebrate 25 Years, Bid Farewell to Departing Professor Mark Guzdial","body":[{"value":"\u003Cp\u003EAfter 25 years of service to Georgia Tech, longtime College of Computing Professor \u003Cstrong\u003EMark Guzdial\u003C\/strong\u003E is heading back to his home state to teach and continue his research at the University of Michigan.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGuzdial, 
along with his wife, College of Computing research scientist \u003Cstrong\u003EBarbara Ericson\u003C\/strong\u003E, leaves a lasting academic impact on the College through their revolutionary research into computer science education, their development of innovative technology to improve learning, and their leadership in examining and increasing equity \u0026ndash; specifically with regard to women and minorities \u0026ndash; in computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuring his time at Georgia Tech, Guzdial has led such initiatives as Georgia Computes, a National Science Foundation Broadening Participation in Computing alliance focused on increasing the number and diversity of computing students in the state of Georgia. His work has reached beyond Georgia, leading a national conversation on equity and education at conferences such as the ACM Special Interest Group on Computer Science Education and the International Computing Education Research conference, among others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEricson, who recently completed her Ph.D. in Human Centered Computing, was the Director for Computing Outreach in the College. Her work, in conjunction with the national CSforAll initiative established by former President Barack Obama, improved the quality and quantity of secondary computing teachers in the state.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGuzdial and Ericson were awarded the Karl V. Karlstrom Outstanding Educator Award in 2010, and Guzdial received the IEEE Computer Science and Engineering Undergraduate Teaching Award in 2012 for contributions to computing education. Guzdial became a Fellow of the Association for Computing Machinery in 2014. Ericson also won the 2012 A. Richard Newton Educator Award for efforts to attract more women to computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Mark and Barb\u0026rsquo;s work has helped position the College of Computing as a thought leader in computer science education,\u0026rdquo; John P. 
Imlay Dean of Computing Zvi Galil said. \u0026ldquo;We are extremely appreciative for their service and will miss them greatly. I wish them great success at the University of Michigan.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u003Ca href=\u0022https:\/\/www.flickr.com\/photos\/ccgatech\/albums\/72157694064803572\/with\/42327563181\/\u0022\u003E\u003Cstrong\u003EPHOTOS: Click here for photos from Guzdial\u0026rsquo;s 25 Years of Service Reception\u003C\/strong\u003E\u003C\/a\u003E\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech colleagues celebrated Guzdial\u0026rsquo;s 25 years of service to the Institute at a reception earlier this month. Many praised his impact on Georgia Tech and shared stories of long-lasting friendships. Professor \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E said Guzdial was \u0026ldquo;the reason (she\u0026rsquo;s) here\u0026rdquo; at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EProfessor \u003Cstrong\u003EJohn Stasko\u003C\/strong\u003E shared how he, Guzdial, and Professor \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E have had a standing Saturday breakfast and how much the camaraderie has meant to him.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Probably most of you all know about Mark\u0026rsquo;s contributions in CS Ed and, in many ways, he gives us the presence in that area,\u0026rdquo; Stasko said. \u0026ldquo;There are things beyond that, though. Gregory, Mark and I have been having breakfast together on Saturdays for years and years. Kind of like a bunch of old men getting together \u0026ndash; well, I guess now it\u0026rsquo;s not really \u0026lsquo;like\u0026rsquo; old men.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;But that\u0026rsquo;s been great. 
We\u0026rsquo;ll miss him certainly for all of his academic contributions, but many of us miss him as a close friend, too.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbowd echoed Stasko\u0026rsquo;s words, calling Guzdial a \u0026ldquo;brother\u0026rdquo; and lamenting the fact that he, a Notre Dame graduate, now has to like something about the University of Michigan.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;m really angry at Mark and Barbara because I grew up in Detroit and went to Notre Dame, and all my family went to Notre Dame,\u0026rdquo; he joked. \u0026ldquo;I grew up despising everything to do with the University of Michigan. And I\u0026rsquo;m so mad that now I have to love some piece of that university. But I think I\u0026rsquo;ll get over it.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECollege of Computing Professor Emeritus \u003Cstrong\u003EJim Foley\u003C\/strong\u003E, a Michigan graduate, said he was happy that his colleagues could bring their \u0026ldquo;great spirits\u0026rdquo; to his alma mater.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"After 25 years of service to Georgia Tech, longtime College of Computing Professor Mark Guzdial is heading back to his home state to teach and continue his research at the University of Michigan."}],"uid":"33939","created_gmt":"2018-05-24 19:00:49","changed_gmt":"2018-05-24 19:00:49","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-05-24T00:00:00-04:00","iso_date":"2018-05-24T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"606472":{"id":"606472","type":"image","title":"Mark Guzdial Farewell","body":null,"created":"1527188412","gmt_created":"2018-05-24 19:00:12","changed":"1527188412","gmt_changed":"2018-05-24 19:00:12","alt":"Mark Guzdial and Barbara Ericson at Guzdial\u0027s 25 Years of Service 
Reception","file":{"fid":"231314","name":"_MG_1360.jpg","image_path":"\/sites\/default\/files\/images\/_MG_1360.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/_MG_1360.jpg","mime":"image\/jpeg","size":145185,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/_MG_1360.jpg?itok=7urkka-a"}}},"media_ids":["606472"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"10469","name":"Mark Guzdial"},{"id":"141461","name":"Barbara Ericson; Director of Computing Outreach"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"11002","name":"Gregory Abowd"},{"id":"8472","name":"amy bruckman"},{"id":"11632","name":"john stasko"},{"id":"78531","name":"Jim Foley"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"606097":{"#nid":"606097","#data":{"type":"news","title":"Wearable Ring, Wristband Allow Users to Control Smart Tech With Hand Gestures","body":[{"value":"\u003Cp\u003ENew technology created by a team of Georgia Tech researchers could make controlling text or other mobile applications as simple as \u0026ldquo;1-2-3.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing acoustic chirps emitted from a ring and received by a wristband, like a smartwatch, the system is able to recognize 22 different micro finger gestures that 
could be programmed to various commands \u0026mdash; including a T9 keyboard interface, a set of numbers, or application commands like playing or stopping music.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA \u003Ca href=\u0022https:\/\/youtu.be\/a-R45u5sqFc\u0022\u003Evideo demonstration of the technology\u003C\/a\u003E shows how, at a high rate of accuracy, the system can recognize hand poses using the 12 bones of the fingers and digits \u0026lsquo;1\u0026rsquo; through \u0026lsquo;10\u0026rsquo; in American Sign Language (ASL).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Some interaction is not socially appropriate,\u0026rdquo; said \u003Cstrong\u003ECheng Zhang\u003C\/strong\u003E, the Ph.D. student in the School of Interactive Computing who led the effort. \u0026ldquo;A wearable is always on you, so you should have the ability to interact through that wearable at any time in an appropriate and discreet fashion. When we\u0026rsquo;re talking, I can still make some quick reply that doesn\u0026rsquo;t interrupt our interaction.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESince one of the\u0026nbsp;goals\u0026nbsp;was to enter digits using only one hand, the team decided to use\u0026nbsp;ASL, which already has well defined hand postures for each digit. In this manner, the user might select options from a numbered list, call a phone number, or do simple calculations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe system is called \u003Cem\u003EFingerPing\u003C\/em\u003E. Unlike other technology that requires the use of a glove or a more obtrusive wearable, this technique is limited to just a thumb ring and a watch. The ring produces acoustic chirps that travel through the hand and are picked up by receivers on the watch. There are specific patterns in which sound waves travel through structures, including the hand, that can be altered by the manner in which the hand is posed. 
Utilizing those poses, the wearer can achieve up to 22 pre-programmed commands.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe gestures are small and non-invasive, as simple as tapping the tip of a finger or posing your hand in classic \u0026ldquo;1,\u0026rdquo; \u0026ldquo;2,\u0026rdquo; and \u0026ldquo;3\u0026rdquo; gestures.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The receiver recognizes these tiny differences,\u0026rdquo; Zhang said. \u0026ldquo;The injected sound from the thumb will travel at different paths inside the body with different hand postures. For instance, when your hand is open there is only one direct path from the thumb to the wrist. Any time you do a gesture where you close a loop, the sound will take a different path and that will form a unique signature.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZhang said that the research is a proof of concept for a technique that could be expanded and improved upon in the future.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research was presented last month at the 2018 ACM Conference on Human Factors in Computing Systems (CHI). The paper is titled FingerPing: Recognizing Fine-grained Hand Poses Using Active Acoustic On-body Sensing (Cheng Zhang, \u003Cstrong\u003EQiuyue Xue\u003C\/strong\u003E, \u003Cstrong\u003EAnandghan Waghmare\u003C\/strong\u003E, \u003Cstrong\u003ERuichen Meng\u003C\/strong\u003E, \u003Cstrong\u003ESumeet Jain\u003C\/strong\u003E, \u003Cstrong\u003EYizeng Han\u003C\/strong\u003E, \u003Cstrong\u003EXinyu Li\u003C\/strong\u003E, \u003Cstrong\u003EKenneth Cunefare\u003C\/strong\u003E, \u003Cstrong\u003EThomas Ploetz\u003C\/strong\u003E, \u003Cstrong\u003EThad Starner\u003C\/strong\u003E, \u003Cstrong\u003EOmer Inan\u003C\/strong\u003E, \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers on this team, including Zhang, have worked on similar unique gesture techniques in the past. 
Zhang graduated from Georgia Tech in May and will join the Information Science Department at Cornell University as a tenure-track assistant professor.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Using acoustic chirps emitted from a ring and received by a wristband, like a smartwatch, the system is able to recognize 22 different micro finger gestures that could be programmed to various commands."}],"uid":"33939","created_gmt":"2018-05-11 17:13:54","changed_gmt":"2018-05-24 13:47:29","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-05-11T00:00:00-04:00","iso_date":"2018-05-11T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"606096":{"id":"606096","type":"image","title":"FingerPing 1","body":null,"created":"1526058685","gmt_created":"2018-05-11 17:11:25","changed":"1526058685","gmt_changed":"2018-05-11 17:11:25","alt":"FingerPing Ring and Wristband","file":{"fid":"231158","name":"Screen Shot 2018-05-11 at 1.08.10 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202018-05-11%20at%201.08.10%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202018-05-11%20at%201.08.10%20PM.png","mime":"image\/png","size":170906,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202018-05-11%20at%201.08.10%20PM.png?itok=JFfG2X5R"}}},"media_ids":["606096"],"related_links":[{"url":"http:\/\/www.news.gatech.edu\/2017\/11\/29\/wearable-computing-ring-allows-users-write-words-and-numbers-thumb","title":"Using a Ring to Draw and Write"},{"url":"http:\/\/www.news.gatech.edu\/2017\/11\/29\/wearable-computing-ring-allows-users-write-words-and-numbers-thumb","title":"Controlling Smartwatch with Breaths and Swipes"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU 
Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"10353","name":"wearable computing"},{"id":"1944","name":"Thad Starner"},{"id":"177958","name":"cheng zhang"},{"id":"11002","name":"Gregory Abowd"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71891","name":"Health and Medicine"},{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"606296":{"#nid":"606296","#data":{"type":"news","title":"VOTE NOW: IC Selects Four Finalists in T-Shirt Design Contest","body":[{"value":"\u003Cp\u003E\u0026ldquo;Interactive computing\u0026rdquo; means different things to different people.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor some, it may mean a person\u0026rsquo;s physical interaction with computing through tangible technological devices. For others, it might mean a school \u0026ndash; the School of Interactive Computing, for example \u0026ndash; filled with a diverse set of research. 
Still others might think of the progression of computing from classic personal computers to those pushing boundaries through machine learning and artificial intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA few weeks ago, we asked students, faculty, staff, and friends of the School of Interactive Computing to come up with concepts for a t-shirt design that demonstrate what those words mean to them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter sifting through all our submissions \u0026ndash; and we received a number of great ones \u0026ndash; we have narrowed the contest down to four finalists. Check out the finalists below and be sure to vote on\u0026nbsp;Facebook\u0026nbsp;or \u003Ca href=\u0022https:\/\/www.surveymonkey.com\/r\/7VXLTKM\u0022\u003Ethis survey\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EJohn Britti, Computational Media undergraduate student\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EBritti provided a futuristic look at human interaction with a computing interface, utilizing the classic Buzz Gold color. Finalist selectors liked his design for its universal depiction of the intersection between humans and computers.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EBrian Cochran, Computer Science undergraduate student\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ECochran submitted a selection of computing characters that could be the basis of a series of t-shirt designs now and in the future. Finalist selectors liked his design because of its fun interpretation of computing and that it provides what every organization or event needs \u0026ndash; a mascot.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EStefamikha Suwisar, Industrial Design undergraduate student\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ESuwisar\u0026rsquo;s design depicts the diverse research that comes from the many human sources within the School of Interactive Computing. 
Finalist selectors liked her design because it captured in an image the breadth of computing research that comes out of the School.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003ETim Trent, GVU Center research technologist\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ETrent provided an initial concept for a series of t-shirts that highlight the many IC research areas in a nostalgic way. Finalist selectors liked his submission because, while only an initial concept, it provides a fun theme to depict the many \u0026ldquo;flavors\u0026rdquo; of interactive computing.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"After selecting four finalists to the School of Interactive Computing t-shirt design contest, the vote is now up to you."}],"uid":"33939","created_gmt":"2018-05-17 18:41:26","changed_gmt":"2018-05-17 18:41:26","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-05-17T00:00:00-04:00","iso_date":"2018-05-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"606289":{"id":"606289","type":"image","title":"IC T-Shirt Design Contest Finalists","body":null,"created":"1526579963","gmt_created":"2018-05-17 17:59:23","changed":"1526579963","gmt_changed":"2018-05-17 17:59:23","alt":"IC T-Shirt Design Contest Finalists","file":{"fid":"231236","name":"Screen Shot 2018-05-17 at 1.52.54 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202018-05-17%20at%201.52.54%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202018-05-17%20at%201.52.54%20PM.png","mime":"image\/png","size":409958,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202018-05-17%20at%201.52.54%20PM.png?itok=M45SCdEP"}}},"media_ids":["606289"],"groups":[{"id":"47223","name":"College of 
Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"178030","name":"school of interactive computing; t-shirt design contest; college of computing"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"605630":{"#nid":"605630","#data":{"type":"news","title":"IC Researchers Highlight Design Implications as Venezuelans Turn to Facebook for Barter, Exchange","body":[{"value":"\u003Cp\u003EConsider a scenario in which economic turmoil and hyperinflation have made it nearly impossible to purchase many of life\u0026rsquo;s basic necessities. There are food and medicine shortages, and scammers purchase what is available in bulk in an effort to manage the flow and pricing of supplies at the expense of other citizens.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHow, then, might honest citizens go about navigating the challenging circumstances to procure the items they need to survive?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s a familiar environment to Venezuelan citizens who, since an economic crisis gripped the country in 2014, have faced such barriers in their daily lives. 
Out of necessity, many have turned to online solidarity economies like Facebook groups that are dedicated to a fairer system of barter and exchange.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile these groups attempt to mitigate some of the more turbulent aspects of the Venezuelan economy, they come with their own set of challenges as well. In a paper being presented at the ACM CHI Conference on Human Factors in Computing Systems (CHI), Georgia Tech researchers have examined the development of these online social ecosystems. They offer ideas for how the design structures of Facebook\u0026rsquo;s groups can better support such solidarity economies.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003EWhy turn to online solidarity economies?\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ETo understand how social media sites like Facebook can more effectively implement their group design, it\u0026rsquo;s important to understand how and why these groups came about in the first place, said \u003Cstrong\u003EHayley Evans\u003C\/strong\u003E, School of Interactive Computing (IC) Ph.D. student and first author on the paper (\u003Ca href=\u0022http:\/\/delivery.acm.org\/10.1145\/3180000\/3173802\/paper228.pdf?ip=128.61.126.162\u0026amp;id=3173802\u0026amp;acc=OPEN\u0026amp;key=A79D83B43E50B5B8%2E5E2401E94B5C98E0%2E4D4702B0C3E38B35%2E6D218144511F3437\u0026amp;__acm__=1524839288_61a897260aea37fdb7b733d1a782b1a9\u0022\u003E\u003Cem\u003EFacebook in Venezuela: Understanding Solidarity Economies in Low-Trust Environments\u003C\/em\u003E\u003C\/a\u003E).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2014, the price of crude oil, Venezuela\u0026rsquo;s main export, collapsed, setting the stage for an economic and political crisis that has continued to deteriorate in the succeeding years. 
The country\u0026rsquo;s GDP has declined at an average of 6.83 percent over the past five years; there have been food shortages, failing hospitals, high rates of inflation, calls for humanitarian aid, and political opposition both domestically and internationally.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESuch instability paved the way for the rise of \u0026ldquo;bachequeros,\u0026rdquo; individuals who place their own self-interests over those of the group by charging or demanding barters at high prices. Often, these individuals will buy goods in bulk with the intention of controlling the supply and price. With low levels of trust in the traditional exchange of goods, as well as high scarcity, many Venezuelans migrated to online economies, most commonly taking the form of Facebook groups.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEvans and her co-authors examined three such groups, all on Facebook \u0026ndash; a large one with over 45,000 members, a mid-sized group, and one that is just slightly over 1,000. Other groups, as small as 10-20 members, also existed, likely made up of closer family and friends and individuals searching for a specific item, Evans said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlthough group administrators seek out fairness in moderation and price-setting, in many ways they are still operating as a self-regulated free-for-all.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The government stopped setting the prices,\u0026rdquo; Evans said. \u0026ldquo;So, they kind of triangulate \u0026ndash; they remember what the government set prices at in the past, what they\u0026rsquo;ve seen the price at online, and what they feel is fair.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It can be as ambiguous as it sounds. \u0026lsquo;Fair\u0026rsquo; is highly dependent on the person and what they believe.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor example, one person might believe something is fair if it is double in price, but an absolute necessity.
A mother whose son has asthma, Evans said, would be thankful to find asthma medication at only double the price. Someone else in a different situation might not.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003ENavigating design flaws\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EBut with such ambiguity comes a stiff challenge in moderating such economies. Often, when an individual posts an item at a price the group deems unfair, they can lose credibility and, with it, the ability to barter in these groups. Attempts at regulations \u0026ndash; like a three-strikes-and-out policy \u0026ndash; have been made by at least one group administrator.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut those are difficult to enforce because Facebook\u0026rsquo;s design doesn\u0026rsquo;t offer any tracking method.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We start to see that there\u0026rsquo;s flaws in the infrastructure and there\u0026rsquo;s flaws in Facebook,\u0026rdquo; Evans said. \u0026ldquo;So, this group, which set out to create a more stable community, becomes like every other group that is too big, difficult to manage, and doesn\u0026rsquo;t have the right tools.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEvans and her team highlight some key design implications, based on \u003Ca href=\u0022http:\/\/mysite.du.edu\/~lavita\/edpx_3770_13s\/_docs\/kollock_design_%20princ_for_online_comm%20copy.pdf\u0022\u003EPeter Kollock\u0026rsquo;s design principles for online communities\u003C\/a\u003E. Interestingly, Evans said, while Venezuelan bartering groups violate all of them to some degree, they still work due to necessity. 
Looking at Kollock, though, they were able to come up with four design implications:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003Ebuyer\/seller reviews\u003C\/li\u003E\r\n\t\u003Cli\u003Ean equitable marketplace indicator\u003C\/li\u003E\r\n\t\u003Cli\u003Eprominent rule placement\u003C\/li\u003E\r\n\t\u003Cli\u003Etools for tracking offenses and implementing sanctions\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These design affordances have worked well on other platforms like eBay or Amazon,\u0026rdquo; Evans said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEvans added that one of the most interesting takeaways was the appropriation of the platform. While Facebook was designed for college students in 2004, it has become a vital tool to Venezuelans in an unpredictable economic crisis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This ingenuity merits attention,\u0026rdquo; Evans said. \u0026ldquo;Furthermore, we hope that there will be some incentive for Facebook to review this use, be it for business or humanitarian reasons.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper was co-authored by Evans, IC Ph.D. student \u003Cstrong\u003EMarisol Wong-Villacres\u003C\/strong\u003E, IC Ph.D. student \u003Cstrong\u003EDaniel Castro\u003C\/strong\u003E, former IC Assistant Professor \u003Cstrong\u003EEric Gilbert\u003C\/strong\u003E, IC research scientist \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/7087\/rosa-arriagas\u0022\u003E\u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E\u003C\/a\u003E, IC Ph.D. student \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/content\/michaelanne-dye\u0022\u003E\u003Cstrong\u003EMichaelanne Dye\u003C\/strong\u003E\u003C\/a\u003E, and IC Professor \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/7127\/amy-bruckmans\u0022\u003E\u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E\u003C\/a\u003E.
It was presented this week at CHI in Montreal, Canada.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"In a paper being presented at CHI, Georgia Tech researchers have examined the development of these online social ecosystems."}],"uid":"33939","created_gmt":"2018-04-27 14:40:39","changed_gmt":"2018-04-27 14:40:39","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-04-27T00:00:00-04:00","iso_date":"2018-04-27T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"605629":{"id":"605629","type":"image","title":"Facebook in Venezuela","body":null,"created":"1524838602","gmt_created":"2018-04-27 14:16:42","changed":"1524838602","gmt_changed":"2018-04-27 14:16:42","alt":"Facebook logo with hands surrounding it","file":{"fid":"230932","name":"Facebook3.jpg","image_path":"\/sites\/default\/files\/images\/Facebook3.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Facebook3.jpg","mime":"image\/jpeg","size":94616,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Facebook3.jpg?itok=TZ37gzPp"}}},"media_ids":["605629"],"related_links":[{"url":"http:\/\/www.chi.gatech.edu\/2018\/","title":"Georgia Tech at CHI 2018"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"10835","name":"Facebook"},{"id":"177811","name":"Facebook groups"},{"id":"177812","name":"venezuela"},{"id":"177813","name":"Amy Bruckman; Michaelanne Dye; School of Interactive Computing; Hayley Evans"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"605526":{"#nid":"605526","#data":{"type":"news","title":"Georgia Tech Research Into Cuban \u0027Offline Internet\u0027 Could Inform Future Definitions of Connectivity","body":[{"value":"\u003Cp\u003EA pervasive assumption says that internet access is determined by wires or some unseen signal that delivers information from a source, through the cloud, and onto your hard drive in a matter of seconds. Often, though, environment and resources determine how digital media and information technology is shared and consumed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn a paper being presented at the \u003Ca href=\u0022https:\/\/chi2018.acm.org\/\u0022\u003EACM CHI Conference on Human Factors in Computing Systems\u003C\/a\u003E, researchers in the Georgia Tech \u003Ca href=\u0022http:\/\/cc.gatech.edu\u0022\u003ECollege of Computing\u003C\/a\u003E outline a unique, and positively thriving, media ecology that operates mostly independent of traditional internet norms. 
Understanding the success of such a system could better inform how the internet is deployed in similar environments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETitled \u003Ca href=\u0022https:\/\/static1.squarespace.com\/static\/59f549a3b7411c736b42936a\/t\/5a61b255e2c483c497384fd1\/1516352085908\/ElPaquete.pdf\u0022\u003E\u003Cem\u003EEl Paquete Semanal: The Week\u0026rsquo;s Internet in Havana\u003C\/em\u003E\u003C\/a\u003E, the paper examines a human-centered \u0026ldquo;offline\u0026rdquo; internet that, despite the lack of widespread affordable internet access in the traditional sense, nonetheless delivers information and entertainment through a locally relevant platform.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccess to the web in Cuba has, historically, been prohibitive. Up until recently, as much as 95 percent of the population was without access. And, while that is changing with the introduction of public Wi-Fi hotspots, the public internet is slow and too expensive for a large portion of the population. To use the internet, the Cuban people must prioritize time and money.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYet, School of Interactive Computing (IC) Ph.D. student \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/content\/michaelanne-dye\u0022\u003E\u003Cstrong\u003EMichaelanne Dye\u003C\/strong\u003E\u003C\/a\u003E realized in past research in the country that many still had access to things like movies or television entertainment, new versions of software, and more, often before she would have gained access to it in the United States.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe reason?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEl Paquete Semanal (\u0026quot;the Weekly Package\u0026quot;), a weekly \u0026ldquo;offline\u0026rdquo; internet that delivers a terabyte\u0026rsquo;s worth of multimedia, digital content, and news in an offline form.
El Paquete is compiled by people with internet access, sold to \u0026ldquo;paqueteros\u0026rdquo; (packagers), and distributed throughout communities in the form of data on a USB drive.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe content is often delivered by hand or sold in physical stores that have popped up in apartment fronts. Individuals can enter the shop, select the content they want, and pay a price per unit size of data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s done in a way that is incredibly affordable and accessible to most people,\u0026rdquo; Dye said. \u0026ldquo;People from all socioeconomic statuses use this network.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENot unlike YouTube, it also affords local artists the opportunity to reach new audiences. Local content, like work from recording or visual artists, is included in El Paquete and shared throughout the city, country, and beyond.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Their work will go into El Paquete, and it\u0026rsquo;s making its way out of Cuba,\u0026rdquo; Dye said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELocal journalists challenge the norm of government-run media channels, distributing their own literature through El Paquete. Whereas non-government journalists typically sent news outside of the country for publishing in the past, now it can be delivered weekly in this offline format.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs it has become more pervasive and successful, El Paquete challenges what is typically viewed as \u0026ldquo;internet access.\u0026rdquo; Dye argues that, while some attempts to establish more traditional access have failed, this system has had unparalleled success.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It goes back to this larger question of how the internet is designed,\u0026rdquo; she said. \u0026ldquo;And does it have to be this way?
As communities are brought online, how do you make information access or communication technologies that are flexible and adaptable to the local condition?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEl Paquete offers one benefit that traditional internet lacks: a distinctly human infrastructure.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Every system has a human element, but this infrastructure is literally held together by humans, not wires,\u0026rdquo; Dye said. \u0026ldquo;So, the human element of it makes visible that this is a negotiated, relevant, and participatory internet that is very adaptable to a variety of cases.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBecause it avoids automation, though, it requires painstaking work to be maintained.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There\u0026rsquo;s affordances that the system provides that the internet doesn\u0026rsquo;t provide us,\u0026rdquo; Dye said. \u0026ldquo;At the same time, there are limitations.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUltimately, though, what are the limitations of the traditional internet and is it necessarily the right decision to replicate it in its entirety from the top down in a one-size-fits-all version?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Who are we to say that this is exactly what everyone needs access to?\u0026rdquo; Dye said. \u0026ldquo;Who determines what is valuable for people? This paper argues that there are varying successful iterations of the internet and that local norms and values should play a role in determining how access is delivered in different locales.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch for this project was accomplished through in-depth interviews with the local population of Havana over the course of two years, as well as personal participation by authors in the system. 
The paper is being presented at the ACM CHI Conference on Human Factors in Computing Systems, April 21-26 in Montr\u0026eacute;al, Canada. Dye\u0026rsquo;s co-authors are \u003Cstrong\u003EDavid Nemer\u003C\/strong\u003E (University of Kentucky), \u003Cstrong\u003EJosiah Mangiameli\u003C\/strong\u003E (Independent), IC Professor \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/7127\/amy-bruckmans\u0022\u003E\u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E\u003C\/a\u003E, and School of International Affairs and IC Assistant Professor \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/7054\/neha-kumars\u0022\u003E\u003Cstrong\u003ENeha Kumar\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"El Paquete Semanal -- \u0022the Weekly Package\u0022 -- is an offline method of delivering digital content to communities in Havana, Cuba."}],"uid":"33939","created_gmt":"2018-04-25 16:57:08","changed_gmt":"2018-04-25 16:57:08","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-04-25T00:00:00-04:00","iso_date":"2018-04-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"605524":{"id":"605524","type":"image","title":"Havana, Cuba","body":null,"created":"1524675160","gmt_created":"2018-04-25 16:52:40","changed":"1524675160","gmt_changed":"2018-04-25 16:52:40","alt":"A street in Havana, Cuba","file":{"fid":"230879","name":"Cuba1.jpg","image_path":"\/sites\/default\/files\/images\/Cuba1.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Cuba1.jpg","mime":"image\/jpeg","size":306292,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Cuba1.jpg?itok=k4-i4vyU"}}},"media_ids":["605524"],"related_links":[{"url":"http:\/\/www.chi.gatech.edu\/2018\/","title":"Georgia Tech at CHI 
2018"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"177780","name":"El Paquete Semanal"},{"id":"1027","name":"chi"},{"id":"177781","name":"ACM CHI Conference on Human Factors in Computing Systems"},{"id":"8494","name":"HCI"},{"id":"177782","name":"Amy Bruckman; Michaelanne Dye; School of Interactive Computing; Cuba; Neha Kumar"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Edavid.mitchell@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"604801":{"#nid":"604801","#data":{"type":"news","title":"School of IC T-Shirt Design Contest: Design Our Shirt For a Chance at $200!","body":[{"value":"\u003Cp\u003EAt the School of Interactive Computing, we feel like we have it all.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch? If you search far and wide, you\u0026rsquo;d be hard-pressed to find a school quite as diverse as ours. Faculty? Ours are international thought leaders who perpetually move the needle forward in their respective fields. Students? Our bright minds regularly accept appointments in industry and at high-profile academic institutions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut, you know what we\u0026rsquo;re missing? A T-shirt. \u003Cstrong\u003EAnd that\u0026rsquo;s where you come in.\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWe are producing a new shirt for the IC community, and we\u0026rsquo;re asking YOU to design it. 
Dust off those graphics skills and put together a design concept for what \u0026ldquo;interactive computing\u0026rdquo; means to you.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt could be anything! And don\u0026rsquo;t worry about whether your design is perfect. You can send us an initial concept for the contest, and we\u0026rsquo;ll worry about the little details later.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDid we mention the best part? If your design wins, \u003Cstrong\u003Ewe\u0026rsquo;ll give you a $200 Amazon gift card\u003C\/strong\u003E for your troubles. Pretty sweet, right?\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESo, who can participate?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat\u0026rsquo;s the cool thing. Because we\u0026rsquo;re such an interactive and collaborative family, we consider so many people both within and outside of the school to be a part of our community. Faculty, staff, students, alumni, associated centers and research institutes, friends of the school \u0026ndash; you name it. If you\u0026rsquo;ve ever participated within our community, we encourage you to submit!\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd submitting is simple. Just save your design\/concept in JPEG format and email it to our school communications officer, David Mitchell, at \u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E. In the email, be sure to include your name and association with the school (current or former faculty or staff, student or alumni, friend of the school, etc.) so that we can give you proper credit.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen submissions are closed, we will select a few finalists and put them up for a vote on our social media channels.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOk, fine. There ARE some rules. Here are a few:\u003C\/p\u003E\r\n\r\n\u003Col\u003E\r\n\t\u003Cli\u003EYou can\u0026rsquo;t use the name of our school, college, or institute.
That means \u0026ldquo;School of Interactive Computing,\u0026rdquo; \u0026ldquo;College of Computing,\u0026rdquo; and \u0026ldquo;Georgia Tech,\u0026rdquo; and all varying forms thereof, are out. But using those words is a little on-the-nose anyway, isn\u0026rsquo;t it?\u003C\/li\u003E\r\n\t\u003Cli\u003EAs much as we all love \u003Ca href=\u0022https:\/\/en.wikipedia.org\/wiki\/Buzz_(mascot)\u0022\u003EBuzz\u003C\/a\u003E (and would LOVE to see him as a robot), he\u0026rsquo;s out too, due to copyright guidelines. Sorry, everyone.\u003C\/li\u003E\r\n\t\u003Cli\u003EBy participating in this contest, you are agreeing to the disclaimer below.\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EBE CREATIVE!\u003C\/strong\u003E Our school and our community mean something different to everybody, so don\u0026rsquo;t be afraid to go outside the box.\u003C\/li\u003E\r\n\u003C\/ol\u003E\r\n\r\n\u003Cp\u003ESo, to summarize: \u003Cstrong\u003EDesign our T-shirt, leave your mark on the school, and earn some cash\u003C\/strong\u003E, to boot. Not too bad.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECheck out the timeline below and start designing! We can\u0026rsquo;t wait to see what you come up with.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EImportant dates\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EContest is OPEN \u0026ndash; April 6\u003C\/li\u003E\r\n\t\u003Cli\u003ESubmission deadline \u0026ndash; May 4\u003C\/li\u003E\r\n\t\u003Cli\u003EFinalists announced \u0026ndash; May 11\u003C\/li\u003E\r\n\t\u003Cli\u003EVoting on finalists \u0026ndash; May 11-18\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDisclaimer\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EThis Competition is in no way sponsored, endorsed or administered by, or associated with, Facebook, Twitter, Instagram, or Amazon. 
You are providing your information to the Georgia Tech College of Computing and not to Facebook, Twitter, Instagram, or Amazon. The information you provide will be used only by and for Georgia Tech.\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EBy submitting your design, you are agreeing to allow Georgia Tech\u0026#39;s College of Computing\u0026nbsp;to utilize your images for marketing and communications purposes.\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EThis campaign is open to all people of all ages affiliated, whether currently or formerly, with Georgia Tech\u0026rsquo;s School of Interactive Computing. No purchase or payment of any kind is necessary to enter or win. Grand prize winner is subject to\u0026nbsp;selection by the Georgia Tech College of Computing. We reserve the right to disqualify submissions, without notice, and for any reason. By submitting, you agree to release and hold harmless Georgia Tech and the Georgia Tech College of Computing and their employees and affiliates, Facebook, Twitter, Instagram or any and all Internet access and service providers from and against all claims and damages arising in connection with your entry in the campaign and contest, including your receipt or use of giveaways to be distributed in connection with the campaign and contest\u003C\/em\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"We are producing a new shirt for the IC community, and we\u2019re asking YOU to design it. 
Dust off those graphics skills and put together a concept of a design for what \u201cinteractive computing\u201d means to you."}],"uid":"33939","created_gmt":"2018-04-06 13:19:17","changed_gmt":"2018-04-06 13:19:17","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-04-06T00:00:00-04:00","iso_date":"2018-04-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"604799":{"id":"604799","type":"image","title":"IC T-Shirt Contest Flag","body":null,"created":"1523019780","gmt_created":"2018-04-06 13:03:00","changed":"1523019780","gmt_changed":"2018-04-06 13:03:00","alt":"Design our T-Shirts. Leave your mark. Earn some cash.","file":{"fid":"230581","name":"Screen Shot 2018-04-06 at 9.00.53 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202018-04-06%20at%209.00.53%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202018-04-06%20at%209.00.53%20AM.png","mime":"image\/png","size":394116,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202018-04-06%20at%209.00.53%20AM.png?itok=MnKD2dNL"}}},"media_ids":["604799"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"},{"id":"47511","name":"t-shirt design contest"},{"id":"4887","name":"GVU Center"},{"id":"12888","name":"IPaT"},{"id":"81491","name":"Institute for Robotics and Intelligent Machines (IRIM)"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid 
Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"604645":{"#nid":"604645","#data":{"type":"news","title":"College of Computing Sends Ph.D., Online Master\u2019s Students to Women in Cybersecurity Conference Chicago","body":[{"value":"\u003Cp\u003EThis year\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.wicys.net\/\u0022\u003EWomen in Cybersecurity (WiCyS) Conference\u003C\/a\u003E was bigger and better attended than ever before, much like the cybersecurity industry at large. In celebration of the Georgia Tech College of Computing\u0026rsquo;s role at the forefront of this booming field, the College took the lead in sending, for the first time, students on scholarship to attend the conference, as well as hosting a celebration for online Master of Science in Computer Science (OMSCS) students from the Chicago-area.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHeld in Chicago from March 23-24, WiCyS drew some 4,000 attendees and more than 50 sponsors--both corporate and academic--including the College of Computing. The four student scholar representatives sent by the College were Tina Fatouros (OMSCS), Jenna McGrath (Ph.D. Public Policy), Stacey Truex (Ph.D. Computer Science), and Chenzi Wang (OMSCS). 
Attendees had the opportunity to enjoy meals and network with other women in the field of cybersecurity, attend technical and career-focused talks like \u0026ldquo;Practical Network Forensics,\u0026rdquo; \u0026ldquo;Teaching Cyber Ethics and Societal Impacts in Introduction Computing Courses,\u0026rdquo; and \u0026ldquo;Watson for Cybersecurity and IBM\u0026rsquo;s Cyber Range,\u0026rdquo; and also take advantage of the conference\u0026rsquo;s Career Fair, at which the College hosted a recruitment booth.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Overall, it was a great conference and definitely worth the time to attend,\u0026rdquo; said Fatouros, who works in security and compliance for AT\u0026amp;T. \u0026ldquo;There was a good mix of students, professionals, and teaching faculty attending, which provided many opportunities to interact and network. I was able to pick up job-relevant information from all of the sessions, including workshops, distinguished speakers, and lightning talks. I work in cybersecurity and left WiCyS more focused and encouraged about the many challenging, rewarding, and attainable opportunities in my field.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile in Chicago, the College of Computing and Wenke Lee, co-executive director of the \u003Ca href=\u0022https:\/\/cyber.gatech.edu\/\u0022\u003EInstitute for Information Security \u0026amp; Privacy (IISP)\u003C\/a\u003E, hosted a celebration for local OMSCS students and College alumni. The event was held at the Chicago Athletic Association Hotel, and the 35 attendees enjoyed an evening of celebration with their fellow students -- many of whom met in person for the first time. 
Lee shared news of all of the exciting cybersecurity research success at Georgia Tech, as well as tips and tricks on how to succeed in the OMSCS program (in which he teaches \u003Ca href=\u0022https:\/\/www.omscs.gatech.edu\/cs-6035-introduction-to-information-security\u0022\u003EIntroduction to Information Security\u003C\/a\u003E).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was fantastic to see so many students from OMSCS come out for this social,\u0026rdquo; said Lee. \u0026ldquo;I always enjoy meeting students to hear what they are motivated to do in their careers and their feedback about the program.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EIf you are interested in learning more about cybersecurity at the Institute for Information Security \u0026amp; Privacy (IISP) at Georgia Tech, visit \u003Ca href=\u0022https:\/\/cyber.gatech.edu\/\u0022\u003Ehttps:\/\/cyber.gatech.edu\/\u003C\/a\u003E\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EIf you are interested in learning more about the online Master of Science in Computer Science (OMSCS) program, visit \u003Ca href=\u0022http:\/\/www.omscs.gatech.edu\/\u0022\u003Ehttp:\/\/www.omscs.gatech.edu\/\u003C\/a\u003E.\u003C\/em\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"In celebration of the Georgia Tech College of Computing\u2019s role in cybersecurity, the College sent students on scholarship to attend WiCyS Conference, as well as hosting a celebration for online M.S. CS students."}],"uid":"27998","created_gmt":"2018-04-03 15:21:16","changed_gmt":"2018-04-03 15:29:31","author":"Brittany Aiello","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-04-03T00:00:00-04:00","iso_date":"2018-04-03T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"604642":{"id":"604642","type":"image","title":"Stacey Truex, Ph.D. 
Computer Science, and Jenna McGrath, Ph.D. Public Policy","body":null,"created":"1522768603","gmt_created":"2018-04-03 15:16:43","changed":"1522769296","gmt_changed":"2018-04-03 15:28:16","alt":"Pictured left to right: Stacey Truex, Ph.D. Computer Science, and Jenna McGrath, Ph.D. Public Policy","file":{"fid":"230518","name":"Screen Shot 2018-03-29 at 4.12.39 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202018-03-29%20at%204.12.39%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202018-03-29%20at%204.12.39%20PM.png","mime":"image\/png","size":2254978,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202018-03-29%20at%204.12.39%20PM.png?itok=Pu3qREeC"}},"604643":{"id":"604643","type":"image","title":"Brittany Aiello, OMSCS Communications, and Tina Fatouros, OMSCS student and WiCyS scholarship attendee","body":null,"created":"1522768683","gmt_created":"2018-04-03 15:18:03","changed":"1522769236","gmt_changed":"2018-04-03 15:27:16","alt":"Pictured left to right: Brittany Aiello, OMSCS Communications, and Tina Fatouros, OMSCS student and WiCyS scholarship attendee","file":{"fid":"230519","name":"TinaandBrittany-WiCyS2018.jpg","image_path":"\/sites\/default\/files\/images\/TinaandBrittany-WiCyS2018.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/TinaandBrittany-WiCyS2018.jpg","mime":"image\/jpeg","size":633837,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/TinaandBrittany-WiCyS2018.jpg?itok=E6R3ogQE"}},"604644":{"id":"604644","type":"image","title":"Wenke Lee and OMSCS Chicago-area students","body":null,"created":"1522768781","gmt_created":"2018-04-03 15:19:41","changed":"1522769198","gmt_changed":"2018-04-03 15:26:38","alt":"Wenke Lee and OMSCS Chicago-area students","file":{"fid":"230520","name":"Screen Shot 2018-03-29 at 4.11.58 
PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202018-03-29%20at%204.11.58%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202018-03-29%20at%204.11.58%20PM.png","mime":"image\/png","size":2309202,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202018-03-29%20at%204.11.58%20PM.png?itok=d-1qAfNa"}}},"media_ids":["604642","604643","604644"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1305","name":"Georgia Tech Academic Advising Network (GTAAN)"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"131901","name":"Provost"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"177620","name":"WiCyS"},{"id":"10893","name":"wenke lee"},{"id":"1404","name":"Cybersecurity"},{"id":"177624","name":"women in cybersecurity"},{"id":"1270","name":"conference"},{"id":"4833","name":"chicago"},{"id":"121521","name":"OMSCS"},{"id":"69631","name":"Online Master of Science in Computer Science"},{"id":"654","name":"College of Computing"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBrittany Aiello, OMSCS Communications\u003C\/p\u003E\r\n\r\n\u003Cp\u003Ebaiello@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["baiello@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"604288":{"#nid":"604288","#data":{"type":"news","title":"Bust a Move: IC Ph.D. Student Caitlyn Seim Tests Passive Haptic Learning for Dance at Get a Move On Hackathon","body":[{"value":"\u003Cp\u003EEarlier this month, School of Interactive Computing Ph.D. 
student \u003Cstrong\u003ECaitlyn Seim\u003C\/strong\u003E participated in the College of Computing\u0026rsquo;s \u003Ca href=\u0022https:\/\/cchackathon.github.io\/geta-moveon\/\u0022\u003EGet a Move On hackathon\u003C\/a\u003E, which focused on music, dance, fitness, gaming, and sports.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs a graduate researcher who has studied wearable computing devices that provide haptic, or tactile, stimulation in Professor \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/thad-starner\u0022\u003EThad Starner\u0026rsquo;s\u003C\/a\u003E\u003C\/strong\u003E lab, she took the opportunity to apply what she knew to the lower body. Most of what she and Starner have worked on in the past was focused on upper-body learning \u0026ndash; teaching piano, Braille, making you faster at typing \u0026ndash; but in this case, she wanted to focus on dance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;While brainstorming for the hackathon, I reached out to Carnegie Hall tap dancer \u003Cstrong\u003EChristopher Erk\u003C\/strong\u003E,\u0026rdquo; she said. \u0026ldquo;He was immediately interested and provided us with three elementary tap routines that we could integrate into the wearable.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing a technique called passive haptic learning, individuals can learn new skills through tactile cues provided by a wearable device such as a watch or glove. While continuing normal daily tasks, the instructional stimuli repeat in the background and help them learn. 
In the past, Starner\u0026rsquo;s lab has been able to produce results in skills like piano playing or learning Morse code.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim\u0026rsquo;s thought leading up to the hackathon was that she could have similar success in affecting muscle memory for dance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Every song is a new pattern of key presses,\u0026rdquo; said Seim, referring to the computerized haptic gloves that helped teach the finger patterns of different piano songs. \u0026ldquo;Likewise, every dance is a new pattern of steps. This is what inspired me to prototype a wearable to teach dance steps.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOver the course of two days at the hackathon, Seim worked with \u003Cstrong\u003EDavid Purcell\u003C\/strong\u003E, a student in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/www.omscs.gatech.edu\/\u0022\u003Eonline master of science in computer science\u003C\/a\u003E (OMSCS) program, to create the prototype. The prototype takes the form of cordless haptic socks, synchronized and programmed to teach a routine sent by Erk through tactile taps from embedded motors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResults thus far are limited to the pilot program tested by Seim and another student. But, Seim said, the paradigm is exactly the same as with the hands. Seim\u0026rsquo;s team finished in the top five overall and second place in hardware at the hackathon.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim is looking at potential dance-related collaborations to continue the project. 
Interested students should \u003Ca href=\u0022mailto:seimresearch@gmail.com\u0022\u003Econtact Seim via email\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Most of what Caitlyn Seim and Starner have worked on in the past was focused on upper-body learning \u2013 teaching piano, Braille, making you faster at typing \u2013 but in this case, she wanted to focus on dance."}],"uid":"33939","created_gmt":"2018-03-26 20:55:21","changed_gmt":"2018-03-26 20:55:21","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-03-26T00:00:00-04:00","iso_date":"2018-03-26T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"604287":{"id":"604287","type":"image","title":"PHL for Dance","body":null,"created":"1522097460","gmt_created":"2018-03-26 20:51:00","changed":"1522097460","gmt_changed":"2018-03-26 20:51:00","alt":"Passive Haptic Learning for Dance","file":{"fid":"230335","name":"diagram.png","image_path":"\/sites\/default\/files\/images\/diagram.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/diagram.png","mime":"image\/png","size":243119,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/diagram.png?itok=0U1qVEbQ"}}},"media_ids":["604287"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"170072","name":"Caitlyn Seim"},{"id":"1944","name":"Thad Starner"},{"id":"10353","name":"wearable computing"},{"id":"61371","name":"Hackathon"},{"id":"104221","name":"passive haptic learning"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"603980":{"#nid":"603980","#data":{"type":"news","title":"College of Computing Rises to No. 8 in U.S. News Rankings","body":[{"value":"\u003Ch3\u003E\u003Cstrong\u003E\u003Cem\u003E\u003Cstrong\u003EMove\u003C\/strong\u003E\u0026nbsp;is GT Computing\u0026rsquo;s second jump in last three rankings\u003C\/em\u003E\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EThe Georgia Tech College of Computing continued its climb up the U.S. News and World Report rankings of graduate computer science programs, rising one spot to No. 8 in the 2018 rankings that were released March 20.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new position represents Georgia Tech\u0026rsquo;s second jump in the last three CS rankings, all released since 2012, and is the highest U.S. News has ever ranked the \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003ECollege of Computing\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EGT Computing\u0026#39;s global impact\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We are thrilled but not surprised at this latest recognition of the work we\u0026rsquo;re doing in GT Computing,\u0026rdquo; said \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/zvi-galil\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EZvi Galil\u003C\/strong\u003E\u003C\/a\u003E, John P. Imlay Jr. 
Dean of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I attribute this to our visible leadership in computing education and research, to the fact that we are now the largest computing program in the United States counting both faculty and students\u0026ndash;and likely number 2 in terms of faculty size\u0026ndash;and to the \u003Ca href=\u0022http:\/\/gtcomputing2017.cc.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003Eglobal impact we are having\u003C\/a\u003E both through our research and the work of our thousands of alumni.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EU.S. News ranks computer science programs through a reputational survey. With an average score of 4.4, Georgia Tech is now tied with Princeton and sits one spot ahead of No. 10 University of Texas-Austin. In the 2018 rankings, Georgia Tech rose in both points and ranking\u0026mdash;and was the only Top 10 program to rise in either.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe College also achieved rankings in the following specialties:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/content\/artificial-intelligence-machine-learning\u0022 target=\u0022_blank\u0022\u003EArtificial Intelligence\u003C\/a\u003E (No. 7)\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.scs.gatech.edu\/content\/programming-languages-software-engineering\u0022 target=\u0022_blank\u0022\u003EProgramming Language\u003C\/a\u003E (No. 16)\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.scs.gatech.edu\/content\/systems\u0022 target=\u0022_blank\u0022\u003ESystems\u003C\/a\u003E (No. 10)\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.scs.gatech.edu\/content\/theory\u0022\u003ETheory\u003C\/a\u003E (No. 9)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ECoincidentally, the No. 
8 overall ranking matches the spot Georgia Tech earned in last year\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.timeshighereducation.com\/world-university-rankings\/2018\/subject-ranking\/computer-science#!\/page\/0\/length\/25\/sort_by\/rank\/sort_order\/asc\/cols\/stats\u0022\u003ETimes Higher Education\/Wall Street Journal rankings\u003C\/a\u003E, when the College was named the No. 8 program in the world.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EOther GT ranking highlights\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Over the past several years,\u0026rdquo; Galil continued, \u0026ldquo;we have made deliberate, strategic investments of time and treasure, both in research and education, and this recognition is one of the fruits of those efforts.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe College of Computing was just one of many Georgia Tech programs to place highly in the 2018 rankings.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022https:\/\/coe.gatech.edu\/\u0022\u003ECollege of Engineering\u003C\/a\u003E also ranked No. 8 (No. 4 among public universities), and all 11 of its programs ranked in the top 10. In the \u003Ca href=\u0022https:\/\/cos.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003ECollege of Sciences\u003C\/a\u003E, Chemistry jumped four to No. 20, Math moved up two to No. 26, Physics moved up one to No. 28, Earth Sciences moved up four to No. 38, and Biology moved up one to No. 54. Within mathematics, the discrete math\/combinatorics specialty had Georgia Tech at No. 2, up two positions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.usnews.com\/best-graduate-schools\u0022 target=\u0022_blank\u0022\u003E[READ:\u0026nbsp;U.S. 
News and World Report 2019 Graduate School Rankings]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022https:\/\/www.scheller.gatech.edu\/index.html\u0022 target=\u0022_blank\u0022\u003EScheller College of Business\u003C\/a\u003E full-time MBA program moved up one to No. 28, and its part-time MBA moved up five to No. 25. Scheller was also ranked in the following specialties:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EProduction\/Operations (No. 7)\u003C\/li\u003E\r\n\t\u003Cli\u003ESupply Chain\/Logistics (No. 17)\u003C\/li\u003E\r\n\t\u003Cli\u003EInformation Systems (No. 12)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EIn the \u003Ca href=\u0022https:\/\/www.iac.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EIvan Allen College of Liberal Arts\u003C\/a\u003E, the Sam Nunn School of Public Policy moved up two to No. 43 overall with the Information and Technology Management specialty remaining at No. 2, Public Policy Analysis debuting at No. 20 and the Environmental Policy and Management specialty debuting at No. 12.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech\u0027s computer science moves up list of best U.S. 
graduate schools."}],"uid":"32045","created_gmt":"2018-03-19 17:27:27","changed_gmt":"2018-03-21 17:28:30","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-03-19T00:00:00-04:00","iso_date":"2018-03-19T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"603992":{"id":"603992","type":"image","title":"GT Computing Binary Bridge code close up","body":null,"created":"1521483862","gmt_created":"2018-03-19 18:24:22","changed":"1521483862","gmt_changed":"2018-03-19 18:24:22","alt":"Close up of Binary Bridge at Georgia Tech","file":{"fid":"230210","name":"BinaryBridge_july16 copy 2.JPG","image_path":"\/sites\/default\/files\/images\/BinaryBridge_july16%20copy%202.JPG","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/BinaryBridge_july16%20copy%202.JPG","mime":"image\/jpeg","size":244714,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/BinaryBridge_july16%20copy%202.JPG?itok=t4Ue_YCg"}}},"media_ids":["603992"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"545781","name":"Institute for Data Engineering and Science"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"},{"id":"576491","name":"CRNCH"},{"id":"1305","name":"Georgia Tech Academic Advising Network (GTAAN)"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"131901","name":"Provost"},{"id":"430601","name":"Institute for Information Security and Privacy"}],"categories":[],"keywords":[{"id":"177484","name":"US News rankings"},{"id":"177485","name":"eighth 
place"},{"id":"2523","name":"cs"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EMike Terrazas, Communications Director\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:mterraza@cc.gatech.edu?subject=U.S.%20News%202019%20Best%20Graduate%20Schools\u0022\u003Emterraza@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["mterraza@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"602797":{"#nid":"602797","#data":{"type":"news","title":"IC Assistant Professor Alex Endert Earns NSF CAREER Award","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Assistant Professor \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/7069\/alex-enderts\u0022\u003EAlex Endert\u003C\/a\u003E\u003C\/strong\u003E received a CAREER Award from the \u003Ca href=\u0022https:\/\/www.nsf.gov\/\u0022\u003ENational Science Foundation\u003C\/a\u003E for a project titled \u003Cem\u003ECAREER: Visual Analytics by Demonstration for Interactive Data Analysis.\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award, which will total $493,000 paid out over the course of five years, begins on May 1 and builds on Endert\u0026rsquo;s prior work on demonstration-based user interaction to create tools that make data science more usable and accessible to people without formal data science training.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research will build knowledge in how people can visually demonstrate their questions about data. In turn, visual analytic system interfaces will need to change to interpret these demonstrations and perform the appropriate analytic operations. 
Finally, people will be able to leverage complex and powerful analytic functions without the need to provide formal parameterizations of the model being used.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In today\u0026rsquo;s data-driven era, everyday decisions are becoming increasingly data-driven problems,\u0026rdquo; Endert explained. \u0026ldquo;While this provides opportunity for people to make better decisions, it requires technology for visual data analysis to become easier to use for people without formal data science training.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s not just people in business who are constantly utilizing data to inform decisions. Everyday people encounter data on a daily basis \u0026ndash; comparing car models, searching for houses, and more. Endert noted impactful areas of interest like health care and national security.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This research will create new methods for people to interact with data, focusing on domains of interest to society including health care and national security,\u0026rdquo; said Endert, who added that he and his students will develop visual analytic prototypes released on the web, toolkits for developers to leverage, adopt and expand research, and provide empirical evidence to support the increase in usability.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA key challenge, Endert said, is fostering the interactive feedback loop between people and systems. 
The overall goal is to simplify aspects of this iterative process by building by-demonstration alternatives to existing control panels, which provide precise yet complex controls.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If successful, the proposed work has the potential to transform user interfaces for data science systems,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEndert is part of a separate team of researchers that was \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/news\/600879\/georgia-tech-tufts-university-and-wisconsin-researchers-awarded-27m-make-data-science\u0022\u003Erecently awarded $2.7 million\u003C\/a\u003E from the \u003Ca href=\u0022https:\/\/www.darpa.mil\/\u0022\u003EDefense Advanced Research Projects Agency\u003C\/a\u003E \u003Ca href=\u0022https:\/\/www.darpa.mil\/program\/data-driven-discovery-of-models\u0022\u003EData-Driven Discovery of Models\u003C\/a\u003E program to study similar advances in the accessibility of data science.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The award will total $493,000 paid out over five years and builds on Endert\u2019s prior work on tools that make data science more accessible to people without formal data science training."}],"uid":"33939","created_gmt":"2018-02-22 21:14:33","changed_gmt":"2018-02-22 21:14:33","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-02-22T00:00:00-05:00","iso_date":"2018-02-22T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"355601":{"id":"355601","type":"image","title":"Alex Endert - Compressed","body":null,"created":"1449245756","gmt_created":"2015-12-04 16:15:56","changed":"1475895087","gmt_changed":"2016-10-08 02:51:27","alt":"Alex Endert - 
Compressed","file":{"fid":"202040","name":"alex-endert.jpg","image_path":"\/sites\/default\/files\/images\/alex-endert.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/alex-endert.jpg","mime":"image\/jpeg","size":14268,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/alex-endert.jpg?itok=fsG4En41"}}},"media_ids":["355601"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"112421","name":"alex endert"},{"id":"92811","name":"data science"},{"id":"176401","name":"visual analytics"},{"id":"363","name":"NSF"},{"id":"362","name":"National Science Foundation"},{"id":"174710","name":"National Science Foundation CAREER Award"},{"id":"9413","name":"CAREER Award"},{"id":"7842","name":"NSF CAREER Award"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"}],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"602715":{"#nid":"602715","#data":{"type":"news","title":"Professor Amy Bruckman Joins Seven Other IC Faculty in CHI Academy","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Professor \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E can still remember the first paper she ever presented at the ACM CHI Conference on 
Human Factors in Computing Systems (CHI).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt was in 2001 when she was early in her time as a faculty member at Georgia Tech. Co-authored with Jason Ellis, the paper was titled \u003Cem\u003EDesigning Palaver Tree Online: Supporting Social Roles in a Community of Oral History\u003C\/em\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENow, 17 years later, Bruckman joins an ever-expanding list of Georgia Tech faculty that have earned entry into the CHI Academy. She was announced this month as a 2018 inductee into the prestigious group. She is one of eight who will be inducted this year, and she is the eighth Georgia Tech faculty member, all from the School of Interactive Computing, to join the group.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was such a big honor to have a single paper in CHI as a young researcher,\u0026rdquo; Bruckman said. \u0026ldquo;To be actually inducted into the CHI Academy is beyond words. All I can say is that I\u0026rsquo;m honored.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFellow IC Professors \u003Cstrong\u003EBeki Grinter\u003C\/strong\u003E and \u003Cstrong\u003EJim Foley\u003C\/strong\u003E provided a nomination for Bruckman to the CHI Academy. In it, they highlighted the depth and breadth of her research in content creation for educational purposes, social computing, and examination of the adoption of online social systems in countries like Cuba. 
A second sustained emphasis of her research, the nomination said, highlights the ethical issues that affect our community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In this, not only has she demonstrated research excellence, but also a commitment to serving SIGCHI,\u0026rdquo; they wrote.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMore than her own induction, Bruckman noted what it means to have Georgia Tech continuously recognized for its commitment to the field of human-computer interaction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech has seen new members join the CHI Academy in five of the past six years. Professor \u003Cstrong\u003EThad Starner\u003C\/strong\u003E was inducted in 2017, Professor \u003Cstrong\u003EJohn Stasko\u003C\/strong\u003E in 2016, Professor \u003Cstrong\u003EKeith Edwards\u003C\/strong\u003E in 2014, and Grinter in 2013. Before that recent run, Professors \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E and \u003Cstrong\u003EBeth Mynatt\u003C\/strong\u003E were inducted in back-to-back years in 2008-09. Professor Emeritus Jim Foley, who retired in December, was the first of a long line of successful researchers in 2001.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;My colleagues have been making an impact in the field for a long time,\u0026rdquo; Bruckman said. \u0026ldquo;It\u0026rsquo;s humbling to be added to that group.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGrinter said that it\u0026rsquo;s been Georgia Tech\u0026rsquo;s commitment to human-computer interaction that has resulted in this kind of international recognition.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;ve been committed to a vision in which HCI plays a critical role,\u0026rdquo; Grinter said. 
\u0026ldquo;So, as we\u0026rsquo;ve recruited and retained key faculty over time, we\u0026rsquo;ve been recognized by the CHI Academy.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOr, as Foley simply put it:\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Great faculty get recognized.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBruckman will be recognized at CHI 2018, which will be held on April 21-26 in Montr\u0026eacute;al, Canada.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Professor Amy Bruckman was announced this month as a 2018 inductee into the prestigious CHI Academy."}],"uid":"33939","created_gmt":"2018-02-21 19:59:42","changed_gmt":"2018-02-21 19:59:42","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-02-21T00:00:00-05:00","iso_date":"2018-02-21T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"590524":{"id":"590524","type":"image","title":"Amy Bruckman","body":null,"created":"1492457925","gmt_created":"2017-04-17 19:38:45","changed":"1492457925","gmt_changed":"2017-04-17 19:38:45","alt":"Professor Amy Bruckman to serve as School of Interactive Computing Interim Chair","file":{"fid":"224980","name":"asb_full.jpg","image_path":"\/sites\/default\/files\/images\/asb_full.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/asb_full.jpg","mime":"image\/jpeg","size":74680,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/asb_full.jpg?itok=717qrDXl"}}},"media_ids":["590524"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"8472","name":"amy bruckman"},{"id":"177194","name":"CHI Academy"},{"id":"1027","name":"chi"},{"id":"166848","name":"School of Interactive 
Computing"},{"id":"654","name":"College of Computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"600879":{"#nid":"600879","#data":{"type":"news","title":"Georgia Tech, Tufts University, and Wisconsin Researchers Awarded $2.7M to Make Data Science More Accessible","body":[{"value":"\u003Cp\u003EResearchers at the Georgia Institute of Technology, Tufts University, and University of Wisconsin will develop new techniques to make machine learning in data science more accessible to non-data scientists under a $2.7 million grant from the \u003Ca href=\u0022https:\/\/www.darpa.mil\/\u0022\u003EDefense Advanced Research Projects Agency\u003C\/a\u003E (DARPA) \u003Ca href=\u0022https:\/\/www.darpa.mil\/program\/data-driven-discovery-of-models\u0022\u003EData-Driven Discovery of Models\u003C\/a\u003E (D3M) program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOver the years, advances in machine learning have resulted in more complex, and more powerful, applications in information visualization. As a consequence, machine learning techniques to achieve specific insights from data have also gotten more complicated. 
Most of the tools being built require a data science degree or some formal data science training to use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThus, the gap between subject matter experts \u0026ndash; international politics majors, historians, biology experts, or climatologists, for example \u0026ndash; and the complexity of the machine learning tools used to contextualize data will continue to grow.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Often, these experts have a wealth of knowledge about things like international affairs or cybersecurity, but they don\u0026rsquo;t have a wealth of knowledge of what it means to use machine learning model X, Y, or Z,\u0026rdquo; said Alex Endert, an assistant professor in the School of Interactive Computing at Georgia Tech, one of the four collaborators on the project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurrently, tools for adjusting parameters on the data consist of buttons, control panels, dropdown menus, sliders, knobs, and fields for entering values \u0026ndash; direct manipulations that define a machine learning model and steer it toward the desired output.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis is less intuitive for non-data scientists, so the aim for the researchers is to move the user interaction into the visual space. Users could adjust the data within a scatter plot, for example, by zooming or panning, coloring items, or generally demonstrating areas of interest inside the data. The system could then infer how those parameters should change as a result of the exploration of the data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If we are successful, we have the chance to bring data analysis to the public,\u0026rdquo; said principal investigator Remco Chang, an associate professor in the Tufts University Department of Computer Science. \u0026ldquo;But to get there, we will need to allow the end users to be able to intuitively ask questions about their data that can be formalized and executed in machine learning. 
We need to allow the user to make sense of the complex results from machine learning and help contextualize the results in the user\u0026rsquo;s domain.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe grant, which took effect earlier this year, will fund four years of research. Other participants are Georgia Tech School of Interactive Computing Professor John Stasko, and University of Wisconsin Department of Computer Science Professor Michael Gleicher.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDARPA\u0026rsquo;s D3M program aims to develop automated model discovery systems that enable users with subject matter expertise but no data science background to create empirical models of real, complex processes. Automated model discovery systems developed by the D3M program will be tested on real-world problems that will progressively get harder during the course of the program. Toward the end of the program, D3M will target problems that are both unsolved and underspecified in terms of data and instances of outcomes available for modeling.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Researchers at the Georgia Institute of Technology, Tufts University, and University of Wisconsin will develop new techniques to make machine learning in data science more accessible to non-data scientists."}],"uid":"33939","created_gmt":"2018-01-16 18:34:04","changed_gmt":"2018-02-09 18:48:45","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-01-16T00:00:00-05:00","iso_date":"2018-01-16T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"600878":{"id":"600878","type":"image","title":"Internet map","body":null,"created":"1516127458","gmt_created":"2018-01-16 18:30:58","changed":"1516127458","gmt_changed":"2018-01-16 18:30:58","alt":"Internet 
map","file":{"fid":"229045","name":"Internet_map_1024.jpg","image_path":"\/sites\/default\/files\/images\/Internet_map_1024.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Internet_map_1024.jpg","mime":"image\/jpeg","size":1120421,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Internet_map_1024.jpg?itok=cHnLRuk_"}}},"media_ids":["600878"],"related_links":[{"url":"http:\/\/vis.gatech.edu\/","title":"Georgia Tech Visualization Lab"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"},{"id":"545781","name":"Institute for Data Engineering and Science"}],"categories":[],"keywords":[{"id":"13253","name":"DARPA grant"},{"id":"9167","name":"machine learning"},{"id":"172922","name":"information visualization"},{"id":"112421","name":"alex endert"},{"id":"11632","name":"john stasko"},{"id":"92811","name":"data science"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"176784","name":"tufts university"},{"id":"176785","name":"university of wisconsin"}],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"601772":{"#nid":"601772","#data":{"type":"news","title":"Grad Students in 3 Degree Programs Make Their Pitch to Industry at 
Interactivity 2018","body":[{"value":"\u003Cp\u003EGraduate students from three degree programs participated in \u003Ca href=\u0022http:\/\/interactivity.cc.gatech.edu\/\u0022\u003EInteractivity 2018\u003C\/a\u003E on Thursday, pitching themselves and their research to potential employers in the annual One-Minute Madness and poster session at the Historic Academy of Medicine in midtown Atlanta.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne hundred and thirty-six students participated in the event, pitching to at least 150 industry guests from 75 different companies. The participants came from the MS-HCI, MS-Digital Media, and MS-Industrial Design degree programs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This event is such a great opportunity for our students to get in front of real industry people who are here to hire somebody,\u0026rdquo; said School of Interactive Computing Professor of the Practice and MS-HCI Program Director \u003Cstrong\u003ERichard Henneman\u003C\/strong\u003E, who leads the event.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd even for those who aren\u0026rsquo;t hiring, Interactivity is a great opportunity for related industries to stay connected with some of the best and brightest young minds coming through the academic ranks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;ve had some who aren\u0026rsquo;t hiring, but come just because they want to get to know our students and the interesting research they are engaged in,\u0026rdquo; Henneman said. \u0026ldquo;That\u0026rsquo;s the kind of reputation these students have.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInteractivity is presented by the \u003Ca href=\u0022http:\/\/gvu.gatech.edu\/\u0022\u003EGVU Center\u003C\/a\u003E and sponsored by Mailchimp. 
GVU Director \u003Cstrong\u003EKeith Edwards\u003C\/strong\u003E expressed his affinity for the event during an announcement at the beginning of the One-Minute Madness session.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is one of my all-time favorite events,\u0026rdquo; he said. \u0026ldquo;I think you\u0026rsquo;ll be so impressed by the originality, but most of all by the quality of the work.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESome of the work this year included:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EA method to reveal criminal activity by visualizing the transportation trajectories of human trafficking,\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003Ea project by an MS-HCI student with an affinity for screenwriting to improve the in-store clothes-shopping experience for individuals in wheelchairs,\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003Eand story development and prototyping of novel virtual reality concepts.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EResumes of the participating students \u003Ca href=\u0022http:\/\/interactivity.cc.gatech.edu\/attendees\/\u0022\u003Ecan be found online here\u003C\/a\u003E. 
For photos from the event, go to \u003Ca href=\u0022https:\/\/www.flickr.com\/photos\/ccgatech\/albums\/72157663255245947\u0022\u003Ethis album on the College of Computing\u0026rsquo;s Flickr feed\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"One hundred and thirty-six students participated in the event, pitching to at least 150 industry guests from 75 different companies."}],"uid":"33939","created_gmt":"2018-02-02 16:40:48","changed_gmt":"2018-02-02 16:40:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-02-02T00:00:00-05:00","iso_date":"2018-02-02T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"601771":{"id":"601771","type":"image","title":"Interactivity 2018","body":null,"created":"1517589211","gmt_created":"2018-02-02 16:33:31","changed":"1517589211","gmt_changed":"2018-02-02 16:33:31","alt":"Interactivity 2018","file":{"fid":"229376","name":"IMG_1899.jpg","image_path":"\/sites\/default\/files\/images\/IMG_1899.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_1899.jpg","mime":"image\/jpeg","size":149122,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_1899.jpg?itok=de37N3Da"}}},"media_ids":["601771"],"related_links":[{"url":"https:\/\/www.flickr.com\/photos\/ccgatech\/albums\/72157663255245947","title":"Photos from Interactivity 2018"},{"url":"http:\/\/interactivity.cc.gatech.edu\/","title":"Interactivity 2018"},{"url":"http:\/\/gvu.gatech.edu\/index.php?q=home-page","title":"GVU Center"},{"url":"https:\/\/www.ic.gatech.edu\/academics\/master-science-human-computer-interaction","title":"MS-HCI"},{"url":"https:\/\/id.gatech.edu\/mid","title":"MS-ID"},{"url":"http:\/\/dm.lmc.gatech.edu\/program\/ms-program\/","title":"MS-DM"}],"groups":[{"id":"47223","name":"College of 
Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"21141","name":"interactivity"},{"id":"107441","name":"ms-hci"},{"id":"176991","name":"ms-digital media"},{"id":"176992","name":"ms-industrial design"},{"id":"176993","name":"ms-dm"},{"id":"176994","name":"ms-id"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"},{"id":"111831","name":"Richard Henneman"},{"id":"13541","name":"Keith Edwards"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"599583":{"#nid":"599583","#data":{"type":"news","title":"Iconic IC Professor, GVU Center Founder Jim Foley Bids Farewell","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Professor \u003Cstrong\u003EJim Foley\u003C\/strong\u003E was in the midst of some well-deserved personal leave earlier this year when he had a realization. He was traveling around the world, skiing, swimming, and playing with his trains, a favorite hobby of his since childhood.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was coming back next semester to teach,\u0026rdquo; he said, referring to his planned return in Spring 2018, \u0026ldquo;but I said, \u0026lsquo;Wait a minute. 
I\u0026rsquo;m enjoying this too much!\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/cc.gatech.edu\/\u0022\u003ECollege of Computing\u003C\/a\u003E icon, who came to Georgia Tech in 1991 to establish the \u003Ca href=\u0022http:\/\/gvu.gatech.edu\/\u0022\u003EGVU Center\u003C\/a\u003E, instead elected to retire from teaching. It will be a welcome break for an individual who has left a vibrant mark on the College, the \u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC), and a number of associated centers, institutes, and labs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Jim has always been and always will be my personal role model for thoughtful and graceful leadership,\u0026rdquo; said \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/amy-bruckman\u0022\u003E\u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E\u003C\/a\u003E, professor and interim IC chair. \u0026ldquo;We\u0026rsquo;re going to miss him so terribly here at Georgia Tech.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EA Distinguished Career\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EFoley guided the GVU Center from inception until 1996, helping it garner a No. 1 ranking that final year for graduate computer science research in graphics and user interaction by U.S. News and World Report. 
He is a Fellow of the \u003Ca href=\u0022https:\/\/www.acm.org\/\u0022\u003EAssociation for Computing Machinery\u003C\/a\u003E (ACM), the \u003Ca href=\u0022https:\/\/www.ieee.org\/index.html\u0022\u003EInstitute of Electrical and Electronics Engineers\u003C\/a\u003E (IEEE), and the \u003Ca href=\u0022https:\/\/www.aaas.org\/\u0022\u003EAmerican Association for the Advancement of Science\u003C\/a\u003E (AAAS), an inaugural member of the \u003Ca href=\u0022https:\/\/en.wikipedia.org\/wiki\/CHI_Academy\u0022\u003EACM\/CHI Academy\u003C\/a\u003E, and a recipient of two lifetime achievement awards: the biennial ACM\/SIGGRAPH Stephen Coons Award for Outstanding Creative Contributions to Computer Graphics and the ACM\/SIGCHI Lifetime Achievement Award. In 2008, he was elected to the \u003Ca href=\u0022https:\/\/www.nae.edu\/\u0022\u003ENational Academy of Engineering\u003C\/a\u003E and also received Georgia Tech\u0026rsquo;s highest faculty honor, the Class of 1934 Distinguished Professor Award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe co-authored four widely used graphics textbooks and advised nine students over the course of his 27 years at Tech. Two of them, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/elizabeth-mynatt\u0022\u003E\u003Cstrong\u003EBeth Mynatt\u003C\/strong\u003E\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/melody-jackson\u0022\u003E\u003Cstrong\u003EMelody Moore Jackson\u003C\/strong\u003E\u003C\/a\u003E, are now IC faculty members.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHis ability to guide students through their academic journeys was clear from the time he joined the College. He was named \u0026ldquo;most likely to make students want to grow up to be professors\u0026rdquo; in 1992. Mynatt was one of those graduate students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was having trouble,\u0026rdquo; said Mynatt of her graduate school experience. 
\u0026ldquo;I couldn\u0026rsquo;t find my path, and I was probably a semester away from walking out the door. I walked into his office and asked for a second of his time, said I had my project and my funding and I promised never to bother him, but could he please be my advisor. It was such a tremendous impact on my entire life that he said yes. He\u0026rsquo;s continued to be my advisor every single day since then.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMynatt earned her \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/phd-computer-science\u0022\u003EPh.D. in computer science\u003C\/a\u003E shortly thereafter, in 1995, and joined the faculty at Georgia Tech in 1998.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFoley came to Georgia Tech in 1991 after being recruited by College of Computing Dean \u003Cstrong\u003EPeter Freeman\u003C\/strong\u003E and then-Georgia Tech president \u003Cstrong\u003EPat Crecine\u003C\/strong\u003E. Crecine had been a provost at Carnegie Mellon, where Foley could see the power and influence of having a separate computing college. Foley was excited by the vision of the new College of Computing, which was to push beyond traditional computer science to its interaction with other disciplines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Pat pushed a broad vision of computing here at Tech,\u0026rdquo; Foley said. \u0026ldquo;He was big on new media and the future of interactive computing, so they recruited me to come and do something here.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere was already a small but enthusiastic group of faculty devoted to graphics and user interface research at the College of Computing. 
Foley was able to take that group and the resources provided by the Institute to establish and grow the GVU Center into a nationally prominent organization in an astonishingly short period of time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;ve enjoyed all of my 27 years at Georgia Tech, but those five years really stand out to me,\u0026rdquo; Foley said. \u0026ldquo;Those were heady times. There was a lot of excitement. The college was new, the GVU Center was new, we were growing, and we were getting national recognition. It was just very exciting. Everyone was committed, working hard, and making the GVU Center into what it became.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Jim was, obviously, the instrumental person in founding the GVU Center,\u0026rdquo; said Professor Keith Edwards, the current GVU director. \u0026ldquo;He defined what its mission would be, how its people would work together, and how the community would come together.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe center changed directors in 1996, when Foley briefly left for Mitsubishi Research, but his impact has been felt continuously in the succeeding years. So much so that in 2008, Mynatt, the GVU Center director at that time, led a fundraising campaign amongst GVU faculty, students, and friends to establish the Foley Scholars Endowment, which funds two $5,000 scholarships awarded annually to GVU-affiliated graduate students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was really overwhelmed by how many contributed to the endowment, and by the continuing contributions,\u0026rdquo; Foley said. \u0026ldquo;It has supported 20 awards to some of the strongest GVU students. 
It\u0026rsquo;s a very humbling experience.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003E\u0026lsquo;Remember How We Got Here\u0026rsquo;\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EFoley\u0026rsquo;s career has gone through a number of metamorphoses over the years. He started out as an electrical engineer at Lehigh University, drawing on his childhood dream of being an engineer \u0026ldquo;of a different kind.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I had toy trains and did a lot of electrical wiring as a boy,\u0026rdquo; he said. \u0026ldquo;That led me to electrical engineering at Lehigh University.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere, he was introduced to computers and did some programming. He followed his undergraduate work with a degree in Computer Information and Control Engineering at the University of Michigan, learning about computer graphics and setting up the next stage of his career.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen he was first getting into computing, there were individuals from various fields \u0026ndash; electrical engineering, math, physics, and more \u0026ndash; beginning to pursue similar lines of study. That experience laid the foundation for his belief in collegiality and collaboration across research areas.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;ve always been a believer in the power of many,\u0026rdquo; Foley said. \u0026ldquo;Being able to collaborate with others has been a real high point in my career.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIndeed, that is one thing that attracted him to Georgia Tech. 
Edwards said Foley helped make Georgia Tech\u0026rsquo;s reputation as a leader in collaboration that much more impressive.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I think one of the things that gets overlooked is that he was instrumental in defining GVU\u0026rsquo;s culture \u0026ndash; that we could have people from greatly different disciplines come together, respect each other, learn from each other, and work together. Jim was the role model for how to do this, since he lived it every day, and people emulated him because of that. Those seeds really took root because now, 25 years later, I think the cultural influence here in GVU is what he started: an open, collaborative, respectful, and fun group to work with.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAsked for the message he wants to leave his colleagues and students with at Georgia Tech, Foley offered familiar sentiments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Firstly, what I learned from my parents: From my mom, I learned determination and to keep going after my goals,\u0026rdquo; he said. \u0026ldquo;From my dad, I learned to be kind to everyone. Be courteous, be friendly.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Secondly, I recognize that whatever I\u0026rsquo;ve been able to accomplish has been with the help of many others. None of us have achieved our goals on our own. 
So, I say to everyone \u0026ndash; to my faculty colleagues, to students, to friends: Remember how we got here, and help others achieve their own goals.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Retiring School of Interactive Computing Professor Jim Foley looks back on his distinguished career."}],"uid":"33939","created_gmt":"2017-12-05 21:33:35","changed_gmt":"2017-12-05 21:33:35","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-12-05T00:00:00-05:00","iso_date":"2017-12-05T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"249711":{"id":"249711","type":"image","title":"Jim Foley in GVU Center","body":null,"created":"1449243795","gmt_created":"2015-12-04 15:43:15","changed":"1475894929","gmt_changed":"2016-10-08 02:48:49","alt":"Jim Foley in GVU Center","file":{"fid":"198066","name":"08c1214-p4-032.jpg","image_path":"\/sites\/default\/files\/images\/08c1214-p4-032_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/08c1214-p4-032_0.jpg","mime":"image\/jpeg","size":2952210,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/08c1214-p4-032_0.jpg?itok=1uj74pZm"}}},"media_ids":["249711"],"related_links":[{"url":"http:\/\/gvu.gatech.edu","title":"GVU Center at Georgia Tech"},{"url":"http:\/\/gvu.gatech.edu\/james-d-foley-gvu-center-endowment","title":"Foley Scholar Endowment"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"78531","name":"Jim Foley"},{"id":"4887","name":"GVU Center"},{"id":"175331","name":"Foley Scholars Program"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of 
Computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"599503":{"#nid":"599503","#data":{"type":"news","title":"IC Professor John Stasko Earns Grant to Explore Future Interfaces for Data Visualization","body":[{"value":"\u003Cp\u003EThe film industry has explored a future in which characters interact with large, projected wall displays through speech, gaze, and gesture. Characters like Tony Stark from the \u003Cem\u003EIron Man\u003C\/em\u003E franchise or those in \u003Cem\u003EMinority Report\u003C\/em\u003E can perform data exploration and analysis activities using various visualizations by non-haptic means.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EToday\u0026rsquo;s systems for data visualization, which utilize desktop and laptop computers to interact via mouse-driven direct manipulation interfaces following the window-icon-menu-pointer (WIMP) paradigm, pale in comparison to the natural, fluid interactions presented in those futuristic film sequences.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrough a \u003Ca href=\u0022https:\/\/www.nsf.gov\/awardsearch\/showAward?AWD_ID=1717111\u0022\u003Enew grant\u003C\/a\u003E provided by the \u003Ca href=\u0022https:\/\/www.nsf.gov\/index.jsp\u0022\u003ENational Science Foundation\u003C\/a\u003E (NSF) Information \u003Ca href=\u0022https:\/\/www.nsf.gov\/funding\/pgm_summ.jsp?pims_id=503303\u0026amp;org=CISE\u0022\u003EIntegration and Informatics (III) program\u003C\/a\u003E, School of Interactive Computing Professor \u003Ca 
href=\u0022https:\/\/www.cc.gatech.edu\/people\/john-stasko\u0022\u003EJohn Stasko\u003C\/a\u003E aims to explore, design, develop, and evaluate post-WIMP interfaces for data visualization and data analytics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project, titled \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/gvu\/ii\/naturalvis\/\u0022\u003E\u003Cem\u003ECreating Natural Data Visualization and Analysis Environments\u003C\/em\u003E\u003C\/a\u003E, has received three years of funding worth a total of $493,752.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo move beyond WIMP interfaces, Stasko said, new forms of natural user interfaces (NUIs) employing multimodal interactions such as speech, pen, touch, gestures, gaze, and head and body movements must be developed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;While no one interaction modality may provide all desired capabilities, combinations of modalities \u0026ndash; speech, gaze, and pen, for example \u0026ndash; could provide a more natural, intuitive, and integrated interface experience,\u0026rdquo; Stasko said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn this scenario, the system could assist individuals who know the information they want to extract from their data but not the specific commands or interface actions to take in a visualization system to produce the proper charts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If the objects to be acted upon are not clear from speech commands, then gaze, gesture, and touch can clarify a person\u0026rsquo;s intent,\u0026rdquo; Stasko said. \u0026ldquo;Furthermore, these input modalities may excel when a conventional mouse and keyboard are not available.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStasko is assisted on the project by Ph.D. 
student \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/content\/arjun-srinivasan\u0022\u003EArjun Srinivasan\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Through a new grant provided by the National Science Foundation (NSF) Information Integration and Informatics (III) program, School of Interactive Computing Professor John Stasko aims to explore, design, develop, and evaluate interfaces for data analytics"}],"uid":"33939","created_gmt":"2017-12-04 19:34:15","changed_gmt":"2017-12-04 19:34:15","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-12-04T00:00:00-05:00","iso_date":"2017-12-04T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"599502":{"id":"599502","type":"image","title":"Minority report interface","body":null,"created":"1512415792","gmt_created":"2017-12-04 19:29:52","changed":"1512415792","gmt_changed":"2017-12-04 19:29:52","alt":"Man uses interactive visualization interface","file":{"fid":"228556","name":"Minority Report.jpg","image_path":"\/sites\/default\/files\/images\/Minority%20Report.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Minority%20Report.jpg","mime":"image\/jpeg","size":123635,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Minority%20Report.jpg?itok=AJcRNjiP"}}},"media_ids":["599502"],"related_links":[{"url":"https:\/\/www.cc.gatech.edu\/gvu\/ii\/","title":"Information Interfaces"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"11632","name":"john stasko"},{"id":"172922","name":"information visualization"},{"id":"176401","name":"visual analytics"},{"id":"33301","name":"data analytics"},{"id":"176402","name":"minority 
report"},{"id":"9614","name":"Iron Man"},{"id":"362","name":"National Science Foundation"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"599262":{"#nid":"599262","#data":{"type":"news","title":"Wearable Computing Ring Allows Users to Write Words and Numbers with Thumb","body":[{"value":"\u003Cp\u003EWith the whirl of a thumb, Georgia Tech researchers have created technology that allows people to trace letters and numbers on their fingers and see the figures appear on a nearby computer screen. The system is triggered by a thumb ring outfitted with a gyroscope and tiny microphone. As wearers strum their thumb across the fingers, the hardware detects the movement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn a \u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=6IIx7nceVeY\u0026amp;feature=youtu.be\u0022\u003Evideo demonstration\u003C\/a\u003E, the \u0026ldquo;written\u0026rdquo; figures appear on an adjacent screen. In the future, the researchers say the technology could be used to send phone calls to voicemail or answer text messages \u0026mdash; all without the wearer reaching for their phone or even looking at it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When a person grabs their phone during a meeting, even if trying to silence it, the gesture can infringe on the conversation or be distracting,\u0026rdquo; said Thad Starner, the Georgia Tech School of Interactive Computing professor leading the project. 
\u0026ldquo;But if they can simply send the call to voicemail, perhaps by writing an \u0026lsquo;x\u0026rsquo; on their hand below the table, there isn\u0026rsquo;t an interruption.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStarner also said the technology could be used in virtual reality, replacing the need to take off a head-mounted device in order to input commands via a mouse or keyboard.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research team wanted to build a system that would always be available and easy to use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A ring augments the fingers in a way that is fairly non-obstructive during daily activities. A ring is also socially acceptable, unlike other wearable input devices,\u0026rdquo; said Cheng Zhang, the Georgia Tech graduate student who created the technology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe system is called FingerSound. While other gesture-based systems require the user to perform gestures in the air, FingerSound uses the fingers as a canvas. This allows the system to clearly recognize the beginning and end of an intended gesture by using the microphone and gyroscope to detect the signal. In addition to helping recognize the start and end of a gesture, the fingers provide tactile feedback as the wearer performs the gestures. This feedback is crucial for user experience and is missing from other in-air gesture systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our system uses sound and movement to identify intended gestures, which improves the accuracy compared to a system just looking for movements,\u0026rdquo; said Zhang. \u0026ldquo;For instance, to a gyroscope, random finger movements during walking may look very similar to the thumb gestures. 
But based on our investigation, the sounds caused by these daily activities are quite different from each other.\u0026rdquo; \u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFingerSound sends the sound captured by the contact microphone and motion data captured by the gyroscope sensor through multiple filtering mechanisms. The system then analyzes them to determine whether a gesture was performed or whether it was simply noise from other finger-related activity.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research was presented earlier this year at UbiComp and the ACM International Symposium on Wearable Computing along with two other papers that feature ring-based gesture technology. \u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=GTiisg_gqwA\u0026amp;feature=youtu.be\u0022\u003EFingOrbits\u003C\/a\u003E allows the wearer to control apps on a smartwatch or head-mounted display by rubbing their thumb on their hand. With \u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=m-i3HJrNc0A\u0026amp;feature=youtu.be\u0022\u003ESoundTrak\u003C\/a\u003E, people can write words or 3-D doodles in the air by localizing the absolute position of the finger in 3-D space, then see the results simultaneously on a computer screen.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new technologies were developed by the same team that created a technique that \u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2017\/01\/24\/new-techniques-allow-greater-control-smartwatches\u0022\u003Eallowed smartwatch wearers to control their device by tapping its sides\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"Technology provides eyes-free way to interact with smart devices"}],"field_summary":[{"value":"\u003Cp\u003EWith the whirl of a thumb, Georgia Tech researchers have created technology that allows people to trace letters and numbers on their fingers and see the figures appear on a nearby computer screen. 
The system is triggered by a thumb ring outfitted with a gyroscope and tiny microphone. As wearers strum their thumb across the fingers, the hardware detects the movement.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Technology allows people to trace letters and numbers on their fingers and see the figures appear on a nearby computer screen."}],"uid":"27560","created_gmt":"2017-11-29 17:34:44","changed_gmt":"2017-12-01 13:45:26","author":"Jason Maderer","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-11-29T00:00:00-05:00","iso_date":"2017-11-29T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"599254":{"id":"599254","type":"image","title":"FingerSound Number Illustrations ","body":null,"created":"1511975225","gmt_created":"2017-11-29 17:07:05","changed":"1511975225","gmt_changed":"2017-11-29 17:07:05","alt":"FingerSound Gestures ","file":{"fid":"228450","name":"gestures_2.png","image_path":"\/sites\/default\/files\/images\/gestures_2.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/gestures_2.png","mime":"image\/png","size":256460,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/gestures_2.png?itok=yjHlP5Uy"}}},"media_ids":["599254"],"related_links":[{"url":"http:\/\/www.news.gatech.edu\/2017\/01\/24\/new-techniques-allow-greater-control-smartwatches","title":"Previous Research with Smartwatches"},{"url":"https:\/\/www.ic.gatech.edu\/","title":"School of Interactive Computing"},{"url":"https:\/\/www.cc.gatech.edu\/home\/thad\/","title":"Meet Thad Starner"}],"groups":[{"id":"1214","name":"News Room"},{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[{"id":"10353","name":"wearable 
computing"},{"id":"176353","name":"FingerSound"},{"id":"176354","name":"finger"},{"id":"1944","name":"Thad Starner"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJason Maderer\u003Cbr \/\u003E\r\nNational Media Relations\u003Cbr \/\u003E\r\nmaderer@gatech.edu\u003Cbr \/\u003E\r\n404-660-2926\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["maderer@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"598129":{"#nid":"598129","#data":{"type":"news","title":"GT Computing Faculty and Alum Awarded ASSETS Paper Impact Award","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Professors \u003Cstrong\u003EKeith Edwards\u003C\/strong\u003E and \u003Cstrong\u003EBeth Mynatt\u003C\/strong\u003E were given the 2017 ASSETS Paper Impact Award for their 1994 paper \u003Cem\u003EProviding Access to Graphical User Interfaces \u0026ndash; Not Graphical Screens\u003C\/em\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award is given every other year to the authors of a paper from the ASSETS conference that was presented at least 10 years ago, and has had significant and sustained impact in the literature.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECollege of Computing and GVU Center alum \u003Cstrong\u003EKathryn Stockton\u003C\/strong\u003E (M.S. CS, \u0026rsquo;94) was also a co-author of the paper and was recognized for her contributions, as well.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEdwards and Mynatt, the current and former directors of the GVU Center, were presented the award at this year\u0026rsquo;s \u003Ca href=\u0022https:\/\/assets17.sigaccess.org\/\u0022\u003EASSETS conference\u003C\/a\u003E, taking place this week in Baltimore, Md. 
Each received a plaque, and the team was awarded a cash prize of $500.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe awarded paper highlighted the Mercator project, which had a significant and lasting impact on accessibility to graphical user interfaces. It was foundational in enabling and setting the direction of screen reader technology for \u003Ca href=\u0022https:\/\/en.wikipedia.org\/wiki\/X_Window_System\u0022\u003EX Windows\u003C\/a\u003E, and opening up opportunities for assistive technology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper was one of the first to raise and tackle the challenge of providing screen reader capabilities in graphical user interfaces. It proposed that translation of the GUI should be done at a semantic, rather than syntactic, level.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work includes several ideas that have proven to be important and influential in accessibility, including the use of auditory icons to represent different objects, audio formatting to confer status and other properties, and hierarchical modelling of containment and cause-effect relationships between interface objects.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe notion of defining user interfaces at an abstract level to allow for realization in many forms has been a major research thread in accessibility, leading to the development of several standards and underpinning ongoing efforts to develop personalized user interfaces.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"School of Interactive Computing Professors Keith Edwards and Beth Mynatt were given the 2017 ASSETS Paper Impact Award for their 1994 paper Providing Access to Graphical User Interfaces \u2013 Not Graphical Screens."}],"uid":"33939","created_gmt":"2017-10-31 15:05:53","changed_gmt":"2017-10-31 15:05:53","author":"David 
Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-10-31T00:00:00-04:00","iso_date":"2017-10-31T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"598128":{"id":"598128","type":"image","title":"Keith Edwards and Beth Mynatt Impact Award","body":null,"created":"1509462218","gmt_created":"2017-10-31 15:03:38","changed":"1509462218","gmt_changed":"2017-10-31 15:03:38","alt":"Beth Mynatt and Keith Edwards receive ASSETS Impact Award","file":{"fid":"228022","name":"Impact Award.jpeg","image_path":"\/sites\/default\/files\/images\/Impact%20Award.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Impact%20Award.jpeg","mime":"image\/jpeg","size":93920,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Impact%20Award.jpeg?itok=x5gq50yz"}}},"media_ids":["598128"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"13541","name":"Keith Edwards"},{"id":"10989","name":"Beth Mynatt"},{"id":"56611","name":"ASSETS"},{"id":"4887","name":"GVU Center"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"597580":{"#nid":"597580","#data":{"type":"news","title":"College of Computing 
Makes a Splash at GHC 2017 Orlando","body":[{"value":"\u003Cp\u003EA group of 57 College of Computing students recently traveled to Orlando, Fla., to attend the 2017 \u003Ca href=\u0022https:\/\/ghc.anitab.org\/\u0022\u003EGrace Hopper Celebration of Women in Computing (GHC)\u003C\/a\u003E as representatives of Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe celebration was held at the Orange County Convention Center from Oct. 4-6 and welcomed more than 17,000 female technologists from across the globe, as well as more than 100 companies in attendance to recruit the top talent in the tech industry. Keynote speakers included Melinda Gates of the Gates Foundation, Fei-Fei Li of Stanford University AI Lab, and Georgia Tech\u0026rsquo;s own \u003Ca href=\u0022http:\/\/robotics.gatech.edu\/faculty\/howard\u0022\u003EAyanna Howard\u003C\/a\u003E of the School of Electrical and Computer Engineering, among many other inspiring female leaders in STEM fields.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe College of Computing is a platinum-level sponsor of Grace Hopper Celebration and sends a number of scholarship attendees to the conference each year. Among this year\u0026rsquo;s 57 student attendees from Georgia Tech, 40 were on-campus undergraduate and graduate students based in Atlanta and 17 were online M.S. in Computer Science (OMS CS) students. An additional 15 current Georgia Tech graduate students attended as recruiters for their companies or through outside scholarships with companies like Microsoft and Disney.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If Georgia Tech wants to be known for our efforts to support women in computing, it\u0026rsquo;s important for us to have a presence at the nation\u0026rsquo;s foremost gathering of female technologists -- so that we can be allies with a passion for creating a diverse workforce to meet the growing needs of the industry,\u0026rdquo; said Jennifer Whitlow, director of computing enrollment in the College of Computing. 
\u0026ldquo;The conference provides current female computing students with amazing opportunities to network with others who share a similar background and pathway in the field, as well as the opportunity to seek career and graduate school opportunities with companies and universities from across the nation.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe OMS CS students in attendance traveled from all over the United States \u0026ndash; from San Francisco to Washington, D.C., to Las Vegas \u0026ndash; and Canada. Student Rwithu Menon even traveled from Bangalore, India, to attend and meet her fellow students, with plans to make a pit stop in Atlanta on her way home in order to see Georgia Tech\u0026rsquo;s campus for the first time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECollege of Computing attendees participated in sessions and talks about their fields of interest, interviewed for jobs and internships with top tech companies (some even receiving job offers on the spot), and gathered at an all-GT Computing reception on Thursday, Oct. 5. The reception included a surprise visit from Charles Isbell, executive associate dean and professor in the College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Attending Grace Hopper with Georgia Tech was a great opportunity to meet lots of talented women in tech and to hear their stories and experiences, ups and downs,\u0026rdquo; said Azade Sanjari, a current OMS CS student from California. \u0026ldquo;Also, I was able to finally meet other students in person from the OMS CS program! We talked about our experiences with our courses and our plans for the future. 
It made me even more determined to complete the program and hopefully, start my career path in machine learning.\u0026quot;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cbr \/\u003E\r\nIf you\u0026rsquo;d like to learn more about the experiences of women in computing at Georgia Tech and the significance of Grace Hopper, you can view our \u003Ca href=\u0022https:\/\/youtu.be\/YABHaUePscU\u0022\u003E#SheisGTComputing video\u003C\/a\u003E or explore \u003Ca href=\u0022https:\/\/anitab.org\/\u0022\u003Ehttps:\/\/anitab.org\/\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A group of 57 College of Computing students recently traveled to Orlando, Fla., to attend the 2017 Grace Hopper Celebration of Women in Computing (GHC) as representatives of Georgia Tech."}],"uid":"27998","created_gmt":"2017-10-18 19:50:51","changed_gmt":"2017-10-18 19:52:31","author":"Brittany Aiello","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-10-18T00:00:00-04:00","iso_date":"2017-10-18T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"597581":{"id":"597581","type":"image","title":"GHC 2017 Group Photo","body":null,"created":"1508356324","gmt_created":"2017-10-18 19:52:04","changed":"1508356324","gmt_changed":"2017-10-18 19:52:04","alt":"GHC 2017 Group Photo","file":{"fid":"227792","name":"ghc-17-group-photo.jpg","image_path":"\/sites\/default\/files\/images\/ghc-17-group-photo.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ghc-17-group-photo.jpg","mime":"image\/jpeg","size":803659,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ghc-17-group-photo.jpg?itok=aKeM7nBs"}}},"media_ids":["597581"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1305","name":"Georgia Tech Academic Advising Network (GTAAN)"},{"id":"1299","name":"GVU 
Center"},{"id":"431631","name":"OMS"}],"categories":[],"keywords":[{"id":"172628","name":"GHC"},{"id":"8471","name":"grace hopper"},{"id":"45501","name":"Grace Hopper Celebration"},{"id":"175977","name":"Grace Hopper 2017"},{"id":"8469","name":"women in computing"},{"id":"175978","name":"#sheisgtcomputing"},{"id":"825","name":"Ayanna Howard"},{"id":"175979","name":"jennifer whitlow"},{"id":"66341","name":"OMS CS"},{"id":"69631","name":"Online Master of Science in Computer Science"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBrittany Aiello\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOMS CS Communications\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["baiello@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"597147":{"#nid":"597147","#data":{"type":"news","title":"Foley Finalist Pavalanathan Turns Chaos into Computing Success","body":[{"value":"\u003Cp\u003EAs a child, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~upavalan\/\u0022\u003E\u003Cstrong\u003EUmashanthi Pavalanathan\u003C\/strong\u003E\u003C\/a\u003E had a morning routine.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe and the other four members of her immediate family would wake up and get ready. As a group, they would walk out of the house in the direction of her school, not knowing whether all five would meet again at the end of the day.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAn uncertainty foreign to most growing up inside the United States, it was a way of life for Pavalanathan, who grew up during the height of a violent civil war in Sri Lanka that claimed the lives of more than 100,000 and displaced nearly one million.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It just happened,\u0026rdquo; says Pavalanathan of the sporadic bombing that affected the northern area of Sri Lanka, where she lived. 
\u0026ldquo;It was normal. It was part of our life. We couldn\u0026rsquo;t ... [just] stay home and stop living our lives. We had to go.\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd, so, every day, Pavalanathan and her family went through that same routine, committed to pursuing one thing they felt could help them achieve a better future: an education.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat commitment put Pavalanathan on a path that has led her to Georgia Tech\u0026rsquo;s School of Interactive Computing, where she has contributed to important research in computational sociolinguistics and was recently named a \u003Ca href=\u0022http:\/\/gvu.gatech.edu\/index.php?q=james-d-foley-gvu-center-endowment\u0022\u003EFoley Scholars\u003C\/a\u003E finalist.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EHope in the Dark\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EThe violence began in 1983 and lasted until 2009. Pavalanathan was born in 1986 and didn\u0026rsquo;t move to the United States until 2011, meaning the vast majority of her life was spent living under these tenuous circumstances.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was scary,\u0026rdquo; she admits. \u0026ldquo;It was just the uncertainty. To see friends and family members, and you don\u0026rsquo;t know what\u0026rsquo;s going to happen.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne thing she had to look forward to, though, was her education. It was a common point of emphasis among parents of Tamil families living in northern Sri Lanka: Vigorously pursue an education in order to advance to a university with the hope of preparing for a better future when the country regains peace and stability.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Even with the hopelessness, there was some hope that someday things can change through education,\u0026rdquo; says Pavalanathan. \u0026ldquo;So, we had a goal. 
Even though there were bombings, and friends and neighbors were being killed, we had a goal.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere were challenges. Beyond the obvious \u0026ndash; the bombings and displacement \u0026ndash; Pavalanathan also grew up without electricity until she was 12 years old. They had kerosene lamps they would use at night. The scarcity led to innovation, she says, as people would come up with novel ways to limit the amount of kerosene being used.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe was displaced for an extended period of time once in 1995, when she was 9 years old. A heavy attack forced her family and many others out of their homes for about six months. Living as a refugee within her own country, she attended school in the evenings to keep up with everyone else her age.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlong the way, she was introduced to computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBefore high school, she saw her first computer at an exhibition at a university. There, she learned about the internet.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was excited about it,\u0026rdquo; she says. \u0026ldquo;That was something I enjoyed.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen she was about 12 years old, her family had electricity for the first time. It wasn\u0026rsquo;t available for 24 hours a day, so when her family got its first computer two years later she was only able to use it at certain times of the day.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The dial-up was faster at night, so I would stay up late and try to do as much as I could,\u0026rdquo; she said. \u0026ldquo;I enjoyed solving problems in that way. That\u0026rsquo;s when I knew I wanted to do something in computing.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EA Helpful Challenge\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EThe story may have ended there, if not for the high school she attended in Sri Lanka. 
It was a missionary school that focused on more than just standard education.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere were extracurricular activities like sports and fine arts, things that pushed Pavalanathan to be more outgoing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I think I realized that [as an undergraduate] when I met students from other top schools in my hometown known for good grades that there was a difference in the way I was brought up,\u0026rdquo; says Pavalanathan. \u0026ldquo;Many could do well in exams, but they couldn\u0026rsquo;t present themselves or go up and speak to people. But I think my school was very influential in giving us the challenge in those areas.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey were vital skills she claims helped her when she made the big decision to come to the United States following her undergraduate studies. Taking that course was considered very much outside the norm in her family.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I asked my dad not to tell that to the relatives, because they were brought up in a strict, tight-knit environment,\u0026rdquo; she says. \u0026ldquo;It was something that wasn\u0026rsquo;t really accepted by everyone.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHowever, when she made it to the United States, first as a visiting scholar at Indiana University and then as a Ph.D. student at Georgia Tech, she found that things came naturally.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe had a number of Ph.D. offers from other schools\u0026nbsp;but says she chose Georgia Tech because of the welcoming and diverse environment in the School of Interactive Computing. 
Also key to her decision was the relationship she established with her advisor, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/jacob-eisenstein\u0022\u003E\u003Cstrong\u003EJacob Eisenstein\u003C\/strong\u003E\u003C\/a\u003E, and the research she was able to pursue.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EPursuing Impactful Research\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EPavalanathan\u0026rsquo;s research is focused on the field of \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/content\/social-computing-computational-journalism\u0022\u003Ecomputational sociolinguistics\u003C\/a\u003E, a fusion between computer science, social computing, and natural language processing that studies the relationship between language and society in a computational way.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESociolinguists have long studied the impact context has on the development of language, but only recently have they had large online social systems like Facebook, Twitter, and others to observe natural communication on a large scale.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPavalanathan is interested in studying why and how people say the things they do in a given context.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When we speak, we have non-verbal cues,\u0026rdquo; says Pavalanathan. \u0026ldquo;Those don\u0026rsquo;t exist in writing, so we are trying to invent new ways.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Say that they\u0026rsquo;re happy or sad or want to argue. Sometimes they\u0026rsquo;ll use all capitals or punctuation. On Twitter, you\u0026rsquo;ll see repeating characters. That\u0026rsquo;s not just random. There is a reason people do this.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAdditionally, she examines how language changes with the audience. 
For example, when someone speaks directly to a peer on Twitter, they will speak in a different manner, likely more informal, than they would to a broader audience.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We see that in face-to-face communication, as well,\u0026rdquo; she says.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne paper published last year examined how the introduction of emojis on Twitter changed writing styles compared with the older emoticons.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ultimate goal of the research is to improve language tools to make them more aware of linguistic patterns in different social contexts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;But we\u0026rsquo;re still a long way from that,\u0026rdquo; says Pavalanathan. \u0026ldquo;We are trying to understand the patterns of variation in online language, and this could potentially help us to improve language processing tools in the future.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work has gotten her recognized as a finalist in the 2017 Foley Scholars program. Winners will be announced at the \u003Ca href=\u0022http:\/\/gvu.gatech.edu\/gvu-25-program\u0022\u003EGVU 25\u003Csup\u003Eth\u003C\/sup\u003E Anniversary\u003C\/a\u003E celebration on Oct. 18 at the Tech Square Research Building.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was a nice surprise to be selected as a finalist,\u0026rdquo; she says. 
\u0026ldquo;It really validates the work that we\u0026rsquo;ve done.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Growing up during height of civil war in Sri Lanka, IC student Umashanthi Pavalanathan pursues success in education."}],"uid":"33939","created_gmt":"2017-10-10 12:09:24","changed_gmt":"2017-10-12 19:51:52","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-10-10T00:00:00-04:00","iso_date":"2017-10-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"597322":{"id":"597322","type":"image","title":"Umashanthi Pavalanathan update","body":null,"created":"1507837849","gmt_created":"2017-10-12 19:50:49","changed":"1507837849","gmt_changed":"2017-10-12 19:50:49","alt":"Umashanthi Pavalanathan","file":{"fid":"227686","name":"umashanthi_main_updated.jpg","image_path":"\/sites\/default\/files\/images\/umashanthi_main_updated.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/umashanthi_main_updated.jpg","mime":"image\/jpeg","size":33126,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/umashanthi_main_updated.jpg?itok=EvWkizut"}}},"media_ids":["597322"],"related_links":[{"url":"http:\/\/gvu.gatech.edu\/index.php?q=james-d-foley-gvu-center-endowment","title":"James D. 
Foley Scholars Program"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"175317","name":"umashanthi pavalanathan"},{"id":"175331","name":"Foley Scholars Program"},{"id":"175862","name":"computational sociolinguistics"},{"id":"111941","name":"Jacob Eisenstein; Twitter"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"596987":{"#nid":"596987","#data":{"type":"news","title":"Seeing is Believing: Georgia Tech Becoming a Leader in Visualization, Visual Analytics","body":[{"value":"\u003Cp\u003EFor a long time, School of Interactive Computing (IC) Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/john-stasko\u0022\u003E\u003Cstrong\u003EJohn Stasko\u003C\/strong\u003E\u003C\/a\u003E was information visualization and visual analytics at Georgia Tech. 
After joining the faculty in 1989, he spent the better part of two decades as a one-man shop, teaching and leading research with just a handful of graduate students at a time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERecent years, however, have seen a steep rise in interest in the field, and Georgia Tech has positioned itself as a national leader.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat leadership is again on display this week at \u003Ca href=\u0022http:\/\/ieeevis.org\/\u0022\u003EIEEE VIS 2017\u003C\/a\u003E in Phoenix, Ariz., where Georgia Tech is presenting six conference papers, two journal articles, six workshop papers, and six posters across the multiple co-located conferences \u0026ndash; Visual Analytics Science and Technology (VAST), Information Visualization (InfoVis), and Scientific Visualization (SciVis), among others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Most universities have a vis person,\u0026rdquo; Stasko explained. \u0026ldquo;Maybe one. There are a few places that have more than that. With the faculty and resources we have now, I think we\u0026rsquo;re among the biggest presences out there. At the visualization conferences, people know about us. 
They know we\u0026rsquo;re a force.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003ERising Interest Leads to Diverse Research\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EThere are a number of indicators that point to the rising emphasis in the field, both in academia and beyond.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor one, there has been a tremendous growth in course enrollments at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If that\u0026rsquo;s an indicator of interest, then, yes, the appetite is definitely there,\u0026rdquo; said Associate Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/rahul-basole\u0022\u003E\u003Cstrong\u003ERahul Basole\u003C\/strong\u003E\u003C\/a\u003E, who joined IC in 2012.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe CS 4460, Introduction to Information Visualization, class is taught every semester \u0026ndash; spring, summer, and fall \u0026ndash; frequently garnering over 100 students in each session.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;And there\u0026rsquo;s even higher demand than that,\u0026rdquo; Stasko said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBeyond that, though, there are endless fields that utilize or could utilize expertise in visual analytics and information visualization, from health care to financial technology, sports to public policy, international affairs, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Data can come from anything,\u0026rdquo; said IC Assistant Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/alex-endert\u0022\u003E\u003Cstrong\u003EAlex Endert\u003C\/strong\u003E\u003C\/a\u003E, who came to Georgia Tech in 2013. \u0026ldquo;More and more domains are becoming data-driven. They\u0026rsquo;re collecting data, and they\u0026rsquo;re saying, \u0026lsquo;How do we make sense of this? 
What do I know now that I didn\u0026rsquo;t know before?\u0026rsquo; I think that\u0026rsquo;s where vis plays a big role.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith the support of former IC school chair Annie Ant\u0026oacute;n and others, the Georgia Tech Visualization Lab grew five-fold over the course of the past decade.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EProfessor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/james-foley\u0022\u003E\u003Cstrong\u003EJim Foley\u003C\/strong\u003E\u003C\/a\u003E began working in information visualization with Stasko about 10 years ago, focusing on teaching the 4460 undergraduate course. Basole, Endert, and School of Computational Science \u0026amp; Engineering (CSE) Assistant Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/polo-chau\u0022\u003E\u003Cstrong\u003EPolo Chau\u003C\/strong\u003E\u003C\/a\u003E joined the mix shortly thereafter. Others, like CSE Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/haesun-park\u0022\u003E\u003Cstrong\u003EHaesun Park\u003C\/strong\u003E\u003C\/a\u003E, regularly contribute to research in the field, as well. Each brings what Basole called a \u0026ldquo;slightly different flavor,\u0026rdquo; establishing well-rounded resources to potential students and industry partners.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBasole looked at the field through the lens of enterprise, mapping complex markets and providing organizational-level visualizations. Endert comes at it from angles of human-computer interaction, machine learning, and data mining, among others. 
Stasko saw the skyrocketing amount of available data, fueled by the growth of the internet, and became focused on providing tools to analyze and understand these data sets.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;More students are joining because we have such a diverse set of research areas that are complementary to each other,\u0026rdquo; Basole said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Georgia Tech has the perfect culture for collaborative research,\u0026rdquo; Chau added. \u0026ldquo;Students are encouraged to collaborate to innovate across disciplines. Faculty can easily work across schools and colleges and with industry partners.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDiverse expertise means diverse areas of study for students at every level, as well. Six classes examining different areas of information visualization and visual and data analytics are offered to both undergraduate and graduate students at Georgia Tech:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003ECS 4460 \u0026ndash; Introduction to Information Visualization, offered every semester\u003C\/li\u003E\r\n\t\u003Cli\u003ECS 7450 \u0026ndash; Information Visualization, offered every fall\u003C\/li\u003E\r\n\t\u003Cli\u003ECS 8803 CV \u0026ndash; Data Visualization: Principles and Applications, a new course that began last spring primarily for Scheller College of Business MBA students and those in the one-year Data Analytics master\u0026rsquo;s program\u003C\/li\u003E\r\n\t\u003Cli\u003ECS 8803 VDA \u0026ndash; Visual Data Analysis\u003C\/li\u003E\r\n\t\u003Cli\u003ECS 8803 VEA \u0026ndash; Visual Enterprise Analytics\u003C\/li\u003E\r\n\t\u003Cli\u003EIn CSE, Chau offers a combined undergraduate and graduate course, CX 4242\/CSE 6242 \u0026ndash; Data and Visual Analytics \u0026ndash; that has between 150 and 200 students per term, as well.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I can confidently say that if you come here interested in visualization and you take the number of courses that we have available, you are 
more than likely to leave with a well-rounded education in what it means to do visualization,\u0026rdquo; Endert said. \u0026ldquo;I don\u0026rsquo;t know many other universities that can make that claim.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EOpportunities for Industry\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EBeyond the resources the College offers in the academic setting, which also include a spacious lab and equipment for use by students and researchers, faculty members see a future that could also face outward to the greater Atlanta landscape.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBasole pointed to the growth in associated industry in Atlanta \u0026ndash; like the NCR Corporation, which is building a new headquarters in Technology Square, and audit, tax, and advisory firm KPMG, which is opening an innovation hub in Midtown \u0026ndash; as opportunity for collaboration.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd then, of course, there is a rise in visualization in areas like public policy and the news media.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;For them to understand all the things we are doing in here, that would be incredibly beneficial,\u0026rdquo; Basole said. \u0026ldquo;If industry understood the capabilities we have in analyzing data and making it more accessible to everyone, that would be a win-win for everyone.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the meantime, they will take advantage of the resources they have to lead the way in research that pushes the boundaries of the field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;re in an envious position,\u0026rdquo; Stasko said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We have opportunities coming from inside and outside, industry and government. The ability for us to digest and be able to deliver on that is the biggest challenge. We would love to continue to attract more bright Ph.D. students to the program. 
That is essential, and will allow us to explore areas that haven\u0026rsquo;t really been explored before.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Georgia Tech Visualization Lab has grown by leaps and bounds over the past decade, becoming a national thought leader in the field."}],"uid":"33939","created_gmt":"2017-10-05 13:38:16","changed_gmt":"2017-10-05 13:38:16","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-10-05T00:00:00-04:00","iso_date":"2017-10-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"596985":{"id":"596985","type":"image","title":"Vis lab 2","body":null,"created":"1507210394","gmt_created":"2017-10-05 13:33:14","changed":"1507210394","gmt_changed":"2017-10-05 13:33:14","alt":"Georgia Tech visualization lab","file":{"fid":"227536","name":"Vis Lab.jpg","image_path":"\/sites\/default\/files\/images\/Vis%20Lab.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Vis%20Lab.jpg","mime":"image\/jpeg","size":203791,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Vis%20Lab.jpg?itok=9RpDpxBH"}}},"media_ids":["596985"],"related_links":[{"url":"https:\/\/vis.gatech.edu\/","title":"Georgia Tech Visualization Lab"},{"url":"http:\/\/poloclub.gatech.edu\/cse6242\/2017fall\/","title":"Data and Visual Analytics"},{"url":"http:\/\/va.gatech.edu\/","title":"Visual Analytics Lab"},{"url":"https:\/\/www.cc.gatech.edu\/gvu\/ii\/","title":"Information Interfaces Group"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"175808","name":"georgia tech visualization lab"},{"id":"11632","name":"john 
stasko"},{"id":"112421","name":"alex endert"},{"id":"53931","name":"Rahul Basole"},{"id":"83261","name":"Polo Chau"},{"id":"78531","name":"Jim Foley"},{"id":"10475","name":"Haesun Park"},{"id":"7257","name":"visualization"},{"id":"172922","name":"information visualization"},{"id":"175777","name":"ieee vis 2017"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"596952":{"#nid":"596952","#data":{"type":"news","title":"IC Researchers Earn Test of Time Award for VAST 2007 Paper","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/john-stasko\u0022\u003E\u003Cstrong\u003EJohn Stasko\u003C\/strong\u003E\u003C\/a\u003E and three co-authors were presented with one of five Test of Time awards Tuesday at the \u003Ca href=\u0022http:\/\/ieeevis.org\/\u0022\u003EIEEE VIS 2017\u003C\/a\u003E conference in Phoenix, Ariz., for research presented at the Visual Analytics Science and Technology (VAST) 2007 conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper of note, titled \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~stasko\/papers\/vast07-jigsaw.pdf\u0022\u003E\u003Cem\u003EJigsaw: Supporting Investigative Analysis through Interactive Visualization\u003C\/em\u003E\u003C\/a\u003E, was co-authored by Stasko, \u003Cstrong\u003ECarsten G\u0026ouml;rg\u003C\/strong\u003E, \u003Cstrong\u003EZhicheng Liu\u003C\/strong\u003E, and \u003Cstrong\u003EKanupriya 
Singhal\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team\u0026rsquo;s 2007 research developed a visual analytic system, called Jigsaw, which addresses a challenge investigative analysts face when working with large collections of text documents: Connecting embedded threads of evidence to formulate hypotheses. As the number of documents and concepts in such cases grows larger, making sense of the information becomes more difficult.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJigsaw represents documents and their contents visually in order to help analysts examine reports more efficiently and develop theories about potential actions more quickly. The system performs rudimentary text analysis including sentiment detection, similarity comparison, and clustering, among other tasks, on the documents and then provides multiple interactive visualizations of the documents\u0026rsquo; text. It provides multiple coordinated views with emphasis on visually illustrating connections between entities across the different documents.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This was probably the biggest project in my lab, maybe, over my entire career in terms of how many students were on it,\u0026rdquo; Stasko said. \u0026ldquo;It was a big effort for maybe seven or eight years, and this paper was our first introduction of the idea.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStasko\u0026rsquo;s lab published a number of subsequent papers relating to the system (a list can be found \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/gvu\/ii\/jigsaw\/\u0022\u003Ehere\u003C\/a\u003E).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIEEE VIS is being held Oct. 
1-6 in Phoenix, Ariz., and includes a number of co-located conferences and programs, including IEEE VAST, IEEE Information Visualization, and IEEE Scientific Visualization.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA full account of Georgia Tech\u0026rsquo;s participation at the conference can be found \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/news\/596888\/vis-2017-georgia-tech-visualization-research-expands-new-paths-understanding-data\u0022\u003Ehere\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"School of Interactive Computing Professor John Stasko and three co-authors were presented with one of five Test of Time awards Tuesday at the IEEE VIS 2017 conference in Phoenix, Ariz., for research presented at the VAST 2007 conference."}],"uid":"33939","created_gmt":"2017-10-04 17:57:41","changed_gmt":"2017-10-04 17:57:41","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-10-04T00:00:00-04:00","iso_date":"2017-10-04T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"596950":{"id":"596950","type":"image","title":"IEEE VIS 2017 Test of Time Award 3","body":null,"created":"1507139575","gmt_created":"2017-10-04 17:52:55","changed":"1507139575","gmt_changed":"2017-10-04 17:52:55","alt":"Co-authors Zicheng Liu, Carsten G\u00f6rg, and John Stasko display their\u00a0Test of Time award at IEEE VIS 2017","file":{"fid":"227522","name":"test of time award 
2.jpg","image_path":"\/sites\/default\/files\/images\/test%20of%20time%20award%202.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/test%20of%20time%20award%202.jpg","mime":"image\/jpeg","size":463223,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/test%20of%20time%20award%202.jpg?itok=bfX6WhrJ"}}},"media_ids":["596950"],"related_links":[{"url":"http:\/\/www.cc.gatech.edu\/~stasko\/papers\/vast07-jigsaw.pdf","title":"Jigsaw: Supporting Investigative Analysis through Interactive Visualization"},{"url":"http:\/\/www.ic.gatech.edu\/news\/596888\/vis-2017-georgia-tech-visualization-research-expands-new-paths-understanding-data","title":"Georgia Tech at IEEE VIS 2017"},{"url":"http:\/\/ieeevis.org\/year\/2017\/info\/awards\/test-of-time-awards","title":"IEEE VIS 2017 Test of Time Awards"},{"url":"https:\/\/vis.gatech.edu\/","title":"Georgia Tech Visualization Lab"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"11632","name":"john stasko"},{"id":"166848","name":"School of Interactive Computing"},{"id":"172922","name":"information visualization"},{"id":"175784","name":"vast 2007"},{"id":"175777","name":"ieee vis 2017"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"596888":{"#nid":"596888","#data":{"type":"news","title":"Vis 2017: Georgia Tech Visualization Research Expands New Paths to Understanding Data\u00a0","body":[{"value":"\u003Cp\u003EGeorgia Tech researchers are presenting new techniques and research for information visualization and visual analytics this week, Oct. 1-6, at the IEEE Vis 2017 conference in Phoenix, Ariz.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech research is led by School of Interactive Computing faculty and students, and also includes School of Computational Science and Engineering researchers.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAmong the 20 Georgia Tech papers and posters in the technical program at InfoVis and VAST (Visual Analytics Science and Technology) are those that include a variety of approaches portending a future where data analysis tools will be as commonplace as word processing software.\u0026nbsp; \u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVisualization work, already inherent to many enterprises, is gaining wider adoption and creating a wave of new opportunities and research innovations in the space. 
Visualizations are designed to create interactive representations of data that allow users to explore its many facets and connections in order to gain greater insight into data sets.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESome emerging themes in this year\u0026rsquo;s Georgia Tech work include machine learning methods, new techniques to explore data patterns (including augmented reality), modeling neural networks, and finding connections within graphs, such as for biological systems, network security and finance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJohn Stasko, Interactive Computing, and his co-authors were presented with one of five Test of Time awards Tuesday morning at the plenary ceremony for their research from VAST 2007. The paper, \u003Cem\u003EJigsaw: Supporting Investigative Analysis through Interactive Visualization\u003C\/em\u003E, was co-authored by John Stasko, Carsten G\u0026ouml;rg, Zhicheng Liu, and Kanupriya Singhal.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExplore GT research from Vis 2017 below and come back through the week for a look at the people in Georgia Tech\u0026rsquo;s VIS Lab as well as coverage of the Test of Time Award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u003Cstrong\u003EGT Involvement at IEEE VIS 2017\u003C\/strong\u003E\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAwards\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVAST Test of Time Award\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Jigsaw: Supporting Investigative Analysis through Interactive Visualization\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJohn Stasko, Carsten G\u0026ouml;rg, Zhicheng Liu, and Kanupriya Singhal\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor paper at VAST 2007 
Conference\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EInfoVis\u0026nbsp;Papers\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Orko: Facilitating Multimodal Interaction for Visual Network Exploration and Analysis\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EArjun Srinivasan and John Stasko\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/~stasko\/papers\/infovis17-orko.pdf\u0022\u003Ehttp:\/\/www.cc.gatech.edu\/~stasko\/papers\/infovis17-orko.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EVAST\u0026nbsp;Papers\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Graphiti: Interactive Specification of Attribute-based Edges for Network Modeling and Visualization\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EArjun Srinivasan, Hyunwoo Park, Alex Endert, and Rahul C. Basole\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/arjun010.github.io\/static\/papers\/graphiti-vast-17.pdf\u0022\u003Ehttp:\/\/arjun010.github.io\/static\/papers\/graphiti-vast-17.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMinsuk Kahng, Pierre Andrews, Aditya Kalro, and Polo Chau\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dchau\/papers\/17-vast-activis.pdf\u0022\u003Ehttps:\/\/www.cc.gatech.edu\/~dchau\/papers\/17-vast-activis.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003E(Collaboration with Facebook; deployed on Facebook\u0026rsquo;s machine learning platform)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;VIGOR: Interactive Visual Exploration of Graph Query 
Results\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERobert Pienta, Fred Hohman, Alex Endert, Acar Tamersoy, Kevin Roundy, Chris Gates, Shamkant Navathe, and Polo Chau\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dchau\/papers\/17-vast-vigor.pdf\u0022\u003Ehttps:\/\/www.cc.gatech.edu\/~dchau\/papers\/17-vast-vigor.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003E(Collaboration with Symantec)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Warning, Bias May Occur: A Proposed Approach to Detecting Cognitive Bias in Interactive Visual Analytics\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEmily Wall, Leslie Blaha, Lyndsey Franklin, and Alex Endert\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~ewall9\/media\/papers\/BiasVAST17.pdf\u0022\u003Ehttps:\/\/www.cc.gatech.edu\/~ewall9\/media\/papers\/BiasVAST17.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Podium: Ranking Data Using Mixed-Initiative Visual Analytics\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEmily Wall, Subhajit Das, Ravish Chawla, Bharath Kalidindi, Eli T. 
Brown, and Alex Endert\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~ewall9\/media\/papers\/PodiumVAST17.pdf\u0022\u003Ehttps:\/\/www.cc.gatech.edu\/~ewall9\/media\/papers\/PodiumVAST17.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETVCG (journal paper being presented at VIS)\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;vispubdata.org: A Metadata Collection about IEEE Visualization (VIS) Publications\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPetra Isenberg, Florian Heimerl, Steffen Koch, Tobias Isenberg, Panpan Xu, Charles Stolper, Michael Sedlmair, Jian Chen, Torsten M\u0026ouml;ller, and John T. Stasko\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/~stasko\/papers\/tvcg17-vispubdata.pdf\u0022\u003Ehttp:\/\/www.cc.gatech.edu\/~stasko\/papers\/tvcg17-vispubdata.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Evaluating Interactive Graphical Encodings for Data Visualization\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBahador Saket, Arjun Srinivasan, Eric Ragan, and Alex Endert\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/bahadorsaket.com\/publication\/encodingsPaper.pdf\u0022\u003Ehttp:\/\/bahadorsaket.com\/publication\/encodingsPaper.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBlog Post: \u003Ca href=\u0022https:\/\/goo.gl\/YwkjqX\u0022\u003Ehttps:\/\/goo.gl\/YwkjqX\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EPosters\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Equity Monitor: Visualizing Attributes of Health Inequity in Atlanta\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EXiaoxue Zhang, Alex Godwin, John Stasko\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022http:\/\/www.cc.gatech.edu\/~stasko\/papers\/vis17-poster-health.pdf\u0022\u003Ehttp:\/\/www.cc.gatech.edu\/~stasko\/papers\/vis17-poster-health.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;CricVis: Interactive Visual Exploration and Analysis of Cricket Matches\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAyan Das, Arjun Srinivasan, John Stasko\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/~stasko\/papers\/vis17-poster-cricket.pdf\u0022\u003Ehttp:\/\/www.cc.gatech.edu\/~stasko\/papers\/vis17-poster-cricket.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Atomic Operations for Specifying Graph Visualization Techniques\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECharles D. Stolper, Will Price, Matt Sanford, Duen Horng Chau, John Stasko\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/~stasko\/papers\/vis17-poster-glo.pdf\u0022\u003Ehttp:\/\/www.cc.gatech.edu\/~stasko\/papers\/vis17-poster-glo.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;3D Exploration of Graph Layers via Vertex Cloning\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJames Abello, Fred Hohman, Duen Horng (Polo) Chau\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dchau\/papers\/17-vis-playground.pdf\u0022\u003Ehttps:\/\/www.cc.gatech.edu\/~dchau\/papers\/17-vis-playground.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;High-Recall Document Retrieval from Large-Scale Noisy Documents via Visual Analytics based on Targeted Topic Modeling\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHannah Kim, Jaegul Choo, Alex Endert, Haesun Park\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022https:\/\/www.cc.gatech.edu\/~aendert3\/resources\/Kim2017HighRecall.pdf\u0022\u003Ehttps:\/\/www.cc.gatech.edu\/~aendert3\/resources\/Kim2017HighRecall.pdf\u003C\/a\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;PredVis: Interaction Techniques for Time Series Prediction\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESakshi Sanjay Pratap and Alex Endert\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~aendert3\/resources\/Pratap2017PredVis.pdf\u0022\u003Ehttps:\/\/www.cc.gatech.edu\/~aendert3\/resources\/Pratap2017PredVis.pdf\u003C\/a\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWorkshop Papers\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u0026ldquo;\u003C\/strong\u003E\u003Ca href=\u0022https:\/\/scholar.google.com\/citations?view_op=view_citation\u0026amp;hl=en\u0026amp;user=y8DBOyMAAAAJ\u0026amp;citation_for_view=y8DBOyMAAAAJ:uWiczbcajpAC\u0022\u003EVisAR: Bringing Interactivity to Static Data Visualizations through Augmented Reality\u003C\/a\u003E\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETaeheon Kim, Bahador Saket, Alex Endert, Blair MacIntyre\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWorkshop on Immersive Analytics\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/bahadorsaket.com\/publication\/VisAR.pdf\u0022\u003Ehttp:\/\/bahadorsaket.com\/publication\/VisAR.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A Viz of Ice and Fire: Exploring Entertainment Video Using Color and Dialogue\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFred Hohman, Sandeep Soni, Ian Stewart, and John Stasko\u003C\/p\u003E\r\n\r\n\u003Cp\u003E2nd Workshop on Visualization for the Digital Humanities\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022https:\/\/www.cc.gatech.edu\/~stasko\/papers\/vis4dh17-thrones.pdf\u0022\u003Ehttps:\/\/www.cc.gatech.edu\/~stasko\/papers\/vis4dh17-thrones.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Affordances of Input Modalities for Visual Data Exploration in Immersive Environments\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESriram Karthik Badam, Arjun Srinivasan, Niklas Elmqvist, and John Stasko\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWorkshop on Immersive Analytics\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/~john.stasko\/papers\/immersive17-input.pdf\u0022\u003Ehttp:\/\/www.cc.gatech.edu\/~john.stasko\/papers\/immersive17-input.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Designing a Visual Analytics System for Industry-Scale Deep Neural Network Models\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMinsuk Kahng, Pierre Andrews, Aditya Kalro, and Polo Chau\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWorkshop on Visual Analytics for Deep Learning\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Four Perspectives on Human Bias in Visual Analytics\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEmily Wall, Leslie Blaha, Celeste Paul, Kris Cook, and Alex Endert\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDECISIVe: Workshop on Dealing with Cognitive Biases in Visualizations\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~ewall9\/media\/papers\/BiasDECISIVe17.pdf\u0022\u003Ehttps:\/\/www.cc.gatech.edu\/~ewall9\/media\/papers\/BiasDECISIVe17.pdf\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Designing Breadth-Oriented Data Exploration for Mitigating Cognitive Biases\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPo-Ming Law, and Rahul 
Basole\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDECISIVe: Workshop on Dealing with Cognitive Biases in Visualizations\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDoctoral Colloquium\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlex Godwin\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBahador Saket\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEmily Wall\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOrganizing Committee\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERahul Basole - InfoVis Program Committee\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERahul Basole - VIS Supporters Co-Chair\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJohn Stasko - VAST Steering Committee\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJohn Stasko - InfoVis Program Committee\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlex Endert - VIS Panels Co-Chair\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlex Endert - VAST Program Committee\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlex Endert - Workshop on Immersive Analytics Co-Organizer\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlex Endert - DECISIVe 2017: 2nd Workshop on Dealing with Cognitive Biases in Visualizations Co-Organizer\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBahador Saket - Workshop on Immersive Analytics Co-Organizer\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers are presenting new techniques and research for information visualization and visual analytics this week, Oct. 1-6, at the IEEE Vis 2017 conference in Phoenix, Ariz. 
Georgia Tech research is led by School of Interactive Computing faculty and students, and also includes School of Computational Science and Engineering researchers.\u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers are presenting new techniques and research for information visualization and visual analytics this week, Oct. 1-6, at the IEEE Vis 2017 conference in Phoenix, Ariz.\u00a0"}],"uid":"27592","created_gmt":"2017-10-03 17:44:55","changed_gmt":"2017-10-03 17:48:59","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-10-03T00:00:00-04:00","iso_date":"2017-10-03T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"596889":{"id":"596889","type":"image","title":"Vis 2017 faculty","body":null,"created":"1507052818","gmt_created":"2017-10-03 17:46:58","changed":"1507052818","gmt_changed":"2017-10-03 17:46:58","alt":"","file":{"fid":"227489","name":"faculty at vis 2017.jpg","image_path":"\/sites\/default\/files\/images\/faculty%20at%20vis%202017.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/faculty%20at%20vis%202017.jpg","mime":"image\/jpeg","size":153539,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/faculty%20at%20vis%202017.jpg?itok=7b6KUt3Q"}}},"media_ids":["596889"],"groups":[{"id":"1299","name":"GVU Center"},{"id":"47223","name":"College of Computing"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJoshua Preston\u003C\/p\u003E\r\n\r\n\u003Cp\u003Ejpreston@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"596563":{"#nid":"596563","#data":{"type":"news","title":"GT Computing Takes the Spotlight at Tapia 2017","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EGT Computing Takes the Spotlight at Tapia 2017\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAtlanta played host to the \u003Ca href=\u0022http:\/\/tapiaconference.org\/\u0022\u003E2017 Richard Tapia Celebration of Diversity in Computing\u003C\/a\u003E, held Sept. 20-23 in the downtown Hyatt Regency, and once again a strong contingent of GT Computing students, faculty, and staff represented the College of Computing and all its efforts to build equity of access to computing education.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis year, however, those efforts literally took center stage, as the College received the inaugural \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/596289\/award-highlights-college-computings-efforts-grow-diversity-cs\u0022\u003EUniversity Award for Retention of Minorities and Students with Disabilities in Computer Science\u003C\/a\u003E. The awarded recognizes U.S. 
institutions that have demonstrated a commitment to recruiting and retaining students from underrepresented groups in undergraduate computing programs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAwarded by the \u003Ca href=\u0022http:\/\/www.cmd-it.org\/\u0022\u003ECenter for Minorities and People with Disabilities in IT\u003C\/a\u003E (CMD-IT), the honor was accepted on behalf of the College by Executive Associate Dean Charles Isbell, Assistant Dean Cedric Stallworth, and Director of Computing Enrollment Jennifer Whitlow; however, it recognized the work of many more Georgia Tech faculty and staff\u0026mdash;several of whom also had official roles to play at Tapia.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EProfessor Mark Guzdial and Research Scientist Barb Ericson, for example, helped build several of the programs that earned Georgia Tech the award, such as the College\u0026rsquo;s undergraduate program in computational media and the Georgia Computes! initiative to bring CS education into more of Georgia\u0026rsquo;s K-12 schools. Both Guzdial and Ericson spoke in separate sessions as part of the official Tapia program (\u003Cem\u003Esee below\u003C\/em\u003E).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOverall, it was quite an introduction to GT Computing\u0026rsquo;s diversity for the undergraduate and graduate students who attended Tapia, which included 17 online M.S. in Computer Science (OMS CS) students traveling from around the country and the world. 
In all, some 57 students made up Georgia Tech\u0026rsquo;s delegation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn fact, two OMS CS students (one former and one current) participated in a panel with Isbell titled, \u0026ldquo;\u003Ca href=\u0022http:\/\/tapiaconference.org\/schedule\/thursday-september-21-2017\/130pm-230pm-1\/how-can-digital-degrees-make-higher-education-more-accessible\/\u0022\u003EHow Can Digital Degrees Make Higher Education More Accessible?\u003C\/a\u003E\u0026rdquo; Program alumnus Miguel Morales, a 2017 graduate, joined current student Tia Pope in sharing their experiences as students from underrepresented groups.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIndeed, the entire Tapia program was dotted with GT Computing speakers, including:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EGuzdial, who took part in the panel, \u0026ldquo;\u003Ca href=\u0022http:\/\/tapiaconference.org\/schedule\/thursday-september-21-2017\/1045am-1215pm\/increasing-diversity-in-computing-sharing-of-good-practices\/\u0022\u003EIncreasing Diversity in Computing: Sharing of Good Practices\u003C\/a\u003E\u0026rdquo;\u003C\/li\u003E\r\n\t\u003Cli\u003EProfessor Ayanna Howard (School of Electrical \u0026amp; Computer Engineering), in the panels, \u0026ldquo;\u003Ca href=\u0022http:\/\/tapiaconference.org\/schedule\/thursday-september-21-2017\/1045am-1215pm\/entrepreneurial-skills-thinking\/\u0022\u003EEntrepreneurial Skills \u0026amp; Thinking\u003C\/a\u003E\u0026rdquo; and \u0026ldquo;\u003Ca href=\u0022http:\/\/tapiaconference.org\/schedule\/friday-september-22-2017\/330pm-500pm\/fairness-accountability-and-transparency-in-algorithmic-decision-making\/\u0022\u003EFairness, Accountability, and Transparency in Algorithmic Decision Making\u003C\/a\u003E\u0026rdquo;\u003C\/li\u003E\r\n\t\u003Cli\u003EEricson, as moderator of the workshop, \u0026ldquo;\u003Ca 
href=\u0022http:\/\/tapiaconference.org\/schedule\/thursday-september-21-2017\/1045am-1215pm\/how-to-use-and-customize-free-interactive-ebooks\/\u0022\u003EHow to Use and Customize Free Interactive Ebooks\u003C\/a\u003E\u0026rdquo;\u003C\/li\u003E\r\n\t\u003Cli\u003EResearch Scientist Lorna Rivera (Center for Education Integrating Science, Mathematics \u0026amp; Computing), on the panel, \u0026ldquo;\u003Ca href=\u0022http:\/\/tapiaconference.org\/schedule\/thursday-september-21-2017\/130pm-230pm-1\/using-advanced-computing-to-affect-social-change\/\u0022\u003EUsing Advanced Computing to Affect Social Change\u003C\/a\u003E\u0026rdquo;\u003C\/li\u003E\r\n\t\u003Cli\u003EResearch Scientist Rosa Arriaga, on the panel, \u0026ldquo;\u003Ca href=\u0022http:\/\/tapiaconference.org\/schedule\/friday-september-22-2017\/130pm-300pm\/strategies-for-human-human-interaction\/\u0022\u003EStrategies for Human-Human Interaction\u003C\/a\u003E\u0026rdquo;\u003C\/li\u003E\r\n\t\u003Cli\u003EAssociate Professor Ada Gavriloska, on the panel, \u0026ldquo;\u003Ca href=\u0022http:\/\/tapiaconference.org\/schedule\/friday-september-22-2017\/130pm-300pm\/internet-of-things\/\u0022\u003EData Challenges for the Internet of Things\u003C\/a\u003E\u0026rdquo;\u003C\/li\u003E\r\n\t\u003Cli\u003EM.S. 
student Nicole de Vries, in the workshop, \u0026ldquo;\u003Ca href=\u0022http:\/\/tapiaconference.org\/schedule\/friday-september-22-2017\/330pm-500pm\/using-why-to-build-a-better-what-a-human-centered-approach-to-systems-and-data\/\u0022\u003EUsing \u0026lsquo;Why\u0026rsquo; to Build a Better \u0026lsquo;What\u0026rsquo;: A Human-Centered Approach to Systems \u0026amp; Data\u003C\/a\u003E\u0026rdquo;\u003C\/li\u003E\r\n\t\u003Cli\u003EIsbell, on the panel, \u0026ldquo;\u003Ca href=\u0022http:\/\/tapiaconference.org\/schedule\/friday-september-22-2017\/330pm-500pm\/national-scale-committee-the-process-and-the-requirements\/\u0022\u003ENational-Scale Committee: The Process \u0026amp; the Requirements\u003C\/a\u003E\u0026rdquo;\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EOn Thursday evening, the College hosted a reception on campus for its Tapia attendees and Atlanta-area alumni to celebrate GT Computing\u0026rsquo;s leadership in diversity. Isbell and Stallworth both discussed the past efforts that had won Georgia Tech the inaugural CMD-IT award and gave an attendees-only preview of the exciting work that lay ahead.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;What is the role for Georgia Tech in ensuring that students at the K-12 level have equity of access to a computing education?\u0026rdquo; Stallworth said in his remarks. 
\u0026ldquo;How can we help all kids in the state of Georgia and beyond tap into the incredible opportunities that accompany that kind of an education?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOn Friday, several OMS CS attendees took an afternoon break from Tapia to enjoy their own personalized campus tour\u0026mdash;for most, the first (and possibly only) opportunity they would have to see Georgia Tech firsthand.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;OMS CS has been my first online educational experience, and one worry I had was, what kind of community would I find in the program?\u0026rdquo; said OMS CS student Shipra De, who traveled from Las Vegas to attend Tapia. \u0026ldquo;The collaboration among students has been so encouraging, with many going above and beyond in their efforts to help each other. Then there\u0026rsquo;s the fact that they\u0026rsquo;re doing all this for people they\u0026rsquo;ve never met!\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In Ecuador, there\u0026rsquo;s not as much of a culture of improvement for technical skills or software engineers, so OMS CS was just what I needed,\u0026rdquo; said Romeo Cabrera, who traveled from his hometown of Guayaquil, Ecuador. \u0026ldquo;This program has improved my life in so many ways, not just because of the technical experience\u0026mdash;its flexibility has also allowed me time to share with my family. 
I can\u0026rsquo;t say enough about it.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Atlanta was this year\u0027s host to the Tapia Celebration of Diversity in Computing and the Georgia Tech College of Computing represented its expansive community in full-force."}],"uid":"27998","created_gmt":"2017-09-27 18:47:17","changed_gmt":"2017-09-27 18:51:12","author":"Brittany Aiello","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-09-27T00:00:00-04:00","iso_date":"2017-09-27T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"596565":{"id":"596565","type":"image","title":"Tapia 2017 - Undergraduate Scholar Students","body":null,"created":"1506538191","gmt_created":"2017-09-27 18:49:51","changed":"1506538191","gmt_changed":"2017-09-27 18:49:51","alt":"Tapia 2017 Undergraduate Scholar Students","file":{"fid":"227371","name":"Screen Shot 2017-09-27 at 2.55.54 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202017-09-27%20at%202.55.54%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202017-09-27%20at%202.55.54%20PM.png","mime":"image\/png","size":2911824,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202017-09-27%20at%202.55.54%20PM.png?itok=f0npohnc"}},"596566":{"id":"596566","type":"image","title":"Tapia 2017 - OMS CS Scholar Students","body":null,"created":"1506538245","gmt_created":"2017-09-27 18:50:45","changed":"1506538245","gmt_changed":"2017-09-27 18:50:45","alt":"Tapia 2017 - OMS CS Scholar Students","file":{"fid":"227373","name":"Screen Shot 2017-09-22 at 2.10.48 
PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202017-09-22%20at%202.10.48%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202017-09-22%20at%202.10.48%20PM.png","mime":"image\/png","size":3465913,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202017-09-22%20at%202.10.48%20PM.png?itok=C4h4hqqb"}}},"media_ids":["596565","596566"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1305","name":"Georgia Tech Academic Advising Network (GTAAN)"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"}],"categories":[],"keywords":[{"id":"175368","name":"Tapia Celebration of Diversity in Computing"},{"id":"170724","name":"TAPIA"},{"id":"10664","name":"charles isbell"},{"id":"10666","name":"cedric stallworth"},{"id":"66341","name":"OMS CS"},{"id":"489","name":"atlanta"},{"id":"736","name":"diversity"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EMike Terrazas\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["mterraza@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"596144":{"#nid":"596144","#data":{"type":"news","title":"IC Faculty, Alumni Awarded with 10-Year Impact Award at Ubicomp 2017","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing faculty and alumni were among a group of five recognized for the 10-Year Impact Award at the \u003Ca href=\u0022http:\/\/ubicomp.org\/ubicomp2017\/\u0022\u003EACM International Joint Conference on Pervasive and Ubiquitous Computing\u003C\/a\u003E (Ubicomp 2017) for their paper titled \u003Ca href=\u0022https:\/\/homes.cs.washington.edu\/~shwetak\/papers\/ubicomp2007_flick.pdf\u0022\u003E\u003Cem\u003EAt the Flick of a Switch: Detecting and Classifying Unique Electrical 
Events on the Residential Power Line\u003C\/em\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper, which presented an approach that uses a single plug-in sensor to detect a variety of electrical events throughout the home, earned Best Paper and Best Presentation honors at Ubicomp 2007. This year, the paper was one of three awarded at Ubicomp 2017 for having outstanding influence over the past 10 years.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECo-authors on the paper included current Georgia Tech Professor \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E, former Georgia Tech postdoctoral researcher and Research Scientist \u003Cstrong\u003EMatt Reynolds\u003C\/strong\u003E, alumni \u003Cstrong\u003EShwetak Patel\u003C\/strong\u003E and \u003Cstrong\u003EJulie Kientz\u003C\/strong\u003E, and \u003Cstrong\u003ETom Robertson\u003C\/strong\u003E, who worked in Abowd\u0026rsquo;s lab for two years around the time of publication.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u25ba \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/Ubicomp-ISWC2017\/Dashboard1?:embed=y\u0026amp;:display_count=no\u0026amp;publish=yes\u0026amp;:showVizHome=no\u0022\u003EInteractive Graphic of Ubicomp\/ISWC 2017 Papers Program\u003C\/a\u003E\u003C\/h4\u003E\r\n\r\n\u003Ch4\u003E\u0026nbsp;\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EReynolds, Patel, and Kientz now each hold faculty positions at the University of Washington.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo achieve desired results, the researchers applied machine learning techniques to recognize electrically noisy events such as turning on or off a particular light switch, a television set, or an electric stove. They tested their system in one home for several weeks and in five homes for one week each to evaluate the system performance over time in different types of houses. 
Results indicated that it is possible to learn and classify various electrical events with accuracies ranging from 85 to 90 percent.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe method has become known as Infrastructure-Mediated Sensing, a concept developed and commercialized in a variety of subsequent ways by Patel and Reynolds.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUbicomp 2017 took place earlier this month in conjunction with the \u003Ca href=\u0022https:\/\/iswc2017.semanticweb.org\/\u0022\u003EACM International Symposium on Wearable Computing\u003C\/a\u003E in Maui, Hawaii.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech received another notable accolade at the co-located ISWC 2017 conference, which shares a technical program with Ubicomp. The Jury Prize for Best Paper and Entry in the aesthetics category of the \u003Ca href=\u0022http:\/\/iswc.net\/iswc17\/program\/designexhibition.html\u0022\u003EDesign Exhibition\u003C\/a\u003E was awarded to \u003Ca href=\u0022http:\/\/www.clintzeagler.com\/2017\/03\/12\/le-monstre-from-characters\/\u0022\u003ELe Monstr\u0026eacute;\u003C\/a\u003E, an interactive participatory performance costume developed by Ph.D. HCC student and research scientist \u003Cstrong\u003EClint Zeagler\u003C\/strong\u003E. The team also included IMTC research scientists \u003Cstrong\u003EScott Gilliland\u003C\/strong\u003E and \u003Cstrong\u003ELaura Levy\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis year, Georgia Tech had 11 papers accepted at the conference. 
Titles, authors, and available links for each can be found below.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/ws.iat.sfu.ca\/papers\/passivehapticslearning.pdf\u0022\u003EPassive Haptic Training to Improve Speed and Performance on a Keypad\u003C\/a\u003E (Caitlyn Seim, Nick Doering, Yang Zhang, Wolfgang Stuerzlinger, Thad Starner)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EFingerSound: Recognizing Unistroke Thumb Gestures Using a Ring (Cheng Zhang, Anandghan Waghmare, Pranav Kundra, Yiming Pu, Scott Gilliland, Thomas Ploetz, Thad Starner, Omer Inan, Gregory Abowd)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/delivery.acm.org\/10.1145\/3140000\/3132030\/a108-vigil-hayes.pdf?ip=128.61.126.225\u0026amp;id=3132030\u0026amp;acc=OPEN\u0026amp;key=A79D83B43E50B5B8%2E5E2401E94B5C98E0%2E4D4702B0C3E38B35%2E4201BFF1B9FFDE9A\u0026amp;CFID=959967287\u0026amp;CFTOKEN=45486595\u0026amp;__acm__=1505503648_4a93c99d866\u0022\u003EFiDO: A Community-based Web Browsing Agent and CDN for Challenged Network Environments\u003C\/a\u003E (Morgan Vigil-Hayes, Elizabeth Belding, Ellen Zegura)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/delivery.acm.org\/10.1145\/3130000\/3123041\/p62-zhang.pdf?ip=128.61.126.225\u0026amp;id=3123041\u0026amp;acc=OPEN\u0026amp;key=A79D83B43E50B5B8%2E5E2401E94B5C98E0%2E4D4702B0C3E38B35%2E6D218144511F3437\u0026amp;CFID=959967287\u0026amp;CFTOKEN=45486595\u0026amp;__acm__=1505503735_e58b56acad5ca423e0\u0022\u003EFingOrbits: Interaction With Wearables Using Synchronized Thumb Movements\u003C\/a\u003E (Cheng Zhang, Xiaoxuan Wang, Anandghan Waghmare, Sumeet Jain, Thomas Ploetz, Omer Inan, Thad Starner, Gregory Abowd)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca 
href=\u0022http:\/\/delivery.acm.org\/10.1145\/3130000\/3123060\/p94-lee.pdf?ip=128.61.126.225\u0026amp;id=3123060\u0026amp;acc=OPEN\u0026amp;key=A79D83B43E50B5B8%2E5E2401E94B5C98E0%2E4D4702B0C3E38B35%2E6D218144511F3437\u0026amp;CFID=959967287\u0026amp;CFTOKEN=45486595\u0026amp;__acm__=1505503858_b1dd594bc241169aa203\u0022\u003EItchy Nose: Discreet Gesture Interaction Using EOG Sensors in Smart Eye-Wear\u003C\/a\u003E (Juyoung Lee, Hui-Shyong Yeo, Murtaza Dhuliawala, Jedidiah Akano, Junichi Shimizu, Thad Starner, Aaron Quigley, Woontack Woo, Kai Kunze)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EDetecting Gaze Towards Eyes in Natural Social Interactions and Its Use in Child Assessment (Eunji Chong, Katha Chanda, Zhefan Ye, Audrey Southerland, Nataniel Ruiz, Rebecca Jones, Agata Rozga, Jim Rehg)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EEarBit: Using Wearable Sensors to Detect Eating Episodes in Unconstrained Environments (Abdelkareem Bedri, Richard Li, Malcolm Haynes, Raj Prateek Kosaraju, Ishaan Grover, Temiloluwa Prioleau, Min Yan Beh, Mayank Goel, Thad Starner, Gregory Abowd)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/delivery.acm.org\/10.1145\/3130000\/3123042\/p150-zeagler.pdf?ip=128.61.126.225\u0026amp;id=3123042\u0026amp;acc=OPEN\u0026amp;key=A79D83B43E50B5B8%2E5E2401E94B5C98E0%2E4D4702B0C3E38B35%2E6D218144511F3437\u0026amp;CFID=959967287\u0026amp;CFTOKEN=45486595\u0026amp;__acm__=1505504122_71e44b91df88e7f\u0022\u003EWhere to Wear It: Functional, Technical, and Social Considerations in On-Body Location for Wearable Technology, 20 Years of Designing for Wearability\u003C\/a\u003E (Clint Zeagler)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/www.czhang.org\/wp-content\/uploads\/2017\/05\/SoundTrak_Journal__Authorversion_.pdf\u0022\u003ESoundTrak: Continuous 3D Tracking of a Finger Using Active 
Acoustics\u003C\/a\u003E (Cheng Zhang, Qiuyue Xue, Anandghan Waghmare, Sumeet Jain, Yiming Pu, Jordan Conant, Sinan Hersek, Kent Lyons, Kenneth A. Cunefare, Omer T. Inan, Gregory Abowd)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/www.munmund.net\/pubs\/IMWUT_SM_EMA.pdf\u0022\u003EInferring Mood Instability on Social Media by Leveraging Ecological Momentary Assessments\u003C\/a\u003E (Koustuv Saha, Larry Chan, Kaya de Barbaro, Gregory Abowd, Munmun De Choudhury)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Professor Gregory Abowd, a former research scientist, and two former students were among those awarded for a paper\u0027s lasting impact."}],"uid":"33939","created_gmt":"2017-09-19 15:41:20","changed_gmt":"2017-09-20 11:36:05","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-09-19T00:00:00-04:00","iso_date":"2017-09-19T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"596142":{"id":"596142","type":"image","title":"Ubicomp test of time 2017","body":null,"created":"1505834242","gmt_created":"2017-09-19 15:17:22","changed":"1505834242","gmt_changed":"2017-09-19 15:17:22","alt":"Winners of the Ubicomp Test of Time award pose with their certificates.","file":{"fid":"227191","name":"Ubicomp-2017-test-of-time-winners.jpg","image_path":"\/sites\/default\/files\/images\/Ubicomp-2017-test-of-time-winners.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Ubicomp-2017-test-of-time-winners.jpg","mime":"image\/jpeg","size":300566,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Ubicomp-2017-test-of-time-winners.jpg?itok=eQfOY6RP"}}},"media_ids":["596142"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU 
Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"4923","name":"Ubicomp"},{"id":"171123","name":"shwetak patel"},{"id":"175586","name":"julie kientz"},{"id":"175587","name":"matt reynolds"},{"id":"175588","name":"tom robertson"},{"id":"11002","name":"Gregory Abowd"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"595697":{"#nid":"595697","#data":{"type":"news","title":"Michaelanne Dye Awarded ARCS Global Impact Award for 2nd Consecutive Year","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Ph.D. student \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/michaelannedye.wordpress.com\/\u0022\u003EMichaelanne Dye\u003C\/a\u003E\u003C\/strong\u003E was awarded an \u003Ca href=\u0022https:\/\/www.arcsfoundation.org\/\u0022\u003EAchievement Rewards for College Scientist\u003C\/a\u003E (ARCS) Scholar award for the second year in a row, recognizing her research in Cuba and its potential for future global impact.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESpecifically, she was awarded the Global Impact Award, which goes to just one ARCS Scholar each year. 
The award, which she also won last year, provides $10,000 of unrestricted funding, meaning that she is able to choose what to use the money for.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It allows me to spend longer periods of time conducting field work by helping cover the costs of having my son travel with me,\u0026rdquo; Dye said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDye\u0026rsquo;s research in \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/academics\/human-centered-computing-phd-program\u0022\u003Ehuman-centered computing\u003C\/a\u003E explores interaction and development issues from a social computing perspective. Drawing on her bachelor\u0026rsquo;s degree in Spanish and master\u0026rsquo;s in cultural anthropology, Dye uses qualitative methods to investigate socio-technical issues surrounding internet and social media use and non-use among low-resource communities during times of political, economic, and social transitions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurrently, her research lies in Cuba, where, up until recently, internet access was limited to 5 percent of the population. Through fieldwork, observation, and interviews with Cubans, Dye is developing a holistic understanding of how new internet infrastructures interact with cultural values and local constraints.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing Cuba as a case study, her work explores how future internet access initiatives might successfully map onto local information infrastructures to provide meaningful, sustainable engagements among under-connected communities in resource-constrained parts of the world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ARCS Foundation is a nationally recognized nonprofit organization started and run entirely by women who boost American leadership and aid advancement in science and technology. 
According to the foundation\u0026rsquo;s website, nine out of 10 ARCS Scholars work in their sponsored fields after they graduate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDye is co-advised by Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/fac\/Amy.Bruckman\/\u0022\u003EAmy Bruckman\u003C\/a\u003E and Assistant Professor \u003Ca href=\u0022https:\/\/nehakumar.org\/\u0022\u003ENeha Kumar\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Dye was awarded an ARCS Scholar Award for her research in Cuba and its potential for future global impact."}],"uid":"33939","created_gmt":"2017-09-07 21:45:42","changed_gmt":"2017-09-07 21:45:42","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-09-07T00:00:00-04:00","iso_date":"2017-09-07T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"586831":{"id":"586831","type":"image","title":"Michaelanne Dye","body":null,"created":"1486063573","gmt_created":"2017-02-02 19:26:13","changed":"1486063573","gmt_changed":"2017-02-02 19:26:13","alt":"","file":{"fid":"223639","name":"Dye_ARCS_082016.jpg","image_path":"\/sites\/default\/files\/images\/Dye_ARCS_082016.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Dye_ARCS_082016.jpg","mime":"image\/jpeg","size":402458,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Dye_ARCS_082016.jpg?itok=FIJ3KNrk"}}},"media_ids":["586831"],"related_links":[{"url":"http:\/\/www.ic.gatech.edu\/academics\/human-centered-computing-phd-program","title":"Ph.D. 
in Human-Centered Computing"},{"url":"https:\/\/www.arcsfoundation.org\/","title":"ARCS Foundation"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"175458","name":"Amy Bruckman; Michaelanne Dye; School of Interactive Computing; Cuba; multi-user domains"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"595650":{"#nid":"595650","#data":{"type":"news","title":"Paper Co-Authored by IC Faculty Earns Best Paper at EMNLP 2017","body":[{"value":"\u003Cp\u003EA paper co-authored by Assistant Professor \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E and Research Scientist \u003Cstrong\u003EStefan Lee\u003C\/strong\u003E of the School of Interactive Computing will be awarded at this week\u0026rsquo;s \u003Ca href=\u0022http:\/\/emnlp2017.net\/\u0022\u003EConference on Empirical Methods in Natural Language Processing\u003C\/a\u003E (EMNLP 2017), which begins Thursday in Copenhagen, Denmark.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper, titled \u003Cem\u003ENatural Language Does Not Emerge \u0026lsquo;Naturally\u0026rsquo; in Multi-Agent Dialog\u003C\/em\u003E, earned one of four Best Paper awards (out of 1,500 submissions) for its findings.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt explores the conditions under which human-interpretable languages simply emerge between goal-driven interacting AI agents that 
invent their own communication protocols. In contrast to many recent works that have shown compositional, human-interpretable languages emerging between agents in multi-agent game settings, this work shows that while most agent-invented languages are effective, achieving near-perfect rewards, they are decidedly not interpretable or compositional.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBatra and Lee, along with collaborators from Carnegie Mellon University, used a Task and Tell reference game between two agents as a testbed to come to this conclusion.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETask and Tell is a reference game between a questioner agent and an answerer agent, set in a simple world. In the game, the answerer is presented with a simple object \u0026ndash; a colored shape, for example, with a specific style, such as a circle drawn with a red-dashed line. The questioner is tasked with discovering two of the three attributes of the object. The agents communicate in an ungrounded vocabulary, using symbols with no pre-specified meanings. Exchanging such single-symbol utterances over two rounds of dialog, the questioner must predict the requested attributes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile the language exchanged between the two agents was effective, it was not interpretable.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As our goal is to explore how natural, human interpretable languages emerge in multi-agent dialogs, we consider these negative results,\u0026rdquo; Lee said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn essence, they found that natural language does not, in fact, emerge naturally. Further, they found that restricting the agents\u0026rsquo; vocabularies and limiting how they interact is essential for human-interpretable languages to emerge in such a setting. 
Using just the right set of controls, the two bots invent their own communication protocol and start using certain symbols to ask or answer about certain visual attributes of a given object.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEMNLP is one of the top natural language processing conferences. This year, six papers co-authored by College of Computing faculty and students were accepted to the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERead more on each below.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1706.08502\u0022\u003E\u003Cstrong\u003ENatural Language Does Not Emerge \u0026lsquo;Naturally\u0026rsquo; in Multi-Agent Dialog\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E (Satwik Kottur, Jos\u0026eacute; M.F. Moura, Stefan Lee, Dhruv Batra)\u003C\/strong\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EABSTRACT: A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, all learned without any human supervision!\u0026nbsp;In this paper, using a Task and Tell reference game between two agents as a testbed, we present a sequence of \u0026#39;negative\u0026#39; results culminating in a \u0026#39;positive\u0026#39; one -- showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional.\u0026nbsp;In essence, we find that natural language does not emerge \u0026#39;naturally\u0026#39;, despite the semblance of ease of natural-language-emergence that one may gather from recent literature. 
We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1705.06476\u0022\u003E\u003Cstrong\u003EParlAI: A Dialog Research Software Platform\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E (Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, Jason Weston)\u003C\/strong\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EABSTRACT: We introduce ParlAI (pronounced \u0026quot;par-lay\u0026quot;), an open-source software platform for dialog research implemented in Python, available at\u0026nbsp;\u003Ca href=\u0022http:\/\/parl.ai.\/\u0022\u003Ethis http URL\u003C\/a\u003E\u0026nbsp;Its goal is to provide a unified framework for sharing, training and testing of dialog models, integration of Amazon Mechanical Turk for data collection, human evaluation, and online\/reinforcement learning; and a repository of machine learning models for comparing with others\u0026#39; models, and improving upon existing architectures. Over 20 tasks are supported in the first release, including popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail, CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated, including neural models such as memory networks, seq2seq and attentive LSTMs.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1706.05125\u0022\u003EDeal or No Deal? End-to-End Learning for Negotiation Dialogues (Mike Lewis, Denis Yarats, Yann N. 
Dauphin, Devi Parikh, Dhruv Batra)\u003C\/a\u003E\u003C\/strong\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EABSTRACT: Much of human dialogue occurs in semi-cooperative settings, where agents with different goals attempt to agree on common decisions. Negotiations require complex communication and reasoning skills, but success is easy to measure, making this an interesting task for AI. We gather a large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other\u0026#39;s reward functions must reach an agreement (or a deal) via natural language dialogue. For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states. We also introduce dialogue rollouts, in which the model plans ahead by simulating possible complete continuations of the conversation, and find that this technique dramatically improves performance. Our code and dataset are publicly available (\u003Ca href=\u0022https:\/\/github.com\/facebookresearch\/end-to-end-negotiator\u0022\u003Ethis https URL\u003C\/a\u003E).\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1705.00601\u0022\u003E\u003Cstrong\u003EThe Promise of Premise: Harnessing Question Premises in Visual Question Answering\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E (Aroma Mahendru, Viraj Prabhu, Akrit Mohapatra, Dhruv Batra, Stefan Lee)\u003C\/strong\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EABSTRACT: In this paper, we make a simple observation that questions about images often contain premises - objects and relationships implied by the question - and that reasoning about premises can help Visual Question Answering (VQA) models respond more intelligently to irrelevant or previously unseen questions. 
When presented with a question that is irrelevant to an image, state-of-the-art VQA models will still answer purely based on learned language biases, resulting in non-sensical or even misleading answers. We note that a visual question is irrelevant to an image if at least one of its premises is false (i.e. not depicted in the image). We leverage this observation to construct a dataset for Question Relevance Prediction and Explanation (QRPE) by searching for false premises. We train novel question relevance detection models and show that models that reason about premises consistently outperform models that do not. We also find that forcing standard VQA models to reason about premises during training can lead to improvements on tasks requiring compositional reasoning.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1703.01720\u0022\u003E\u003Cstrong\u003ESound-Word2Vec: Learning Word Representations Grounded in Sounds\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E (Ashwin K. Vijayakumar, Ramakrishna Vedantam, Devi Parikh)\u003C\/strong\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EABSTRACT: To be able to interact better with humans, it is crucial for machines to understand sound - a primary modality of human perception. Previous works have used sound to learn embeddings for improved generic textual similarity assessment. In this work, we treat sound as a first-class citizen, studying downstream textual tasks which require aural grounding. To this end, we propose sound-word2vec - a new embedding scheme that learns specialized word embeddings grounded in sounds. For example, we learn that two seemingly (semantically) unrelated concepts, like leaves and paper are similar due to the similar rustling sounds they make. Our embeddings prove useful in textual tasks requiring aural reasoning like text-based sound retrieval and discovering foley sound effects (used in movies). 
Moreover, our embedding space captures interesting dependencies between words and onomatopoeia and outperforms prior work on aurally-relevant word relatedness datasets such as AMEN and ASLex.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1707.06961\u0022\u003E\u003Cstrong\u003EMimicking Word Embeddings Using Subword RNNs\u003C\/strong\u003E\u003C\/a\u003E\u003Cstrong\u003E (Yuval Pinter, Robert Guthrie, Jacob Eisenstein)\u003C\/strong\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EABSTRACT: Word embeddings improve generalization over lexical features by placing each word in a lower-dimensional space, using distributional information obtained from unlabeled data. However, the effectiveness of word embeddings for downstream NLP tasks is limited by out-of-vocabulary (OOV) words, for which embeddings do not exist. In this paper, we present MIMICK, an approach to generating OOV word embeddings compositionally, by learning a function from spellings to distributional embeddings. Unlike prior work, MIMICK does not require re-training on the original word embedding corpus; instead, learning is performed at the type level. Intrinsic and extrinsic evaluations demonstrate the power of this simple approach. On 23 languages, MIMICK improves performance over a word-based baseline for tagging part-of-speech and morphosyntactic attributes. 
It is competitive with (and complementary to) a supervised character-based model in low-resource settings.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"IC Assistant Professor Dhruv Batra and Research Scientist Stefan Lee contributed to a paper that was one of four best papers recognized at the conference."}],"uid":"33939","created_gmt":"2017-09-07 14:18:11","changed_gmt":"2017-09-07 17:02:03","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-09-07T00:00:00-04:00","iso_date":"2017-09-07T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"595647":{"id":"595647","type":"image","title":"EMNLP 2017","body":null,"created":"1504793682","gmt_created":"2017-09-07 14:14:42","changed":"1504793682","gmt_changed":"2017-09-07 14:14:42","alt":"EMNLP 2017 logo","file":{"fid":"226985","name":"EMNLP.png","image_path":"\/sites\/default\/files\/images\/EMNLP.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/EMNLP.png","mime":"image\/png","size":214173,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/EMNLP.png?itok=YEi8nSAC"}}},"media_ids":["595647"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"595357":{"#nid":"595357","#data":{"type":"news","title":"Dragon Con 2017: Your Guide to GT Computing Panels This Weekend","body":[{"value":"\u003Cp\u003EThe College of Computing will be represented at \u003Ca href=\u0022http:\/\/www.dragoncon.org\/\u0022\u003EDragon Con\u003C\/a\u003E this week in Atlanta, with faculty members participating in a handful of panels.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere will be one panel each on Friday, Saturday, and Sunday that features a member of the College. All three are part of the video game track at the Westin.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe following is a rundown on events that will feature GT Computing panelists.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAugmented and Virtual Reality, 1 p.m. Friday at the Westin Augusta E-G\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EThis panel will feature \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) Professor \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/people\/blair-macintyre\u0022\u003E\u003Cstrong\u003EBlair MacIntyre\u003C\/strong\u003E\u003C\/a\u003E and \u003Ca href=\u0022http:\/\/www.imtc.gatech.edu\/people\/maribeth-gandy-coleman-phd\u0022\u003E\u003Cstrong\u003EMaribeth Coleman\u003C\/strong\u003E\u003C\/a\u003E, who is the director of the \u003Ca href=\u0022http:\/\/www.imtc.gatech.edu\/\u0022\u003EInteractive Media Technology Center\u003C\/a\u003E (IMTC) and associate director of interactive media for the \u003Ca href=\u0022http:\/\/ipat.gatech.edu\/\u0022\u003EInstitute for People and Technology\u003C\/a\u003E (IPaT). 
The panel will look at the history and future of virtual reality in video games, and also feature \u003Cstrong\u003ERoger Altizer\u003C\/strong\u003E (University of Utah) and \u003Cstrong\u003EMike Capps\u003C\/strong\u003E (former president of Epic Games).\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDystopian Tech and Gaming, 11:30 a.m. Saturday at the Westin Augusta E-G\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EThis panel will also feature MacIntyre, Coleman, and Altizer, along with Georgia Tech Research Scientist \u003Ca href=\u0022http:\/\/www.imtc.gatech.edu\/people\/clint-zeagler\u0022\u003E\u003Cstrong\u003EClint Zeagler\u003C\/strong\u003E\u003C\/a\u003E (wearable computing, textile interfaces, animal-computer interaction) and Emory University Professor \u003Cstrong\u003ESusan Tamasi\u003C\/strong\u003E (linguistics). The panel examines the ramifications of connecting our lives more closely through technology and the ways we tell stories through it. What effect does gamifying our lives, health, experiences, and relationships have on our humanity and the future of how we relate to what surrounds us?\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EToys That Are Changing the Future of Gaming, 5:30 p.m. Sunday at the Westin Augusta E-G\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EColeman and MacIntyre will be joined by IC Professor \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/people\/thad-starner\u0022\u003E\u003Cstrong\u003EThad Starner\u003C\/strong\u003E\u003C\/a\u003E and IMTC Research Scientist \u003Cstrong\u003E\u003Ca href=\u0022http:\/\/www.imtc.gatech.edu\/people\/laura-levy\u0022\u003ELaura Levy\u003C\/a\u003E\u003C\/strong\u003E. \u003Cstrong\u003EJos\u0026eacute; P. Zagal\u003C\/strong\u003E (University of Utah) will also be on the panel. 
Panelists will discuss revolutionary technology like neural interfaces, contact lens monitors, and more innovations just over the horizon for consumers. Additionally, they will talk about how we could co-opt that tech for video games.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EDragon Con is a multigenre convention founded in 1987 that takes place annually over Labor Day weekend in Atlanta. As of 2016, the convention draws over 77,000 attendees, features hundreds of guests, and encompasses five hotels in the Peachtree Center neighborhood of downtown Atlanta.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A number of GT Computing faculty members and researchers, including Professors Thad Starner and Blair MacIntyre, will participate in panels during Dragon Con."}],"uid":"33939","created_gmt":"2017-08-31 14:48:56","changed_gmt":"2017-08-31 14:48:56","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-08-31T00:00:00-04:00","iso_date":"2017-08-31T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"595250":{"id":"595250","type":"image","title":"DragonCon 2017","body":null,"created":"1504034059","gmt_created":"2017-08-29 19:14:19","changed":"1504034059","gmt_changed":"2017-08-29 19:14:19","alt":"","file":{"fid":"226851","name":"DragonCon logo.png","image_path":"\/sites\/default\/files\/images\/DragonCon%20logo.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/DragonCon%20logo.png","mime":"image\/png","size":214586,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/DragonCon%20logo.png?itok=vlmzGpmt"}}},"media_ids":["595250"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576491","name":"CRNCH"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of 
Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"140101","name":"dragon con"},{"id":"1944","name":"Thad Starner"},{"id":"11099","name":"Blair MacIntyre"},{"id":"172775","name":"Maribeth Gandy Coleman"},{"id":"173537","name":"Laura Levy"},{"id":"9873","name":"clint zeagler"},{"id":"2356","name":"gaming"},{"id":"1597","name":"Augmented Reality"},{"id":"145251","name":"virtual reality"},{"id":"10353","name":"wearable computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"595052":{"#nid":"595052","#data":{"type":"news","title":"Walking the Wire: New IC Students Learn to Overcome Struggles at Leadership Challenge Course","body":[{"value":"\u003Cp\u003EThe expressions on the faces of the 14 new \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Ph.D. students were varied.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOn one, a bright grin spread wide across the face. On others, expressions of concentration. A few faces bore eyes glancing timidly toward the ground, as if afraid it would bite should they take a moment to look away.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere were 14 separate thought processes as the incoming students took part in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/www.crc.gatech.edu\/leadership-challenge-course\u0022\u003ELeadership Challenge Course\u003C\/a\u003E on Aug. 
15 prior to their official orientation, but one similar goal: Work together to find a way to traverse wire-thin cables, unsteady wood platforms, and other assorted barriers \u0026ndash; not unlike the many challenges they will face in pursuit of their common goal of earning a Ph.D. from the Georgia Institute of Technology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s a program current IC Professor and former Chair \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/7080\/annie-antons\u0022\u003EAnnie Ant\u0026oacute;n\u003C\/a\u003E developed to achieve a handful of goals for her students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne, she wanted to challenge them. Like the intense challenges they face over the course of their five or six years in the Ph.D. program, she wanted to force them into an uncomfortable situation that takes patience to overcome.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETwo, she wanted to build a sense of community with other participants. There is no such thing as a graduating class when it comes to a graduate degree, so the idea was to create an environment where members of the same cohort could meet each other, develop friendships, and feel a sense of belonging during their time in school.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd three, the most important of Ant\u0026oacute;n\u0026rsquo;s goals was to give students an opportunity to share their excitement and concern about the challenge they were embarking on.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;After the challenge course, we get together and discuss what they are most excited about working on their Ph.D.,\u0026rdquo; Ant\u0026oacute;n explained. \u0026ldquo;You get all kinds of answers: I\u0026rsquo;m excited to solve this problem; I\u0026rsquo;m excited to work with this advisor; I\u0026rsquo;m excited to become a professor when I finish. Then we ask what they\u0026rsquo;re scared of. 
That\u0026rsquo;s when you crack the nut open.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudents express concerns over things like not getting along with advisors or peers, fears of presenting papers at conferences or that their work won\u0026rsquo;t even be accepted in the first place, worries about passing qualifying exams, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;But at the end of the day, when we\u0026rsquo;ve gone through that circle, they realize that everyone else has the same concerns,\u0026rdquo; Ant\u0026oacute;n said. \u0026ldquo;More than that, they have strategies and resources to go to within their new community to help them through.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch2\u003EClick \u003Ca href=\u0022https:\/\/www.flickr.com\/photos\/ccgatech\/albums\/72157687741036505\u0022\u003EHERE\u003C\/a\u003E for photos of IC\u0026#39;s day at the Leadership Challenge Course.\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EChristopher Banks\u003C\/strong\u003E, who is pursuing his \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/academics\/robotics-phd-program\u0022\u003EPh.D. in robotics\u003C\/a\u003E, was one incoming student who said there was some trepidation in climbing onto the wires on the course.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am afraid of heights, so I was very wary of the parts of the challenge course that required harnesses,\u0026rdquo; he said. \u0026ldquo;Luckily, with the support of my teammates, I was able to complete the course, something I would have never done under normal circumstances. 
I was definitely pushed out of my comfort zone.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe conceded that he likely wouldn\u0026rsquo;t be tightrope walking anytime soon, but enjoyed the camaraderie that was built during the day.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere were also those who had experience in challenging climbing courses, like \u003Cstrong\u003ENathan Hatch\u003C\/strong\u003E, who is pursuing his \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/academics\/computer-science-phd-program\u0022\u003EPh.D. in computer science\u003C\/a\u003E. Hatch said he enjoys rock climbing, and so the course was not a real challenge for him. But it was an opportunity to learn how to lead and share knowledge with others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Hopefully, I was able to use my experience to encourage some of the others in my group,\u0026rdquo; he said. \u0026ldquo;In any case, these activities certainly built a rapport very quickly. I think they made it much easier to express our worries and hopes during the following group discussion. Everyone felt comfortable being honest in front of each other, which made the discussion very helpful.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn addition to the 14 incoming Ph.D. students, the \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/academics\/master-science-human-computer-interaction\u0022\u003EMS HCI\u003C\/a\u003E program also took a sizeable group \u0026ndash; around 50-60 \u0026ndash; to participate in the course the following Saturday.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Leadership Challenge Course is located off of Ferst Drive and is run by the \u003Ca href=\u0022http:\/\/www.crc.gatech.edu\/\u0022\u003ECampus Recreation Center\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Fourteen incoming Ph.D. students learn about overcoming challenges as they embark on their life in the IC Ph.D. 
program."}],"uid":"33939","created_gmt":"2017-08-25 16:44:19","changed_gmt":"2017-08-25 16:44:19","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-08-25T00:00:00-04:00","iso_date":"2017-08-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"595051":{"id":"595051","type":"image","title":"IC at Leadership Challenge Course","body":null,"created":"1503678725","gmt_created":"2017-08-25 16:32:05","changed":"1503678725","gmt_changed":"2017-08-25 16:32:05","alt":"A group of School of Interactive Computing Ph.D. students takes a break on the Leadership Challenge Course.","file":{"fid":"226773","name":"LCC Main.jpg","image_path":"\/sites\/default\/files\/images\/LCC%20Main.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/LCC%20Main.jpg","mime":"image\/jpeg","size":295265,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/LCC%20Main.jpg?itok=4_yf7U4n"}}},"media_ids":["595051"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/academics\/phd-programs","title":"School of Interactive Computing Ph.D. 
Programs"},{"url":"http:\/\/www.crc.gatech.edu\/leadership-challenge-course","title":"Leadership Challenge Course"},{"url":"http:\/\/www.crc.gatech.edu\/","title":"Campus Recreation Center"},{"url":"https:\/\/www.flickr.com\/photos\/ccgatech\/albums\/72157687741036505","title":"Photos from IC at the Leadership Challenge Course"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"}],"keywords":[{"id":"19441","name":"Leadership Challenge Course"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"},{"id":"27641","name":"annie anton"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"594286":{"#nid":"594286","#data":{"type":"news","title":"Five IC Ph.D. Students Selected for Premier Workshop at Stanford University","body":[{"value":"\u003Cp\u003EFive\u0026nbsp;\u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E\u0026nbsp;Ph.D. students were selected to participate in the\u0026nbsp;\u003Ca href=\u0022https:\/\/risingstars2017.stanford.edu\/\u0022\u003ERising Stars in EECS 2017\u003C\/a\u003E\u0026nbsp;workshop at Stanford University on Nov. 
5-7 of this year.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.tescafitzgerald.com\/\u0022\u003ETesca Fitzgerald\u003C\/a\u003E\u0026nbsp;(Computer Science),\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~vchu7\/\u0022\u003EVivian Chu\u003C\/a\u003E\u0026nbsp;(Robotics),\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/barbara-ericson\u0022\u003EBarbara Ericson\u003C\/a\u003E\u0026nbsp;(Human-Centered Computing),\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~upavalan\/\u0022\u003EUmashanthi Pavalanathan\u003C\/a\u003E\u0026nbsp;(Computer Science), and\u0026nbsp;\u003Ca href=\u0022http:\/\/maiajacobs.com\/\u0022\u003EMaia Jacobs\u003C\/a\u003E\u0026nbsp;(Human-Centered Computing) will participate in the workshop, which aims to bring together top senior Ph.D. and postdoctoral candidates preparing for careers in academia. It is organized by leading professors in computer science and electrical engineering and will entail scientific discussions and informal sessions aimed at navigating the early stages of an academic career.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlong with networking opportunities for participants, the workshop includes research presentations, panel discussions, and sessions on developing interviewing and promotional skills.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe application process consisted of a research statement, bio, curriculum vitae, and recommendation letters for each student. Around 60 applicants were selected from a competitive field of 323.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFitzgerald\u0026rsquo;s research lies at the intersections of human-robot interaction and cognitive systems. She develops algorithms and knowledge representations for robots to learn, adapt, and reuse knowledge through interaction with a human teacher. 
She is co-advised by IC Professor\u0026nbsp;\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/7068\/ashok-goels\u0022\u003EAshok Goel\u003C\/a\u003E\u0026nbsp;and former IC Associate Professor Andrea Thomaz, now at the University of Texas.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChu\u0026rsquo;s research interests include socially intelligent robots, interactive multi-sensory perception, natural language processing, and applying machine learning techniques for robotic learning in unstructured environments. She is co-advised by Thomaz and Assistant Professor\u0026nbsp;\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/11322\/sonia-chernovas\u0022\u003ESonia Chernova\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEricson is also a senior research scientist in the College of Computing. Her research is focused on computing education, specifically in trying to increase the quality and quantity of secondary computing students and the quantity and diversity of computing students. She is the Director for Computing Outreach for the\u0026nbsp;\u003Ca href=\u0022http:\/\/coweb.cc.gatech.edu\/ice-gt\/\u0022\u003EInstitute for Computing Education\u003C\/a\u003E\u0026nbsp;in the College.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPavalanathan\u0026#39;s research deals in the computational analysis of language in online social media. Her thesis work focuses on computational approaches to understanding stylistic variation in online writing. She is a member of the\u0026nbsp;\u003Ca href=\u0022https:\/\/gtnlp.wordpress.com\/\u0022\u003EComputational Linguistics Laboratory\u003C\/a\u003E\u0026nbsp;and is advised by Assistant Professor Jacob Eisenstein.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJacobs focuses on health informatics, mobile computing, and human-computer interaction. More broadly, she is interested in how mobile interfaces may be designed to address the changing needs and priorities of users. 
She is advised by Professor\u0026nbsp;\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/7122\/elizabeth-mynatts\u0022\u003EBeth Mynatt\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Maia Jacobs, Tesca Fitzgerald, Barbara Ericson, Umashanthi Pavalanathan, and Vivian Chu will attend the Rising Stars in EECS workshop in November."}],"uid":"33939","created_gmt":"2017-08-10 17:40:51","changed_gmt":"2017-08-24 19:52:36","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-08-10T00:00:00-04:00","iso_date":"2017-08-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"594284":{"id":"594284","type":"image","title":"Maia Jacobs, Vivian Chu, and Tesca Fitzgerald for Rising Stars in EECS","body":null,"created":"1502386192","gmt_created":"2017-08-10 17:29:52","changed":"1502386192","gmt_changed":"2017-08-10 17:29:52","alt":"Maia Jacobs, Vivian Chu, and Tesca Fitzgerald were selected to participate at Rising Stars in EECS","file":{"fid":"226487","name":"TescaVivianMaia.png","image_path":"\/sites\/default\/files\/images\/TescaVivianMaia.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/TescaVivianMaia.png","mime":"image\/png","size":412367,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/TescaVivianMaia.png?itok=N0UUzUv4"}}},"media_ids":["594284"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/academics\/computer-science-phd-program","title":"Computer Science Ph.D. Program"},{"url":"https:\/\/www.ic.gatech.edu\/academics\/robotics-phd-program","title":"Robotics Ph.D. Program"},{"url":"https:\/\/www.ic.gatech.edu\/academics\/human-centered-computing-phd-program","title":"Human-Centered Computing Ph.D. 
Program"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"},{"id":"576481","name":"ML@GT"}],"categories":[],"keywords":[{"id":"175142","name":"rising stars in eecs"},{"id":"118671","name":"Maia Jacobs"},{"id":"69711","name":"Tesca Fitzgerald"},{"id":"172726","name":"Vivian Chu"},{"id":"10665","name":"barbara ericson"},{"id":"175317","name":"umashanthi pavalanathan"},{"id":"667","name":"robotics"},{"id":"1051","name":"Computer Science"},{"id":"10621","name":"hcc"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"594242":{"#nid":"594242","#data":{"type":"news","title":"IC Presents Eight Papers at CVPR 2017","body":[{"value":"\u003Cp\u003EThe College of Computing had a substantial presence\u0026nbsp;at the \u003Ca href=\u0022http:\/\/cvpr2017.thecvf.com\/\u0022\u003EComputer Vision and Pattern Recognition 2017\u003C\/a\u003E (CVPR 2017) conference\u0026nbsp;in Honolulu, Hawaii.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA total of eight faculty and students co-authored nine papers that were accepted and presented at the main conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFrom the \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC), Associate Professor \u003Cstrong\u003EJames Hays\u003C\/strong\u003E contributed to two papers, one each with graduate students \u003Cstrong\u003EPatsorn Sangkloy\u003C\/strong\u003E (\u003Ca 
href=\u0022https:\/\/www.ic.gatech.edu\/academics\/computer-science-phd-program\u0022\u003EPh.D. CS\u003C\/a\u003E) and \u003Cstrong\u003ESamarth Brahmbhatt\u003C\/strong\u003E (\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/academics\/robotics-phd-program\u0022\u003EPh.D. Robotics\u003C\/a\u003E); Associate Professor \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E contributed to four, including one with advisee \u003Cstrong\u003EAbhishek Das\u003C\/strong\u003E (Ph.D. CS); and Assistant Professor \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E contributed to five.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAssociate Professor \u003Cstrong\u003ELe Song\u003C\/strong\u003E and his advisee \u003Cstrong\u003EWeiyang Liu\u003C\/strong\u003E (Ph.D. CS) from the \u003Ca href=\u0022http:\/\/cse.gatech.edu\u0022\u003ESchool of Computational Science and Engineering\u003C\/a\u003E also presented a paper at the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAt least 13 alumni also attended the conference, two of whom \u0026ndash; \u003Cstrong\u003EGabe Brostow\u003C\/strong\u003E (Ph.D. CS, \u0026rsquo;03) and \u003Cstrong\u003EAlireza Fathi\u003C\/strong\u003E (Ph.D. CS, \u0026rsquo;13) \u0026ndash; contributed to accepted papers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOverall, the \u003Ca href=\u0022http:\/\/cc.gatech.edu\u0022\u003ECollege of Computing\u003C\/a\u003E had nine main conference publications, seven invited workshop talks and demonstrations, two workshops organized, and four workshop publications. 
IC Professor \u003Cstrong\u003EJim Rehg\u003C\/strong\u003E was also a program chair for the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECVPR 2017 was held from July 21-26 at the Hawaii Convention Center and is the premier annual computer vision event, comprising the main conference and several co-located workshops and short courses.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe nine papers that current Georgia Tech faculty and students contributed to are listed with links and abstracts below:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Sangkloy_Scribbler_Controlling_Deep_CVPR_2017_paper.pdf\u0022\u003EScribbler: Controlling Deep Image Synthesis with Sketch and Color\u003C\/a\u003E\u003C\/em\u003E (\u003Cstrong\u003EPatsorn Sangkloy\u003C\/strong\u003E, Jingwan Lu, Chen Fang, Fisher Yu, \u003Cstrong\u003EJames Hays\u003C\/strong\u003E)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EABSTRACT:\u003C\/strong\u003E Several recent works have used deep convolutional networks to generate realistic imagery. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch-based image synthesis system which allows users to scribble over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of the user. 
The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach generates more realistic, diverse, and controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Brahmbhatt_DeepNav_Learning_to_CVPR_2017_paper.pdf\u0022\u003EDeepNav: Learning to Navigate Large Cities\u003C\/a\u003E\u003C\/em\u003E (\u003Cstrong\u003ESamarth Brahmbhatt\u003C\/strong\u003E, \u003Cstrong\u003EJames Hays\u003C\/strong\u003E)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EABSTRACT:\u003C\/strong\u003E We present DeepNav, a Convolutional Neural Network (CNN) based algorithm for navigating large cities using locally visible street-view images. The DeepNav agent learns to reach its destination quickly by making the correct navigation decisions at intersections. We collect a large-scale dataset of street-view images organized in a graph where nodes are connected by roads. This dataset contains 10 city graphs and more than 1 million street-view images. We propose 3 supervised learning approaches for the navigation task and show how A* search in the city graph can be used to generate supervision for the learning. Our annotation process is fully automated using publicly available mapping services and requires no human input. We evaluate the proposed DeepNav models on 4 held-out cities for navigating to 5 different types of destinations. 
Our algorithms outperform previous work that uses hand-crafted features and Support Vector Regression (SVR).\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Sun_Bidirectional_Beam_Search_CVPR_2017_paper.pdf\u0022\u003EBidirectional Beam Search: Forward-Backward Inference in Neural Sequence Models for Fill-in-the-Blank Image Captioning\u003C\/a\u003E\u003C\/em\u003E (Qing Sun, Stefan Lee, \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EABSTRACT:\u003C\/strong\u003E We develop the first approximate inference algorithm for 1-Best (and M-Best) decoding in bidirectional neural sequence models by extending Beam Search (BS) to reason about both forward and backward time dependencies. Beam Search (BS) is a widely used approximate inference algorithm for decoding sequences from unidirectional neural sequence models. Interestingly, approximate inference in bidirectional models remains an open problem, despite their significant advantage in modeling information from both the past and future. To enable the use of bidirectional models, we present Bidirectional Beam Search (BiBS), an efficient algorithm for approximate bidirectional inference. To evaluate our method and as an interesting problem in its own right, we introduce a novel Fill-in-the-Blank Image Captioning task which requires reasoning about both past and future sentence structure to reconstruct sensible image descriptions. 
We use this task as well as the Visual Madlibs dataset to demonstrate the effectiveness of our approach, consistently outperforming all baseline methods.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Goyal_Making_the_v_CVPR_2017_paper.pdf\u0022\u003EMaking the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering\u003C\/a\u003E\u003C\/em\u003E (Yash Goyal, Tejas Khot, Douglas Summers-Stay, \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E, \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EABSTRACT:\u003C\/strong\u003E Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset [3] by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at http:\/\/visualqa.org\/ as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. 
All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counterexample based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Chattopadhyay_Counting_Everyday_Objects_CVPR_2017_paper.pdf\u0022\u003ECounting Everyday Objects in Everyday Scenes\u003C\/a\u003E\u003C\/em\u003E (Prithvijit Chattopadhyay, Ramakrishna Vedantam, Ramprasaath R. Selvaraju, \u003Cstrong\u003EDhruv Batra, Devi Parikh\u003C\/strong\u003E)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EABSTRACT: \u003C\/strong\u003EWe are interested in counting the number of instances of object classes in natural, everyday images. Previous counting approaches tackle the problem in restricted domains such as counting pedestrians in surveillance videos. Counts can also be estimated from outputs of other vision tasks like object detection. In this work, we build dedicated models for counting designed to tackle the large variance in counts, appearances, and scales of objects found in natural scenes. Our approach is inspired by the phenomenon of subitizing \u0026ndash; the ability of humans to make quick assessments of counts given a perceptual signal, for small count values. 
Given a natural scene, we employ a divide and conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Our approach offers consistent improvements over numerous baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets. Subsequently, we study how counting can be used to improve object detection. We then show a proof of concept application of our counting methods to the task of Visual Question Answering, by studying the \u0026lsquo;how many?\u0026rsquo; questions in the VQA and COCO-QA datasets.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Lu_Knowing_When_to_CVPR_2017_paper.pdf\u0022\u003EKnowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning\u003C\/a\u003E \u003C\/em\u003E(Jiasen Lu, Caiming Xiong, \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E, Richard Socher)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EABSTRACT:\u003C\/strong\u003E Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as \u0026ldquo;the\u0026rdquo; and \u0026ldquo;of\u0026rdquo;. Other words that may seem visual can often be predicted reliably just from the language model e.g., \u0026ldquo;sign\u0026rdquo; after \u0026ldquo;behind a red stop\u0026rdquo; or \u0026ldquo;phone\u0026rdquo; following \u0026ldquo;talking on a cell\u0026rdquo;. In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. 
The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Das_Visual_Dialog_CVPR_2017_paper.pdf\u0022\u003EVisual Dialog\u003C\/a\u003E\u003C\/em\u003E (\u003Cstrong\u003EAbhishek Das\u003C\/strong\u003E, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u0026eacute; M.F. Moura, \u003Cstrong\u003EDevi Parikh, Dhruv Batra\u003C\/strong\u003E)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EABSTRACT:\u003C\/strong\u003E We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, and answer the question accurately. Visual Dialog is disentangled enough from a specific downstream task so as to serve as a general test of machine intelligence, while being grounded in vision enough to allow objective evaluation of individual responses and benchmark progress. We develop a novel two-person chat data collection protocol to curate a large-scale Visual Dialog dataset (VisDial). VisDial contains 1 dialog (10 question-answer pairs) on \u0026sim;140k images from the COCO dataset, with a total of \u0026sim;1.4M dialog question-answer pairs. We introduce a family of neural encoder-decoder models for Visual Dialog with 3 encoders (Late Fusion, Hierarchical Recurrent Encoder and Memory Network) and 2 decoders (generative and discriminative), which outperform a number of sophisticated baselines. 
We propose a retrieval-based evaluation protocol for Visual Dialog where the AI agent is asked to sort a set of candidate answers and evaluated on metrics such as mean-reciprocal-rank of human response. We quantify gap between machine and human performance on the Visual Dialog task via human studies. Our dataset, code, and trained models will be released publicly at visualdialog.org. Putting it all together, we demonstrate the first \u0026lsquo;visual chatbot\u0026rsquo;!\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Vedantam_Context-Aware_Captions_From_CVPR_2017_paper.pdf\u0022\u003EContext-aware Captions from Context-agnostic Supervision\u003C\/a\u003E\u003C\/em\u003E (Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E, Gal Chechik)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EABSTRACT:\u003C\/strong\u003E We introduce an inference technique to produce discriminative context-aware image captions (captions that describe differences between images or visual concepts) using only generic context-agnostic training data (captions that describe a concept or an image in isolation). For example, given images and captions of \u0026ldquo;Siamese cat\u0026rdquo; and \u0026ldquo;tiger cat\u0026rdquo;, we generate language that describes the \u0026ldquo;Siamese cat\u0026rdquo; in a way that distinguishes it from \u0026ldquo;tiger cat\u0026rdquo;. Our key novelty is that we show how to do joint inference over a language model that is context-agnostic and a listener which distinguishes closely-related concepts. We first apply our technique to a justification task, namely to describe why an image contains a particular fine-grained category as opposed to another closely-related category of the CUB- 200-2011 dataset. 
We then study discriminative image captioning to generate language that uniquely refers to one of two semantically-similar images in the COCO dataset. Evaluations with discriminative ground truth for justification and human studies for discriminative image captioning reveal that our approach outperforms baseline generative and speaker-listener approaches for discrimination.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003E\u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_cvpr_2017\/papers\/Liu_SphereFace_Deep_Hypersphere_CVPR_2017_paper.pdf\u0022\u003ESphereFace: Deep Hypersphere Embedding for Face Recognition\u003C\/a\u003E\u003C\/em\u003E (\u003Cstrong\u003EWeiyang Liu\u003C\/strong\u003E, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, \u003Cstrong\u003ELe Song\u003C\/strong\u003E)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EABSTRACT:\u003C\/strong\u003E This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. 
Extensive analysis and experiments on Labeled Face in the Wild (LFW), YouTube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Assistant Professor Devi Parikh contributed to five of the papers, while Associate Professors Dhruv Batra and James Hays contributed to four and two, respectively."}],"uid":"33939","created_gmt":"2017-08-09 15:16:14","changed_gmt":"2017-08-09 15:17:15","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-08-09T00:00:00-04:00","iso_date":"2017-08-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"594240":{"id":"594240","type":"image","title":"CVPR Image","body":null,"created":"1502290962","gmt_created":"2017-08-09 15:02:42","changed":"1502290962","gmt_changed":"2017-08-09 15:02:42","alt":"The Computer Vision and Pattern Recognition conference was held in Honolulu on July 21-26.","file":{"fid":"226474","name":"CVPRLogo3.jpg","image_path":"\/sites\/default\/files\/images\/CVPRLogo3.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/CVPRLogo3.jpg","mime":"image\/jpeg","size":188997,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/CVPRLogo3.jpg?itok=G4jdbqgu"}}},"media_ids":["594240"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"173615","name":"dhruv batra"},{"id":"173616","name":"devi parikh"},{"id":"169167","name":"james hays"},{"id":"11506","name":"computer vision"},{"id":"8550","name":"visual pattern 
recognition"},{"id":"175127","name":"cvpr 2017"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"593471":{"#nid":"593471","#data":{"type":"news","title":"What Machine Learning Will Change (Hint: Everything)","body":[{"value":"\u003Cp\u003EIs that an image of a cat? It\u0026rsquo;s a simple question for human beings, but was a tough one for machines\u0026mdash;until recently. Today, if you type \u0026ldquo;Siamese cats\u0026rdquo; into Google\u0026rsquo;s image search engine, voil\u0026agrave;!, you\u0026rsquo;ll be presented with scores of Siamese cats, categorized by breed (\u0026ldquo;lilac point,\u0026rdquo; \u0026ldquo;tortie point,\u0026rdquo; \u0026ldquo;chocolate point\u0026rdquo;), as well as other qualities, such as \u0026ldquo;kitten\u0026rdquo; or \u0026ldquo;furry.\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhat\u0026rsquo;s key here is that while some of the images carry identifying, machine-readable text or meta information, many do not. Yet the search still found them. How? The answer is that the pictures\u0026mdash;more accurately, a pattern in the pictures\u0026mdash;was recognized as \u0026ldquo;Siamese cat\u0026rdquo; by a machine, without requiring a human to classify each instance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis is machine learning. At its core, machine learning upends the programming model, forgoing the hard-coded \u0026ldquo;if this, then that\u0026rdquo; instructions and explicit rules. 
Instead, it uses an artificial neural network (ANN)\u0026mdash;a statistical model directly inspired by biological neural networks\u0026mdash;that is \u0026ldquo;trained\u0026rdquo; on some data set (the bigger, the better) to accomplish some new task that uses similar but yet unknown data.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe data comes first in machine learning. The system finds its own way, adjusting and refining its model, iteratively.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut back to Siamese cats. Computer vision researchers worked on image recognition for decades, but Google effectively perfected it in months once the company developed a machine-learning algorithm. Today, machine-learning facial recognition systems for mug shots and passport photos outperform human operators.\u0026nbsp;\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003ENot New But Definitely Now\u003C\/strong\u003E\u003Cbr \/\u003E\r\nIn fact, machine learning, neural networks and pattern recognition aren\u0026rsquo;t new. In 1950, a computer program was written that improved its checkers performance the more it played (by studying winning strategies and incorporating these into its own program). In 1957, the first neural network for computers (the Perceptron) was designed. 
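The "train on data, adjust, refine, repeat" loop described above can be made concrete with the classic perceptron update rule from that 1957 design. The snippet below is purely an illustrative sketch (the function names and toy data are invented for this example, not drawn from any system mentioned in the article): when the current model misclassifies an example, the weights are nudged toward the correct answer, and the process repeats over the data.

```python
# Illustrative sketch of the 1957-style perceptron: the model "finds its own
# way," iteratively adjusting its weights whenever a prediction is wrong.
# All names and data here are invented for this toy demonstration.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Learn weights w and bias b for a binary (+1 / -1) classifier."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict with the current model ...
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            # ... and nudge the weights only when the prediction is wrong.
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# A linearly separable toy set: points with x0 > x1 are labeled +1.
X = [(2.0, 1.0), (3.0, 0.5), (1.0, 2.0), (0.5, 3.0)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
```

Note that no "if this, then that" rule is ever written down; the decision boundary emerges entirely from the repeated corrections.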
In 1967, the \u0026ldquo;nearest neighbor\u0026rdquo; algorithm, which allowed a computer to do very basic pattern recognition, was created.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIndeed, some would say that Alan Turing\u0026rsquo;s famous machine that ultimately broke the German \u0026ldquo;Enigma\u0026rdquo; code during World War II was an instance of machine learning\u0026mdash;in that it observed incoming data, analyzed it and extracted information.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESo why has machine learning exploded on the scene now, pervading fields as diverse as marketing, health care, manufacturing, information security and transportation?\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers at Georgia Tech say the explanation is the confluence of three things:\u0026nbsp;\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n1. Faster, more powerful computer hardware (parallel processors, GPUs, etc.)\u003Cbr \/\u003E\r\n2. Software algorithms to take advantage of these computational architectures\u003Cbr \/\u003E\r\n3. Loads and loads of data for training (digitized documents, internet social media posts, YouTube videos, GPS coordinates, electronic health records, and, the fastest-growing category, all those networked sensors and processors behind the much-heralded Internet of Things).\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nThis digitalization began in earnest in the 1990s. According to IDC Research, digital data will grow at a compound annual growth rate of 42 percent through 2020. In the 2010-20 decade, the world\u0026rsquo;s data will grow by 50 times, from about one Zettabyte (1ZB) in 2010 to about 50ZB in 2020.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese oceans of data and data sources not only enable machine learning, but also, in a sense, they create an urgent need for it, offering a solution to the human programmer bottleneck. 
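The 1967 "nearest neighbor" idea mentioned above fits in a few lines, which helps explain why it was among the first pattern-recognition algorithms: a new input is simply given the label of the closest training example. Again, this is a hedged toy sketch with invented names and data, not code from the article:

```python
# Illustrative sketch of 1-nearest-neighbor classification: label a query
# point by copying the label of its closest training example.
# The toy data and function names are invented for this example.
import math

def nearest_neighbor(train, query):
    """train: list of (point, label) pairs; returns the closest point's label."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    _, label = min(train, key=lambda item: dist(item[0], query))
    return label

# Two clusters of 2-D feature vectors standing in for "cat" vs. "dog" images.
train = [((0.0, 0.0), "cat"), ((0.2, 0.1), "cat"),
         ((5.0, 5.0), "dog"), ((5.1, 4.8), "dog")]
```

A query near the first cluster comes back "cat", one near the second comes back "dog" — basic pattern recognition with no explicit rules, though unlike the later neural approaches it must keep all training data around at prediction time.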
\u0026ldquo;The usual way of programming computers these days is, you write a program,\u0026rdquo; says Irfan Essa, director of Tech\u0026rsquo;s new Center for Machine Learning. \u0026ldquo;Now we\u0026rsquo;re saying, that cannot scale.\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are simply too many data sources, arriving too fast.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ability of these systems to quickly and reliably make inferences from data has galvanized the attention of the world\u0026rsquo;s biggest technology players and businesses, who\u0026rsquo;ve seen the commercial benefits and opportunities.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It created a disruption,\u0026rdquo; says Essa, who also serves as associate dean of the College of Computing, a professor in the School of Interactive Computing and an adjunct professor in the School of Electrical and Computer Engineering.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs Jeff Bezos, CEO of Amazon, put it in his widely circulated April 2017 letter to company shareholders, Amazon\u0026rsquo;s use of machine learning in its autonomous delivery drones and speech-controlled assistant Alexa is only part of the story.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Machine learning drives our algorithms for demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations and much more,\u0026rdquo; Bezos wrote. 
\u0026ldquo;Though less visible, much of the impact of machine learning will be of this type\u0026mdash;quietly but meaningfully improving core operations.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETwo other drivers for the rapid growth of machine learning have been the widely available, open source toolkits (such as Google\u0026rsquo;s TensorFlow) that can rapidly prototype a machine learning system, and cloud-based storage and computation services to host it.\u0026nbsp;\u003Cbr \/\u003E\r\nThis April, for instance, Amazon Web Services announced that Amazon Lex, the artificial intelligence (AI) service used to create applications that can interact with users via voice and text\u0026mdash;and the technology behind Amazon Alexa\u0026mdash;would be available to Amazon Web Services customers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;You can build a startup very, very fast,\u0026rdquo; says Sebastian Pokutta, Georgia Tech\u0026rsquo;s David M. McKenney Family Associate Professor in the H. Milton Stewart School of Industrial and Systems Engineering, and associate director of the Center for Machine Learning (ML@GT). \u0026ldquo;Before, machine learning was very academic and somewhat esoteric. Now we have a toolbox that I can give a student, and within a week they can create something that\u0026rsquo;s usable.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ENatural Language: Going Deeper\u003C\/strong\u003E\u003Cbr \/\u003E\r\nLike image recognition, speech recognition has seen great strides thanks to machine learning. Consider Amazon\u0026rsquo;s Alexa or Google Home, two darlings in the speech-controlled appliance space.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech researchers aren\u0026rsquo;t competing with these new commercial efforts. 
\u0026ldquo;We\u0026rsquo;re working on things that we hope will be important components of systems in the much longer term,\u0026rdquo; says Jacob Eisenstein, assistant professor in the School of Interactive Computing, where he leads the Computational Linguistics Laboratory. \u0026ldquo;As a field right now, we\u0026rsquo;re the intersection of machine learning and linguistics.\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat said, Eisenstein points out that Google quietly incorporates increasingly sophisticated natural language processing into its search system every few months.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;What I think they\u0026rsquo;re doing is drawing ideas from the research literature, from the stuff that\u0026rsquo;s produced at universities like Georgia Tech,\u0026rdquo; he says.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHighlighting the market excitement over speech control, Eisenstein notes that five former Tech students are working at Amazon on Alexa development, as are a number of his undergrads and masters students.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESo, what sorts of problems are Eisenstein and his colleagues working to solve?\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Imagine you are interested in some new area of research, and could have a system that summarizes the 15 most important papers in that field into a four-page document,\u0026rdquo; Eisenstein says.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut creating such a system goes far beyond word or phrase recognition. \u0026ldquo;We know that to understand language, you have to have some understanding of linguistic structure\u0026mdash;how sentences are put together,\u0026rdquo; he explains. 
Language understanding is hard, from a machine standpoint, because it has very deep, nested structures.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETackling subjects like language or other complex, non-linear relationships has given rise to a subset of machine learning known as deep learning. A deep neural network is an artificial neural network with multiple hidden layers between the input and output layers.\u0026nbsp;\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003EBlack Box Problems\u003C\/strong\u003E\u003Cbr \/\u003E\r\nHowever, those hidden layers give rise to a black box problem. That is, if the artificial neural network contains hidden layers, its processes aren\u0026rsquo;t transparent. To take a real-world example: how do we audit the autonomous car\u0026rsquo;s decision to swerve right, not left?\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat\u0026rsquo;s an area of study for Dhruv Batra, an assistant professor in the School of Interactive Computing. His research aims to develop theory, algorithms and implementations for transparent deep neural networks that can explain their predictions, and to study how such transparency and explanations affect user trust and perceived trustworthiness.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to Batra: \u0026ldquo;We have to be a little careful though, because if we tack on the explanatory piece\u0026mdash;\u0026lsquo;That\u0026rsquo;s why I\u0026rsquo;m calling this a cat\u0026rsquo;\u0026mdash;the system may learn to produce an explanation, a post hoc justification that may not have anything to do with its choice.\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther problems range from the practical, \u0026ldquo;How can we remove human bias when setting up the algorithm?\u0026rdquo; to the unexpectedly philosophical, \u0026ldquo;How can we be sure these systems are, in fact, learning the right 
things?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETech researchers are hard at work on these fascinating questions.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEssa admits there\u0026rsquo;s a lot of hype around machine learning right now. But he notes that people are very good at overestimating the impact of technology in the short term, yet underestimating it in the long run.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf optical character recognition and, increasingly, speech recognition are taken for granted because they \u0026ldquo;just work,\u0026rdquo; there are other technologies that are far from perfect.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;And we\u0026rsquo;d like them to be perfect, which is why research and development needs to continue,\u0026rdquo; Essa says.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMachine learning may even play a role in improving how Georgia Tech students are taught in the future.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;At Tech we have a lot of educational data,\u0026rdquo; he says. 
\u0026ldquo;How do we now use that data to learn more about and support our student body\u0026mdash;learn more about their learning, and provide the right kinds of guidance and support?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u003Cstrong\u003EINSIDE MACHINE LEARNING @ GEORGIA TECH\u003C\/strong\u003E\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;At Georgia Tech, we recognize machine learning to be a game-changer not just in computer science, but in a broad range of scientific, engineering, and business disciplines and practices,\u0026rdquo; writes Irfan Essa, the inaugural director of the Center for Machine Learning at Georgia Tech (ML@GT), in his welcome note on the Center\u0026rsquo;s web page.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELaunched in June 2016, ML@GT is an interdisciplinary research center that combines assets from the College of Computing, the H. Milton Stewart School of Industrial and Systems Engineering and the School of Electrical and Computer Engineering. Its faculty, students and industry partners are working on research and real-world applications of machine learning in a variety of areas, including machine vision, information security, healthcare, logistics and supply chain, finance and education, among others.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe center truly is a collaborative effort across campus, with 125 to 150 Tech faculty involved, and more than 400 students, says Sebastian Pokutta, David M. McKenney Family Associate Professor in the School of Industrial and Systems Engineering, and an associate director of ML@GT. \u0026ldquo;Tech has always had a lot of researchers working on machine learning, but they\u0026rsquo;d been spread out, working in different departments independently,\u0026rdquo; Pokutta says. 
\u0026ldquo;There wasn\u0026rsquo;t a real community on campus.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEchoing Essa\u0026rsquo;s message, Pokutta says the goal of the Center is straightforward and daring: \u0026ldquo;We want to become the leader in bringing together computing, learning, data and engineering.\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETrue, there are other machine learning centers in higher ed\u0026mdash;MIT, Columbia, Carnegie Mellon\u0026mdash;but most focus on combining computing and statistics.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One of the unique things about Georgia Tech, since we\u0026rsquo;re a big engineering school, is our machine learning effort is really closely embedded with our engineering units,\u0026rdquo; Essa says. \u0026ldquo;We\u0026rsquo;re close to the sensor, close to the processor, close to the actuator.\u0026rdquo;\u0026nbsp;\u003Cbr \/\u003E\r\nThis matters because of what is known as \u0026ldquo;edge computing\u0026rdquo;: the concept of moving applications, data and services to the logical extremes of a network, so that knowledge generation can occur at the point of action.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe objective is to use Tech\u0026rsquo;s engineering prowess\u0026mdash;and data-driven techniques\u0026mdash;to help design the next generation of technologies and methodologies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u003Cstrong\u003EMACHINE LEARNING\u0026#39;S IMPACT ON PRECISION MEDICINE\u003C\/strong\u003E\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EHealthcare offers a rich source of data to machine learning researchers. 
There are scanned and electronic health records, claims data, procedure results, lab tests, genetics studies, and even telemetry from devices like heart monitors and wearables like Fitbits and smart watches.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA number of Georgia Tech\u0026rsquo;s researchers are mining this data to better understand health outcomes at scale and to ultimately figure out the right treatment for each individual patient. This is known as individualized or precision medicine.\u0026nbsp;\u003Cbr \/\u003E\r\nJacob Eisenstein, an assistant professor in the School of Interactive Computing, and Jimeng Sun, an associate professor in the School of Computational Science and Engineering, are mining the text in electronic health records to better understand health outcomes at scale.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EToday, patients and doctors try rounds of treatments for ailments, looking for the best fit. \u0026ldquo;There\u0026rsquo;s a lot of trial and error,\u0026rdquo; Eisenstein explains. The project hopes to reduce that, by systematizing treatment based on a deeper understanding of patients, treatments and outcomes.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELast year, Sun was part of a group of researchers who developed a new, accurate-but-interpretable approach for machine learning in medicine.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETheir Reverse Time Attention model (RETAIN) achieves high accuracy while remaining clinically interpretable. It is based on a two-level neural attention model that detects influential past visits and significant clinical variables within those visits (e.g., key diagnoses). 
RETAIN was tested on a large health system dataset with 14 million visits completed by 263,000 patients over an eight-year period and demonstrated predictive accuracy and computational scalability comparable to state-of-the-art methods such as recurrent neural networks, and ease of interpretability comparable to traditional models (logistic regression).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn other work, Tech professors and students are analyzing data from Geisinger, a hospital network in Pennsylvania, to help predict the risk for sepsis and septic shock in patients before they are admitted to the hospital. Other researchers within the School of Industrial and Systems Engineering\u0026rsquo;s Health Analytics group are collecting health care utilization data involving millions of individuals for events such as hospitalizations that can be used in estimating the cost savings of preventive care.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u003Cstrong\u003EPHOTO FINISH:\u0026nbsp;\u003Cbr \/\u003E\r\nWhy Facebook and Amazon Want to \u0026ldquo;See\u0026rdquo; Your Images Better\u003C\/strong\u003E\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EFacebook\u0026rsquo;s interest in having machines better assess the billions of images uploaded to its platform\u0026mdash;in order to describe, rank or even delete objectionable images\u0026mdash;is obvious.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech faculty Dhruv Batra and Devi Parikh\u0026mdash;married partners both in life and at work\u0026mdash;are assistant professors in the College of Computing\u0026rsquo;s School of Interactive Computing who are currently serving as visiting researchers at Facebook Artificial Intelligence Research (FAIR).\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAt Facebook, the duo is working on ways to improve the interaction between human beings, a machine platform and images posted on the social network platform. 
In April 2016, Facebook began automatically describing the content of photos to blind and visually impaired users. Called \u0026ldquo;automatic alternative text,\u0026rdquo; the feature was created by Facebook\u0026rsquo;s accessibility team. The technology also works for Facebook versions in countries with limited internet speeds or that don\u0026rsquo;t allow visual content.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd last December, Batra and Parikh also received Amazon Academic Research Awards for a pair of projects they are leading in computer vision and machine learning. They received $100,000 each from Amazon\u0026mdash;$80,000 in gift money and $20,000 in Amazon Web Services credit\u0026mdash;for projects that aim to produce the next generation of artificial intelligence agents.\u003Cbr \/\u003E\r\nBatra and Parikh are using giant image data sets with human annotations that have been built up at Mechanical Turk, Amazon\u0026rsquo;s crowdsourcing internet marketplace.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne project, Visual Dialog, led by Batra, aims at creating an AI agent able to hold a meaningful dialogue with humans in natural, conversational language about visual content. Facebook can already generate automatic alternative text for an image, explains Batra. 
So a user can be told, \u0026ldquo;This picture may contain a mug, a person, a cat.\u0026rdquo; The goal, he said, is to go much further\u0026mdash;to offer not only more information about the image but also engage the user in a dialog.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETraining the machine learning algorithm for the task requires a huge data set\u0026mdash;as many as 200,000 conversations on the same set of images, each conversation including 10 rounds of questions and answers (or roughly 2 million question-and-answer pairs).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother project, titled \u0026ldquo;Counting Everyday Objects in Everyday Scenes,\u0026rdquo; is led by Parikh, and aims to enable an AI to count the number of objects belonging to the same category. One particularly interesting approach will try to estimate the counts of objects in one try by just glancing at the image as a whole. This is inspired by \u0026ldquo;subitizing\u0026rdquo;\u0026mdash;an ability humans inherently possess to see a small number of objects and know how many there are without having to explicitly count.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Today, computer algorithms poring over vast datasets can derive predictions or models from that data\u2014all on their own.  The \u201cprogramming\u201d paradigm has been upended.  
Welcome to the Machine Learning Revolution."}],"uid":"33939","created_gmt":"2017-07-12 15:22:43","changed_gmt":"2017-07-19 13:53:29","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-07-12T00:00:00-04:00","iso_date":"2017-07-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"593470":{"id":"593470","type":"image","title":"Machine Learning Robot Portrait","body":null,"created":"1499872729","gmt_created":"2017-07-12 15:18:49","changed":"1499872729","gmt_changed":"2017-07-12 15:18:49","alt":"A robot reads a book while sitting on a stack of other books.","file":{"fid":"226229","name":"ML Robot.jpg","image_path":"\/sites\/default\/files\/images\/ML%20Robot.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ML%20Robot.jpg","mime":"image\/jpeg","size":95380,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ML%20Robot.jpg?itok=D7WZ_nya"}}},"media_ids":["593470"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"},{"id":"576481","name":"ML@GT"}],"categories":[{"id":"135","name":"Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"174914","name":"Machine Learning; College of Computing; Robotics"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ERoger Slavens\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech Alumni 
Magazine\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"593322":{"#nid":"593322","#data":{"type":"news","title":"Georgia Tech Hosts International Conference on Computational Creativity","body":[{"value":"\u003Cp\u003EThe Georgia Institute of Technology hosted the 2017 \u003Ca href=\u0022http:\/\/computationalcreativity.net\/iccc2017\/\u0022\u003EInternational Conference on Computational Creativity\u003C\/a\u003E on June 19-23.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe five-day event was attended by computing faculty and students from around the world. School of Interactive Computing Professor\u0026nbsp;\u003Cstrong\u003EAshok Goel \u003C\/strong\u003Eserved as the general chair for the event.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe local chair was graduate research assistant\u0026nbsp;\u003Cstrong\u003EMikhail Jacob\u003C\/strong\u003E, and the local committee was comprised of\u0026nbsp;\u003Cstrong\u003EJeff Collins\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003EHeather\u0026nbsp;Liger\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003EGerard Roma\u003C\/strong\u003E, and\u0026nbsp;\u003Cstrong\u003EAnna Xamb\u0026oacute;\u003C\/strong\u003E. IC Ph.D. 
student\u0026nbsp;\u003Cstrong\u003EMatthew Guzdial\u003C\/strong\u003E\u0026nbsp;served as the media chair, and the media committee was led by\u0026nbsp;\u003Cstrong\u003EAnna Weisling\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EBobbie Eicher\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003ETesca Fitzgerald\u003C\/strong\u003E, and\u0026nbsp;\u003Cstrong\u003EDuri Long\u003C\/strong\u003E\u0026nbsp;were student volunteers;\u0026nbsp;\u003Cstrong\u003ENicholas Davis\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003EKatherine Fu\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003EJulie Linsey\u003C\/strong\u003E, and\u0026nbsp;\u003Cstrong\u003EBrian Magerko\u0026nbsp;\u003C\/strong\u003Eserved on the program committee; and\u0026nbsp;\u003Cstrong\u003EGil Weinberg\u0026nbsp;\u003C\/strong\u003Efrom the School of Music Technology gave one of the keynote talks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFitzgerald presented a paper,\u0026nbsp;\u003Ca href=\u0022http:\/\/gatech.us3.list-manage.com\/track\/click?u=10091ee4bef3165440405cf07\u0026amp;id=90c7834c06\u0026amp;e=09dc537555\u0022 target=\u0022_blank\u0022\u003E\u003Cem\u003EHuman-Robot Co-Creativity: Task Transfer on a Spectrum of Similarity\u003C\/em\u003E\u003C\/a\u003E, which she co-authored with Goel and\u0026nbsp;\u003Cstrong\u003EAndrea Thomaz\u003C\/strong\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The five-day event was attended by computing faculty and students from around the world."}],"uid":"33939","created_gmt":"2017-07-07 18:59:36","changed_gmt":"2017-07-07 18:59:36","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-07-07T00:00:00-04:00","iso_date":"2017-07-07T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"593321":{"id":"593321","type":"image","title":"Gil Weinberg 
ICCC","body":null,"created":"1499453790","gmt_created":"2017-07-07 18:56:30","changed":"1499453790","gmt_changed":"2017-07-07 18:56:30","alt":"Gil Weinberg gives the closing keynote during ICCC 2017 at Georgia Tech.","file":{"fid":"226162","name":"Gil Weinberg ICCC.JPG","image_path":"\/sites\/default\/files\/images\/Gil%20Weinberg%20ICCC.JPG","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Gil%20Weinberg%20ICCC.JPG","mime":"image\/jpeg","size":411100,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Gil%20Weinberg%20ICCC.JPG?itok=ZMrm4EMf"}}},"media_ids":["593321"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"173295","name":"ICCC"},{"id":"246","name":"Georgia Institute of Technology"},{"id":"654","name":"College of Computing"},{"id":"112431","name":"ashok goel"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"592858":{"#nid":"592858","#data":{"type":"news","title":"Selfies: We Love How We Look and We\u2019re Here to Show You","body":[{"value":"\u003Cp\u003EWhen it comes to selfies, appearance is (almost) everything. 
\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo better understand the photographic phenomenon and how people form their identities online, Georgia Institute of Technology researchers combed through 2.5 million selfie posts on Instagram to determine what kinds of identity statements people make by taking and sharing selfies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENearly 52 percent of all selfies fell into the appearance category: pictures of people showing off their make-up, clothes, lips, etc. Pics about looks were two times more popular than the other 14 categories combined. After appearances, social selfies with friends, loved ones and pets were the most common (14 percent). Then came ethnicity pics (13 percent), travel (7 percent), and health and fitness (5 percent).\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u25ba\u0026nbsp;\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/SelfieResearch\/Dashboard1?:embed=y\u0026amp;:display_count=no\u0026amp;publish=yes\u0022 target=\u0022_blank\u0022\u003EExplore the Top Selfie Trends\u003C\/a\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EThe researchers noted that the prevalence of ethnicity selfies (selfies about a person\u0026rsquo;s ethnicity, nationality or country of origin) is an indication that people are proud of their backgrounds. They also found that most selfies are solo pictures, rather than taken with a group.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe data was gathered in the summer of 2015. The Georgia Tech team believes the study is the first large-scale empirical research on selfies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOverall, an overwhelming 57 percent of selfies on Instagram were posted by the 18-35-year-old crowd, something the researchers say isn\u0026rsquo;t too surprising considering the demographics of the social media platform. The under-18 age group posted about 30 percent of selfies. The older crowd (35+) shared them far less frequently (13 percent). 
Appearance was most popular among all age groups.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELead author Julia Deeb-Swihart says selfies are an identity performance \u0026ndash; meaning that users carefully craft the way they appear online and that selfies are an extension of that. This is similar to William Shakespeare\u0026rsquo;s famous line: \u0026ldquo;All the world\u0026rsquo;s a stage, and all the men and women merely players.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Just like on other social media channels, people project an identity that promotes their wealth, health and physical attractiveness,\u0026rdquo; Deeb-Swihart said. \u0026ldquo;With selfies, we decide how to present ourselves to the audience, and the audience decides how it perceives you.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work is grounded in the theory presented by Erving Goffman in \u003Cem\u003EThe Presentation of Self in Everyday Life.\u003C\/em\u003E The clothes we choose to wear and the social roles we play are all designed to control the version of ourselves we want our peers to see.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Selfies, in a sense, are the blending of our online and offline selves,\u0026rdquo; Deeb-Swihart said. \u0026ldquo;It\u0026rsquo;s a way to prove what is true in your life, or at least what you want people to believe is true.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers gathered the data by searching for \u0026ldquo;#selfie,\u0026rdquo; then used computer vision to confirm that the pictures actually included faces. Nearly half of them didn\u0026rsquo;t. They found plenty of spam with blank images or text. 
The accounts were using the hashtag to show up in more searches to gain more followers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe study, \u0026ldquo;Selfie-Presentation in Everyday Life: A Large-scale Characterization of Selfie Contexts on Instagram,\u0026rdquo; was presented in May at the \u003Ca href=\u0022http:\/\/www.icwsm.org\/2017\/index.php\u0022\u003EInternational AAAI Conference on Web and Social Media\u003C\/a\u003E in Montreal.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EFunding and sponsorship were provided by the U.S. Army Research Office (ARO) and Defense Advanced Research Projects Agency (DARPA) under Contract No. W911NF-12-1-0043. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors. \u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"Study identifies most popular selfies for men and women, age "}],"field_summary":[{"value":"\u003Cp\u003ETo better understand the photographic phenomenon and how people form their identities online, Georgia Institute of Technology researchers combed through 2.5 million selfie posts on Instagram to determine what kinds of identity statements people make by taking and sharing selfies.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"When it comes to selfies, appearance is (almost) everything.  
"}],"uid":"27592","created_gmt":"2017-06-21 14:11:30","changed_gmt":"2017-06-23 13:31:42","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-06-21T00:00:00-04:00","iso_date":"2017-06-21T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"592860":{"id":"592860","type":"image","title":"Most Popular Selfie Type","body":null,"created":"1498056681","gmt_created":"2017-06-21 14:51:21","changed":"1498056681","gmt_changed":"2017-06-21 14:51:21","alt":"","file":{"fid":"225975","name":"Most Popular Selfie Type_GT.jpg","image_path":"\/sites\/default\/files\/images\/Most%20Popular%20Selfie%20Type_GT.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Most%20Popular%20Selfie%20Type_GT.jpg","mime":"image\/jpeg","size":145840,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Most%20Popular%20Selfie%20Type_GT.jpg?itok=pokRmkMl"}},"592861":{"id":"592861","type":"image","title":"Who Shares Selfies the Most","body":null,"created":"1498056727","gmt_created":"2017-06-21 14:52:07","changed":"1498056727","gmt_changed":"2017-06-21 14:52:07","alt":"","file":{"fid":"225976","name":"Who Shares Selfies the Most_GT.jpg","image_path":"\/sites\/default\/files\/images\/Who%20Shares%20Selfies%20the%20Most_GT.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Who%20Shares%20Selfies%20the%20Most_GT.jpg","mime":"image\/jpeg","size":105864,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Who%20Shares%20Selfies%20the%20Most_GT.jpg?itok=hADW8VVm"}}},"media_ids":["592860","592861"],"related_links":[{"url":"https:\/\/public.tableau.com\/views\/SelfieResearch\/Dashboard1?:embed=y\u0026:display_count=no\u0026publish=yes\u0026:showVizHome=no","title":"Interactive Selfie Visualization"}],"groups":[{"id":"50876","name":"School of Interactive 
Computing"},{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"174742","name":"selfies"},{"id":"174743","name":"Julie Deeb-Swihart"}],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJason Maderer\u003Cbr \/\u003E\r\nNational Media Relations\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:maderer@gatech.edu\u0022\u003Emaderer@gatech.edu\u003C\/a\u003E\u003Cbr \/\u003E\r\n404-660-2926\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["maderer@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"592891":{"#nid":"592891","#data":{"type":"news","title":"IC Professor and Student Merge Passions for Golf and CS With Interactive Visualization","body":[{"value":"\u003Cp\u003EOne Georgia Institute of Technology professor and his recently graduated student are merging their passions for golf and computer science to help create an interactive visualization for avid golfers everywhere.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUtilizing available \u0026ldquo;Top 100\u0026rdquo; lists from \u003Cem\u003EGolf\u003C\/em\u003E and \u003Cem\u003EGolf Digest\u003C\/em\u003E magazines, former student \u003Cstrong\u003EJosh Kulas\u003C\/strong\u003E, who graduated in May with a Bachelor of Science in Industrial Engineering, Professor \u003Cstrong\u003EJohn Stasko\u003C\/strong\u003E, director of the \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/gvu\/ii\/\u0022\u003EInformation Interfaces Lab\u003C\/a\u003E in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive 
Computing\u003C\/a\u003E, and current computer science Ph.D. student \u003Cstrong\u003EJohn Thompson\u003C\/strong\u003E created a \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/gvu\/ii\/sportvis\/golfcourses\/\u0022\u003Evisual tool\u003C\/a\u003E to help golfers quantify their experiences playing some of the nation\u0026rsquo;s greatest courses.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECount Kulas and Stasko among that enthusiastic community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKulas played golf regularly in high school with a self-reported handicap of around 6 or 7. He placed second in the county in his senior year in Champaign, Ill., with a personal best score of 73. Stasko, whose dad was a club professional, grew up around the sport. He played in high school and college (Bucknell University), was a member for 17 years at Atlanta\u0026rsquo;s highly-rated East Lake Golf Club, and even won the match play club championship, called the Bobby Jones Memorial tournament, there in 1996.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I haven\u0026rsquo;t played a lot of the overall top 100 courses, as they are mostly private, but I\u0026rsquo;ve played a good number of the top public courses,\u0026rdquo; Stasko said of his experience with courses on the list. \u0026ldquo;I\u0026rsquo;m always looking to play more of these. Augusta National is the true bucket list No. 1 that I dream about playing one day.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKulas came across Stasko in a class during the Spring 2015 semester. Searching for an extracurricular project, Kulas discovered data visualization through a conversation with the professor. 
Having played golf rather seriously in high school, Kulas quickly noticed the golf course background on Stasko\u0026rsquo;s computer during class one day.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I figured that would be a fun place to start,\u0026rdquo; Kulas said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe visualization itself illustrates a composite ranking, pinpoints locations, specifies whether a course is public or private, and lists the number of courses by architect, among other features.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The visualization consolidates a lot of information and gives historical and comparative angles on the data that are difficult to get otherwise,\u0026rdquo; Stasko said. \u0026ldquo;I\u0026rsquo;d actually say that the strength of the visualization is not necessarily in illuminating unexpected information or insights. It\u0026rsquo;s more of a browsing and exploratory aid.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I wanted to make a tool that could answer a variety of questions,\u0026rdquo; Kulas said. \u0026ldquo;What is the highest-ranked course I have played? Where do my favorites stack up? Are there any highly ranked courses near me I can play? How has this course changed in ranking over time? For me, I find it fascinating how things change, or don\u0026rsquo;t change, if you filter for courses built further in the past.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAll of these questions can be answered by the easy-to-use visualization (Kulas has played 20 of the 385 courses listed in the visualization; Stasko has played 40, though a potential trip with some friends to Myrtle Beach, S.C., could add four\u0026nbsp;more).\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESo can other questions golf fans may be eager to know:\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe graphic shows, for example, that only one course in the top 10 of the composite rankings was built after 1935. 
Adjusting the time frame for the display indicates an influx of public courses, as compared to private, in the late 1990s and early 2000s. A list of top architects shows that Tom Fazio, and not Jack Nicklaus, has designed the most ranked courses (46).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe current top-ranked course is Pine Valley, which has been ranked among the top two every year this century. That private course was designed by George Crump in 1918, the only Crump course among the 380 listed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe recently completed 117\u003Csup\u003Eth\u003C\/sup\u003E United States Open tournament was played at Erin Hills in Hartford, Wisc. The relatively new course, which has only been open since 2006, is rated the No. 9 public course in \u003Cem\u003EGolf Digest\u003C\/em\u003E\u0026rsquo;s 2017 ranking, information easily gleaned from the visualization.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Professor John Stasko and former student Josh Kulas created an interactive visualization filled with information about top courses to satisfy thirst for golf knowledge."}],"uid":"33939","created_gmt":"2017-06-22 17:48:48","changed_gmt":"2017-06-22 17:48:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-06-22T00:00:00-04:00","iso_date":"2017-06-22T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"592889":{"id":"592889","type":"image","title":"Golf viz 1","body":null,"created":"1498153032","gmt_created":"2017-06-22 17:37:12","changed":"1498153032","gmt_changed":"2017-06-22 17:37:12","alt":"An interactive golf visualization developed by members of the Information Interfaces lab.","file":{"fid":"225987","name":"Screen Shot 2017-06-22 at 1.35.45 
PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202017-06-22%20at%201.35.45%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202017-06-22%20at%201.35.45%20PM.png","mime":"image\/png","size":506498,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202017-06-22%20at%201.35.45%20PM.png?itok=ZKSEDVuU"}}},"media_ids":["592889"],"related_links":[{"url":"http:\/\/www.cc.gatech.edu\/gvu\/ii\/sportvis\/golfcourses\/","title":"Top 100 Golf Courses in the U.S."}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"8862","name":"Student Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"172919","name":"Information Interfaces Group"},{"id":"11632","name":"john stasko"},{"id":"174751","name":"john thompson"},{"id":"174752","name":"josh kulas"},{"id":"125811","name":"golf courses"},{"id":"166848","name":"School of Interactive Computing"},{"id":"172922","name":"information visualization"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"592693":{"#nid":"592693","#data":{"type":"news","title":"Assistant Professor Devi Parikh Earns IJCAI Computers and Thought Award","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Assistant Professor \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E was named recipient of 
the 2017 \u003Ca href=\u0022https:\/\/www.ijcai.org\/awards\u0022\u003EInternational Joint Conferences on Artificial Intelligence Computers and Thought Award\u003C\/a\u003E, which is considered to be the premier award for artificial intelligence researchers under the age of 35.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe was selected by the IJCAI-17 Awards Selection Committee for her contributions at the intersection of words, pictures, and common sense. This includes semantic image understanding, the use of visual attributes for human-machine collaboration and visual abstractions for learning common sense, and enabling humans to interact with visual content via natural language.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EParikh joins a particularly exclusive list of 27 AI visionaries who have received the award since 1971, including Terry Winograd, David Marr and Tom Mitchell in the early days and Stuart Russell, Daphne Koller, Carlos Guestrin, and Andrew Ng more recently.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EParikh said that she is excited about the recognition that her lab\u0026rsquo;s work in visual question answering (VQA) is getting.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Through making our large datasets and systems publicly available, we have enabled research groups around the world to make significant progress on building machines that can automatically answer questions about visual content,\u0026rdquo; Parikh said. 
\u0026ldquo;This has applications in any scenario where it is difficult, if not impossible, for someone to sift through visual data to elicit the information they need, be it aiding visually-impaired users, users on low-bandwidth networks that cannot support visual data, or assisting analysts in making decisions based on large quantities of visual feeds.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It has been rewarding to play a role in the creation of an entirely new sub-field of scientific endeavor in artificial intelligence and witness the research community rally around VQA.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis is one of a number of awards that Parikh has earned in recent months. She earned a \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/news\/588083\/pair-ic-assistant-professors-earn-awards-research-explainable-intelligent-systems-and\u0022\u003EGoogle Research Faculty Award\u003C\/a\u003E, an \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/news\/586463\/amazon-research-awards-fund-computer-vision-and-machine-learning-projects\u0022\u003EAmazon Academic Research Award\u003C\/a\u003E, and was featured last week in \u003Cem\u003EForbes\u003C\/em\u003E magazine as one of a handful of \u003Ca href=\u0022https:\/\/www.forbes.com\/sites\/mariyayao\/2017\/05\/18\/meet-20-incredible-women-advancing-a-i-research\/2\/#cee2a6e4edee\u0022\u003Ewomen advancing artificial intelligence research\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Computers and Thought Award is considered to be the premier award for AI researchers under the age of 35."}],"uid":"33939","created_gmt":"2017-06-14 13:57:36","changed_gmt":"2017-06-14 13:57:36","author":"David 
Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-06-14T00:00:00-04:00","iso_date":"2017-06-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"586462":{"id":"586462","type":"image","title":"Devi Parikh","body":null,"created":"1485377735","gmt_created":"2017-01-25 20:55:35","changed":"1485377735","gmt_changed":"2017-01-25 20:55:35","alt":"","file":{"fid":"223510","name":"Devi Parikh.jpg","image_path":"\/sites\/default\/files\/images\/Devi%20Parikh.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Devi%20Parikh.jpg","mime":"image\/jpeg","size":62731,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Devi%20Parikh.jpg?itok=xv_daXjq"}}},"media_ids":["586462"],"related_links":[{"url":"https:\/\/www.ijcai.org\/awards","title":"IJCAI Computers and Thought Award"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"174685","name":"computers and thought award"},{"id":"173616","name":"devi parikh"},{"id":"2556","name":"artificial intelligence"},{"id":"166848","name":"School of Interactive Computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"592692":{"#nid":"592692","#data":{"type":"news","title":"IC\u0027s Lauren Wilcox and Neha Kumar Selected For ACM Future of Computing 
Academy","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Assistant Professors \u003Cstrong\u003ELauren Wilcox\u003C\/strong\u003E and \u003Cstrong\u003ENeha Kumar\u003C\/strong\u003E were both selected to the inaugural class of the \u003Ca href=\u0022https:\/\/www.acm.org\/fca\u0022\u003EAssociation for Computing Machinery Future of Computing Academy\u003C\/a\u003E (ACM-FCA).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ACM-FCA is a new initiative created by ACM to support and foster the next generation of computing professionals. It enables young researchers, practitioners, educators, and entrepreneurs to develop a coherent and influential voice that addresses challenging issues facing the field and society in general.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ACM website characterizes selection to the ACM-FCA as a commitment, not an award. It notes that \u0026ldquo;members of the Academy are expected to engage in activity for the benefit of the next generation of computing professionals.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EACM-FCA members are invited to attend ACM\u0026rsquo;s celebration of 50 years of the ACM Turing Award on June 23-24 at the Westin St. Francis in San Francisco. The inaugural meeting of the ACM-FCA will be on June 25, also in San Francisco.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWilcox, whose research focuses on enabling people to cultivate a more informed relationship with their health through human-centered technology, said she is thrilled to join the Academy.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It provides an opportunity to work with other researchers, scholars, and computing professionals in many different areas of computing,\u0026rdquo; she said. 
\u0026ldquo;We have different ideas about what the future of computing looks like, but a common goal of creating a shared vision for that future.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKumar, who shares a joint appointment with the \u003Ca href=\u0022https:\/\/inta.gatech.edu\/\u0022\u003ESam Nunn School of International Affairs\u003C\/a\u003E and conducts research at the intersection of human-computer interaction and global development, said she is excited to be a part of this group.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;m most excited about joining the Academy to learn the incredible things that all the other inaugural members are working towards,\u0026rdquo; she said. \u0026ldquo;Even from the one conference call we\u0026rsquo;ve had, it\u0026rsquo;s clear that everyone is driven and ready to change the world in their own unique ways.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMembers of the inaugural class represent 19 different countries: Morocco, Pakistan, India, the United States, the Netherlands, Egypt, Germany, Colombia, the United Kingdom, Italy, Canada, China, Denmark, Bangladesh, Turkey, Republic of Korea, Vietnam, Israel, and Ukraine.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The IC assistant professors were named to the inaugural class of the ACM-FCA, which aims to foster the next generation of computing professionals."}],"uid":"33939","created_gmt":"2017-06-14 13:33:01","changed_gmt":"2017-06-14 13:33:01","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-06-14T00:00:00-04:00","iso_date":"2017-06-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"507851":{"id":"507851","type":"image","title":"Neha Kumar","body":null,"created":"1457114400","gmt_created":"2016-03-04 18:00:00","changed":"1475895270","gmt_changed":"2016-10-08 
02:54:30","alt":"Neha Kumar","file":{"fid":"204902","name":"neha.jpeg","image_path":"\/sites\/default\/files\/images\/neha_0.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/neha_0.jpeg","mime":"image\/jpeg","size":52721,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/neha_0.jpeg?itok=ay7TDLWk"}},"356651":{"id":"356651","type":"image","title":"Lauren Wilcox compressed","body":null,"created":"1449245762","gmt_created":"2015-12-04 16:16:02","changed":"1475895089","gmt_changed":"2016-10-08 02:51:29","alt":"Lauren Wilcox compressed","file":{"fid":"201398","name":"lauren-wilcox.jpg","image_path":"\/sites\/default\/files\/images\/lauren-wilcox_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/lauren-wilcox_0.jpg","mime":"image\/jpeg","size":17379,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/lauren-wilcox_0.jpg?itok=fAP3OmK3"}}},"media_ids":["507851","356651"],"related_links":[{"url":"https:\/\/www.acm.org\/fca","title":"ACM Future of Computing Academy"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"138871","name":"Neha Kumar"},{"id":"109121","name":"Lauren Wilcox"},{"id":"166848","name":"School of Interactive Computing"},{"id":"3047","name":"ACM"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"592661":{"#nid":"592661","#data":{"type":"news","title":"Autonomous Driving Research Collaboration gets a Boost from Qualcomm","body":[{"value":"\u003Cp\u003EA team of Georgia Tech researchers headed up by School of Aerospace Engineering professor\u0026nbsp;\u003Cstrong\u003EEvangelos Theodorou\u003C\/strong\u003E\u0026nbsp;and School of Interactive Computing professor\u0026nbsp;\u003Cstrong\u003EJames Rehg\u003C\/strong\u003E\u0026nbsp;has been awarded a $100,000 fellowship by\u0026nbsp;\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.qualcomm.com\/invention\/research\/university-relations\/innovation-fellowship\/2017-us\u0022\u003EQualcomm\u003C\/a\u003E\u003C\/strong\u003E\u0026nbsp;for its proposal,\u0026nbsp;\u003Cem\u003E\u003Cstrong\u003E\u0026ldquo;Autonomous Racing Using Deep Learning and Game Theoretic Optimization.\u0026rdquo;\u003C\/strong\u003E\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe GT proposal is one of eight nationwide that were chosen for the 2017 fellowship, which also includes a one-year mentorship by Qualcomm engineers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETheodorou says the innovation fellowship will help him, Rehg, and graduate students\u0026nbsp;\u003Cstrong\u003EGrady Williams\u0026nbsp;\u003C\/strong\u003E(College of Computing)\u003Cstrong\u003E\u0026nbsp;\u003C\/strong\u003Eand\u0026nbsp;\u003Cstrong\u003EPaul Drews\u003C\/strong\u003E\u0026nbsp;(School of Electrical and Computer Engineering) to bring their research to a place where it will have a transformative impact in the transportation industry.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Autonomous driving is one of the most important sub-fields in robotics,\u0026rdquo; said Theodorou. 
\u0026ldquo;However, autonomous vehicles driving hundreds of millions of miles are likely to get into situations where it is necessary for them to perform aggressive maneuvers to avoid collision. Our work can have an impact on that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team\u0026rsquo;s work focuses on the problems faced by two or more autonomous racing vehicles in an environment that has not been previously mapped out. Potholes, bumps, and other irregularities are expected, but cannot be precisely predicted at the outset. Any system seeking to travel over such terrain must be able to make new decisions on the fly. Each racing vehicle is necessarily pushed to its handling\/acceleration limits, a condition that requires even more simultaneous sensing of the environment and other intelligent agents.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u0026ldquo;There is only a small margin of error on both the control and perception side when racing against a capable adversary,\u0026rdquo; said Theodorou. \u0026ldquo;This research will address fundamental questions in autonomy by\u0026nbsp;bringing together concepts from\u0026nbsp;stochastic optimal control, game theory and deep learning.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A team of researchers from the Schools of Aerospace Engineering and Interactive Computing has received a $100K grant to further its work on autonomous driving"}],"uid":"33939","created_gmt":"2017-06-13 17:44:23","changed_gmt":"2017-06-13 17:44:23","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-06-13T00:00:00-04:00","iso_date":"2017-06-13T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"592616":{"id":"592616","type":"image","title":"Prof. 
Evangelos Theodorou","body":null,"created":"1497282894","gmt_created":"2017-06-12 15:54:54","changed":"1497282894","gmt_changed":"2017-06-12 15:54:54","alt":"Prof. Evangelos Theodorou","file":{"fid":"225866","name":"Theodoru-300.jpg","image_path":"\/sites\/default\/files\/images\/Theodoru-300_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Theodoru-300_0.jpg","mime":"image\/jpeg","size":99908,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Theodoru-300_0.jpg?itok=z3TxRXzb"}},"349611":{"id":"349611","type":"image","title":"James Rehg compressed","body":null,"created":"1449245696","gmt_created":"2015-12-04 16:14:56","changed":"1475895073","gmt_changed":"2016-10-08 02:51:13","alt":"James Rehg compressed","file":{"fid":"201042","name":"james-rehg.jpg","image_path":"\/sites\/default\/files\/images\/james-rehg_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/james-rehg_0.jpg","mime":"image\/jpeg","size":13397,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/james-rehg_0.jpg?itok=JO3X7HX6"}}},"media_ids":["592616","349611"],"related_links":[{"url":"https:\/\/www.qualcomm.com\/invention\/research\/university-relations\/innovation-fellowship\/2017-us","title":"Qualcomm"},{"url":"http:\/\/acds-lab.gatech.edu\/","title":"Autonomous Control \u0026 Decisions Systems Lab"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"152","name":"Robotics"}],"keywords":[{"id":"174666","name":"autonomous driving"},{"id":"667","name":"robotics"},{"id":"2082","name":"aerospace engineering"},{"id":"133251","name":"Evangelos Theodorou"},{"id":"14419","name":"jim 
rehg"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"591571":{"#nid":"591571","#data":{"type":"news","title":"New Georgia Tech Research May Help Combat Abusive Online Comments","body":[{"value":"\u003Cp\u003EResearchers at the Georgia Institute of Technology\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E have come up with a novel computational approach that could provide a more cost- and resource-effective way for internet communities to moderate abusive content.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey call it the \u003Cem\u003EBag of Communities \u003C\/em\u003E(BoC), a technique that leverages large-scale, preexisting data from other internet communities to train an algorithm to identify abusive behavior within a separate target community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESpecifically, they identified nine different communities. 
Five, such as 4chan, the free-for-all of internet communities, are rife with abusive behavior from commenters; four, like the heavily moderated MetaFilter, are helpful, positive, and supportive.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing linguistic characteristics from these two types of communities, researchers built an algorithm that learns from the comments and, when a new post is generated within a target community, predicts whether or not it is abusive.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;MetaFilter is known around the internet as a good, helpful, supportive community,\u0026rdquo; said \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/people\/eric-gilbert\u0022\u003EEric Gilbert\u003C\/a\u003E, an associate professor in the School of Interactive Computing and a member of the team of researchers on the project. \u0026ldquo;That\u0026rsquo;s an example of how, if your post is closer to that, it\u0026rsquo;s more likely that it should stay on the site. Conversely, if your post is closer to 4chan, then maybe it should come off.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers provide two algorithms. One is a static model, off the shelf with no training examples from the target community, and can achieve roughly 75 percent accuracy. 
In other words, with access only to posts from the other nine communities, the algorithm can accurately predict abusive posts in the target community roughly three quarters of the time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A new community that does not have enough resources to actually build automated algorithms to detect abusive content could use the static model,\u0026rdquo; said Georgia Tech doctoral student \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/~eshwar3\/\u0022\u003EEshwar Chandrasekharan\u003C\/a\u003E, who led the team.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA dynamic model, one that mimics scenarios in which newly moderated data arrives in batches, learns over time and can achieve 91.18 percent accuracy after seeing 100,000 human-moderated posts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Over time, as new moderator labels come in, when it has seen examples of things that have been moderated from the site, it can learn more site-specific information,\u0026rdquo; Chandrasekharan said. \u0026ldquo;It can learn the type of comments that get moderated, and if there is a level of tolerance that is different from what you see in the static model, it could learn that over time.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBoth the static and dynamic models outperformed a solely in-domain model from a major internet community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnyone who has managed an online community has encountered problems with abusive content from users. From social media to message boards to comments sections in online news publications, regulating what is and isn\u0026rsquo;t allowed has become overly costly and taxing on existing human moderators.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFounders at social media startup Yik Yak spent months of their early time removing hate speech, and Twitter has stated publicly that dealing with abusive behavior remains its most pressing challenge. 
A number of major news agencies are buried under the demands of strict moderation, and many have shut down comments sections altogether.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrior research into abuse detection and online content moderation has focused on in-domain methods \u0026ndash; using data collected from within your own community \u0026ndash; but those face challenges in obtaining enough data to build and evaluate algorithms. In a BoC-based method, algorithms would leverage out-of-domain data from other existing online communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGilbert said that the applications from such a model could be widespread.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is a core internet problem,\u0026rdquo; he said. \u0026ldquo;So many places struggle with this, and many are shutting comments off because they just don\u0026rsquo;t want to deal with the trouble they cause.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis research is presented in a paper (\u003Cem\u003EThe Bag of Communities: Identifying Abusive Behavior Online with Preexisting Internet Data\u003C\/em\u003E) at the \u003Ca href=\u0022https:\/\/chi2017.acm.org\/papers.html\u0022\u003EAssociation for Computing Machinery CHI Conference on Human Factors in Computing Systems 2017\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Researchers at Georgia Tech have found a more cost-effective way for internet communities to moderate abusive content."}],"uid":"33939","created_gmt":"2017-05-09 17:26:06","changed_gmt":"2017-05-09 21:03:03","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-05-09T00:00:00-04:00","iso_date":"2017-05-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"591570":{"id":"591570","type":"image","title":"Cyberabuse 
hand","body":null,"created":"1494350522","gmt_created":"2017-05-09 17:22:02","changed":"1494350522","gmt_changed":"2017-05-09 17:22:02","alt":"New Georgia Tech Research May Help Combat Abusive Online Comments","file":{"fid":"225462","name":"CHI.png","image_path":"\/sites\/default\/files\/images\/CHI.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/CHI.png","mime":"image\/png","size":317173,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/CHI.png?itok=MeX4sECE"}}},"media_ids":["591570"],"related_links":[{"url":"http:\/\/www.chi.gatech.edu\/2017\/","title":"Georgia Tech @ CHI 2017"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"8862","name":"Student Research"},{"id":"135","name":"Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"174386","name":"cyberabuse"},{"id":"174387","name":"online abuse"},{"id":"174388","name":"cyberbullying"},{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"174389","name":"conference on human factors in computing systems"},{"id":"1027","name":"chi"},{"id":"13342","name":"Eric Gilbert"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"},{"id":"71901","name":"Society and Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"591306":{"#nid":"591306","#data":{"type":"news","title":"IPaT In-Depth Spotlight: 
Gheric Speiginer","body":[{"value":"\u003Cp\u003EGheric Speiginer is a Ph.D. student in Human-Centered Computing at Georgia Tech, advised by Blair MacIntyre, professor in the School of Interactive Computing. The southern California native received his undergraduate degree in Computer Science at Hampton University. Speiginer is interested in exploring novel user interfaces and interaction techniques, particularly those that exploit the unique capabilities of augmented reality.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003EWhat are you currently researching?\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nMy focus is in augmented reality (AR). One aspect of my research is developing the software tools and semantics necessary to express the rich AR content that is envisioned by AR content designers. The other aspect of it is developing software abstractions and architectures that enable the use of multiple AR apps at the same time in the same space.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003EHow did you become interested in augmented reality?\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nI sort of stumbled into it. In undergrad, I did an internship at Brown University with a professor in the robotics department. I had noticed these strange black and white images that were placed around the room, which I\u0026rsquo;d never seen before. So I asked about them, and I found out they were \u0026quot;markers\u0026quot; which were used as part of a computer vision tracking system, and then I started researching more about computer vision on my own. I found out that these kinds of \u0026quot;markers\u0026quot; were also used in certain augmented reality toolkits, and that led me to start researching more into augmented reality. Eventually I decided to start experimenting with AR in my dorm room, just for fun. 
I had an idea to combine several projects I learned about, and I didn\u0026rsquo;t have all of the same equipment, but I basically found a different way to do it using some open source computer vision software. Through that, I ended up learning more about augmented reality, and every time I would research stuff online I kept seeing Georgia Tech over and over again, especially papers by Blair MacIntyre. It was at that point that I realized Georgia Tech would be a great fit for grad school.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHow has your experience been at Georgia Tech?\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nIt\u0026rsquo;s been really cool being exposed to all sorts of interesting projects here at Georgia Tech. Everybody\u0026rsquo;s brilliant and I\u0026rsquo;ve been able to have all sorts of opportunities with some of the leading researchers in the field. It\u0026rsquo;s just been really amazing.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\n\u003Cstrong\u003EWhat are your plans after graduation?\u003C\/strong\u003E\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nI\u0026rsquo;ve considered academia, but I\u0026rsquo;m definitely leaning more towards industry. On the one hand I do enjoy teaching and tutoring and I\u0026rsquo;ve done a lot of that in the past, so I could see myself doing some part time teaching. But I\u0026rsquo;ll probably go into industry first and perhaps eventually become a consultant and do something more entrepreneurial. 
I\u0026#39;ve also become increasingly interested in alternate (post-scarcity) economic systems in the last several years, so another thing I will definitely want to explore after I graduate is how we can use technology to introduce and facilitate new ways of living and working together as a society.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Gheric Speiginer is interested in exploring novel user interfaces and interaction techniques, particularly those that exploit the unique capabilities of augmented reality."}],"uid":"33939","created_gmt":"2017-05-03 20:26:02","changed_gmt":"2017-05-03 20:26:02","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-05-03T00:00:00-04:00","iso_date":"2017-05-03T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"591305":{"id":"591305","type":"image","title":"Gheric Speiginer","body":null,"created":"1493842827","gmt_created":"2017-05-03 20:20:27","changed":"1493842827","gmt_changed":"2017-05-03 20:20:27","alt":"Gheric Speiginer displays his augmented reality interface that shows how robots are functioning.","file":{"fid":"225319","name":"gheric_0.png","image_path":"\/sites\/default\/files\/images\/gheric_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/gheric_0.png","mime":"image\/png","size":884063,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/gheric_0.png?itok=WHnAzC5b"}}},"media_ids":["591305"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"174330","name":"Gheric Speiginer"},{"id":"1597","name":"Augmented Reality"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"},{"id":"1600","name":"Blair 
MacIntyre"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlyson Powell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"591108":{"#nid":"591108","#data":{"type":"news","title":"How Do You Perform CPR? This Device Will Teach You","body":"","field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ERead more here:\u0026nbsp;\u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2017\/03\/14\/how-do-you-perform-cpr-device-will-teach-you\u0022\u003Ehttp:\/\/www.news.gatech.edu\/2017\/03\/14\/how-do-you-perform-cpr-device-will-teach-you\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech students built CPR+, a CPR mask with LED lights that offers user feedback throughout the resuscitation process, in the GVU Prototyping Lab. 
The device is one of six inventions competing for Georgia Tech\u2019s 2017 InVenture Prize."}],"uid":"28466","created_gmt":"2017-04-28 17:19:02","changed_gmt":"2017-04-28 17:19:02","author":"Meghana Melkote","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-03-14T00:00:00-04:00","iso_date":"2017-03-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"591088":{"#nid":"591088","#data":{"type":"news","title":"Controlling a Robot is Now as Simple as Point and Click","body":[{"value":"\u003Cp\u003EThe traditional interface for remotely operating robots works just fine for roboticists. They use a computer screen and mouse to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut for someone who isn\u0026rsquo;t an expert, the ring-and-arrow system is cumbersome and error-prone. It\u0026rsquo;s not ideal, for example, for older people trying to control assistive robots at home.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient and doesn\u0026rsquo;t require significant training time. The user simply points and clicks on an item, then chooses a grasp. 
The robot does the rest of the work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we\u0026rsquo;ve shortened the process to just two clicks,\u0026rdquo; said Sonia Chernova, the Georgia Tech assistant professor in robotics who advised the research effort.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHer team tested college students on both systems, and found that the point-and-click method resulted in significantly fewer errors, allowing participants to perform tasks more quickly and reliably than using the traditional method.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Roboticists design machines for specific tasks, then often turn them over to people who know less about how to control them,\u0026rdquo; said David Kent, the Georgia Tech Ph.D. robotics student who led the project. \u0026ldquo;Most people would have a hard time turning virtual dials if they needed a robot to grab their medicine. But pointing and clicking on the bottle? That\u0026rsquo;s much easier.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe traditional ring-and-arrow system is a split-screen method. The first screen shows the robot and the scene; the second is a 3-D, interactive view where the user adjusts the virtual gripper and tells the robot exactly where to go and grab. This technique makes no use of scene information, giving operators a maximum level of control and flexibility. But this freedom and the size of the workspace can become a burden and increase the number of errors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe point-and-click format doesn\u0026rsquo;t include 3-D mapping. It only provides the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an item, the robot\u0026rsquo;s perception algorithm analyzes the object\u0026rsquo;s 3-D surface geometry to determine where the gripper should be placed. 
It\u0026rsquo;s similar to what we do when we put our fingers in the correct locations to grab something. The computer then suggests a few grasps. The user decides, putting the robot to work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can\u0026rsquo;t see, such as the back of a bottle,\u0026rdquo; said Chernova. \u0026ldquo;Our brains do this on their own \u0026mdash; we correctly predict that the back of a bottle cap is as round as what we can see in the front. In this work, we are leveraging the robot\u0026rsquo;s ability to do the same thing to make it possible to simply tell the robot which object you want to be picked up.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy analyzing data and recommending where to place the gripper, the burden shifts from the user to the algorithm, which reduces mistakes. During a study, college students performed a task about two minutes faster using the new method vs. the traditional interface. The point-and-click method also resulted in approximately one mistake per task, compared to nearly four for the ring-and-arrow technique.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn addition to assistive robots in homes, the researchers see applications in search-and-rescue operations and space exploration. 
The\u0026nbsp;\u003Ca href=\u0022https:\/\/github.com\/gt-rail\/remote_manipulation_markers\u0022\u003Einterface has been released\u003C\/a\u003E\u0026nbsp;as\u0026nbsp;\u003Ca href=\u0022https:\/\/github.com\/gt-rail\/rail_agile_grasp\u0022\u003Eopen-source software\u003C\/a\u003E\u0026nbsp;and was presented in Vienna, Austria, March 6-9 at the\u0026nbsp;\u003Ca href=\u0022http:\/\/humanrobotinteraction.org\/2017\/\u0022\u003E2017 Conference on Human-Robot\u003C\/a\u003E\u0026nbsp;Interaction (HRI2017).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EThe study is partially supported by a National Science Foundation Fellowship (\u003C\/em\u003EIIS 13-17775\u003Cem\u003E) and the Office of Naval Research (N000141410795). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.\u003C\/em\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe traditional interface for remotely operating robots works just fine for roboticists. They use a computer screen and mouse to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task. But for someone who isn\u0026rsquo;t an expert, the ring-and-arrow system is cumbersome and error-prone. It\u0026rsquo;s not ideal, for example, for older people trying to control assistive robots at home.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient and doesn\u0026rsquo;t require significant training time. The user simply points and clicks on an item, then chooses a grasp. 
The robot does the rest of the work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERead more here:\u0026nbsp;\u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/news\/590819\/controlling-robot-now-simple-point-and-click\u0022\u003Ehttp:\/\/www.cc.gatech.edu\/news\/590819\/controlling-robot-now-simple-point-and-click\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A new interface designed by Georgia Institute of Technology researchers is much simpler, more efficient and doesn\u2019t require significant training time. The user simply points and clicks on an item, then chooses a grasp. The robot does the rest of the work."}],"uid":"28466","created_gmt":"2017-04-28 16:51:53","changed_gmt":"2017-04-28 16:52:28","author":"Meghana Melkote","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-04-24T00:00:00-04:00","iso_date":"2017-04-24T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"590637":{"#nid":"590637","#data":{"type":"news","title":"Interactive Visualization Illustrates Uncertainty of NFL Draft","body":[{"value":"\u003Cp\u003ENext week, 253 players will hear their names called over the course of three days in the 2017 NFL Draft. For many, it will be the beginning of a long and lucrative career in professional football. 
For most, it will be the highlight in an increasingly competitive business.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAn \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/gvu\/ii\/sportvis\/nfldraft\/run\/\u0022\u003Einteractive visualization\u003C\/a\u003E created by a team of researchers in the Georgia Institute of Technology\u0026rsquo;s School of Interactive Computing illustrates just how fleeting the career of a professional football player can be and how difficult it can be for teams to differentiate between the superstars and the busts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe visualization, which catalogues each of the 32 teams\u0026rsquo; draft picks from 2007-16, indicates with a green icon a player who is currently active on the team that drafted him. A blue icon indicates a player still in the league, but playing on a different team, and a red icon indicates a player who is no longer active in the NFL.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA quick glance at all 32 teams\u0026rsquo; charts presents a healthy dose of red in comparison to the green and blue, illustrating the brevity of the average NFL career. An analysis has shown that the average length of a career decreased by about two years, from 4.99 years to 2.66, from 2008-14.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOnly one team, the Carolina Panthers, has more than one player still active on its roster from its 2007 draft.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFrom a team perspective, the ebb and flow of a given franchise\u0026rsquo;s success can be traced within the colors of the visualization.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Atlanta Falcons, owners of an 11-5 record and a near Super Bowl championship this past season, have experienced their fair share. 
After a 13-3 season in 2012, their third straight season of double-digit wins, they surprised many by slipping to four, six, and eight victories over the next three years, missing the playoffs in each.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe visualization, however, shows why it probably shouldn\u0026rsquo;t have come as such a surprise. The Falcons drafted just three players from 2007-12 that are currently on their roster. Since 2013, however, the mass of red icons has turned to green, as the team has hit on 21 of 30 picks.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003ECheck out\u0026nbsp;highlights from a \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/content\/highlights-information-interfaces-nfl-draft-visualization\u0022\u003Ehandful of other teams\u003C\/a\u003E in the NFL.\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EAlso evident in the visualization is which teams have seen relative success in the draft in comparison to others, as well as how that draft success has correlated to improved returns on the field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOn one hand, there are the Houston Texans, who have seen their average wins per season increase from 5.33 over the course of their first seven years of existence to 8.22 in the nine years since. That increase in wins coincides with a string of nine straight hits in the first round of the draft, shown in the visualization by a green icon in the first-round column for each year from 2008-16.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EComparatively, the consistently unsuccessful Cleveland Browns display just five first-round green icons since 2007, three of which have come in the past two years. 
They have just three in the second round, none coming before 2014.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn addition to the player\u0026rsquo;s league status, active or inactive, the visualization allows the user to toggle to two other categories: Games started and approximate value.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u0026ldquo;games started\u0026rdquo; option indicates much of what you would expect \u0026ndash; that players taken earlier in the draft see the field more often \u0026ndash; but also indicates which teams have had the most success in finding the proverbial diamonds in the rough.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Seattle Seahawks, for example, found much of the talent that led them to back-to-back Super Bowl appearances in 2013-14 in the later rounds of the 2010-11 drafts. Kam Chancellor, K.J. Wright, and Richard Sherman, who help form the nucleus of Seattle\u0026rsquo;s stingy defense, were taken in the fourth and fifth rounds but are colored orange to indicate 65-99 NFL starts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team of researchers includes undergraduate student Se Yeon Kim, graduate student Sakshi Pratap, and School of Interactive Computing Professor John Stasko. Stasko is the director of the \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/gvu\/ii\/\u0022\u003EInformation Interfaces Research Group\u003C\/a\u003E, whose mission is to help people take advantage of information to enrich their lives by creating information visualizations and visual analytics tools to help analyze and understand large data sets.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInformation for the visualization was compiled from NFL.com\u0026#39;s \u003Ca href=\u0022http:\/\/www.nfl.com\/draft\/history\/fulldraft?type=team\u0022\u003Edraft history\u003C\/a\u003E and \u003Ca href=\u0022http:\/\/www.nfl.com\/players\u0022\u003Eplayer\u003C\/a\u003E pages. 
Games started and approximate value were taken from \u003Ca href=\u0022http:\/\/www.pro-football-reference.com\/\u0022\u003EPro Football Reference\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFollow the link for a look at the \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/gvu\/ii\/sportvis\/nfldraft\/\u0022\u003Eproject page\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"An interactive visualization created by the Information Interfaces Research Group shows just how few draftees make it long-term in the NFL."}],"uid":"33939","created_gmt":"2017-04-19 16:56:25","changed_gmt":"2017-04-26 14:03:47","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-04-19T00:00:00-04:00","iso_date":"2017-04-19T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"590625":{"id":"590625","type":"image","title":"Atlanta Falcons Interactive Draft Vis","body":null,"created":"1492616251","gmt_created":"2017-04-19 15:37:31","changed":"1492616251","gmt_changed":"2017-04-19 15:37:31","alt":"Atlanta Falcons Interactive Draft Visualization","file":{"fid":"225019","name":"Atlanta Falcons.png","image_path":"\/sites\/default\/files\/images\/Atlanta%20Falcons.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Atlanta%20Falcons.png","mime":"image\/png","size":268340,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Atlanta%20Falcons.png?itok=mtJS8xOt"}}},"media_ids":["590625"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"8862","name":"Student Research"}],"keywords":[{"id":"166848","name":"School of Interactive Computing"},{"id":"11632","name":"john stasko"},{"id":"172919","name":"Information Interfaces 
Group"},{"id":"654","name":"College of Computing"},{"id":"174093","name":"NFL Draft"},{"id":"12397","name":"Atlanta Falcons"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71901","name":"Society and Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"590849":{"#nid":"590849","#data":{"type":"news","title":"GT Computing Moving the Needle Forward in Autism Research","body":[{"value":"\u003Cp\u003EWhen neuroimaging took gigantic leaps forward in the 1970s and 80s with the introduction of magnetic resonance imaging (MRI) and computed tomography (CT), it was a sign of just how closely advances in medicine or diagnostics correlate to the technological advances within the field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESuddenly, researchers were able to more safely observe and document the brain in live subjects, opening them to a world of study that was previously unattainable. There was a massive increase in understanding about things like medical conditions or effects of alcohol and drugs on the brain.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat kind of technological advancement, one that drastically moves the needle of study in the field forward, hasn\u0026rsquo;t been as prominent in the field of behavioral psychology. 
It\u0026rsquo;s a challenge that many researchers in the Georgia Institute of Technology\u0026rsquo;s School of Interactive Computing (IC) are trying to overcome.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The tools that exist today in neuroimaging as compared to 50 or 60 years ago, there\u0026rsquo;s just no comparison,\u0026rdquo; IC Professor \u003Cstrong\u003EJim Rehg\u003C\/strong\u003E said. \u0026ldquo;People are looking at resolutions or structures in the brain that weren\u0026rsquo;t even on the map 50 years ago. We\u0026rsquo;re just trying to bring the behavioral measurement forward in the same way that is already happening for imaging and genetics.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERehg is one of a number of faculty focusing their efforts on developing new computational analysis tools to measure behavior. A key goal of the work is to improve understanding of Autism Spectrum Disorder, a complex group of disorders of brain development characterized by repetitive behaviors and difficulties in social interaction and communication.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe began collaborating with Professor \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E and senior research scientist \u003Cstrong\u003EAgata Rozga\u003C\/strong\u003E, among others, during a five-year National Science Foundation Expeditions in Computing grant and has subsequently continued his work with Rozga under a grant from the Simons Foundation. 
The former grant was instrumental in setting up the \u003Ca href=\u0022http:\/\/www.childstudylab.gatech.edu\/\u0022\u003EChild Study Lab\u003C\/a\u003E at Georgia Tech, which studies early social, communication, and play behavior in children, including those with autism.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003ETracking Problem Behaviors With Technology\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EMore recently, Rozga, the director of the lab, received \u0026ndash; along with Associate Professor \u003Cstrong\u003EThomas Ploetz\u003C\/strong\u003E and Dr. \u003Cstrong\u003ENathan Call\u003C\/strong\u003E of the \u003Ca href=\u0022http:\/\/www.marcus.org\/\u0022\u003EMarcus Autism Center\u003C\/a\u003E \u0026ndash; an NIH R21 grant for a project titled \u003Cem\u003EObjective Measurement of Challenging Behaviors in Individuals with Autism Spectrum Disorder\u003C\/em\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe latter research deals with problem behaviors exhibited by individuals with autism. Call, who is the director of \u003Ca href=\u0022http:\/\/www.marcus.org\/About-Us\/For-Professionals\/~\/media\/Marcus\/Documents\/About-Marcus\/FactSheet-BehaviorTreatment.pdf\u0022\u003EBehavior Treatment Clinics\u003C\/a\u003E at the Marcus Autism Center, described the challenge the research is aiming to address.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Individuals with autism and other developmental disorders are more likely to exhibit problem behaviors like self-injury, pica, or property destruction,\u0026rdquo; he said. \u0026ldquo;Behavioral interventions exist, and can be very effective, but there are a few barriers. 
Data collection on the behavior is a key ingredient, but is most often done by a human observer, which is expensive, has the potential for reactivity, doesn\u0026rsquo;t work for covert behaviors, cannot always provide a good estimate of severity, and may not always be accurate.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project involves the use of accelerometers and machine learning to develop a measurement system that will detect and differentiate between different types of problem behavior in a way that addresses each of those challenges.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the best of circumstances, such as a research or clinical setting, videos can be recorded and research assistants can go through the videos to find moments where the child engages in some type of behavior. Currently, that is the standard. As Rozga said, though, that is not something that scales to large samples or allows you to study behaviors outside the strictures of a research setting.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe approach, then, is to combine currently available wearable technology with computational analysis to see whether that might be used to advance the state of the art.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing sensors attached to the wrists and ankles, the team records movement data from the individual.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;From a technical point of view, we want to know whether we can see when an activity starts, when it ends, and of what nature that activity actually was,\u0026rdquo; said Ploetz, who has worked with Rozga in the past and joined the IC faculty in February of this year. 
\u0026ldquo;An automated recognition of problem behaviors is a substantial challenge that involves capturing through sensors and analyzing through machine learning-based assessment techniques.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe hope is that they can build statistical models that can analyze data streams and automatically pick out which kinds of activities or problem behaviors an individual engages in at a given time, as well as their frequency and intensity.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One of the things that Dr. Call said was a clinically-relevant measure they have not been able to gather is severity of the problem behavior,\u0026rdquo; Rozga said. \u0026ldquo;It\u0026rsquo;s hard to get two people to agree on any rating scale. We had this moment where we said, \u0026lsquo;You know, that information is already in the signal.\u0026rsquo; If you look at the amplitude at the moment of impact, we have potentially a signal there that can speak to the intensity, or severity, of the behavior. What other things can you measure if you had access to this new measure?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFurther, and most importantly in the early stages, can these models measure with accuracy comparable to \u0026ldquo;ground truth\u0026rdquo; \u0026ndash; labor-intensive, frame-by-frame coding \u0026ndash; in the strict clinical setting?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf so, the long-term goal is to then deploy these behavior monitors into the home, a much less structured environment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Can we use this for treatment follow-up, or to understand how these behaviors manifest in the home or school?\u0026rdquo; Rozga said. 
\u0026ldquo;Does this work beyond just the clinical setting?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch2\u003EMeasuring Social-Communication Behaviors\u003C\/h2\u003E\r\n\r\n\u003Cp\u003ERozga\u0026rsquo;s work with Rehg is similar in that it attempts to take advantage of the vast availability of sensor technologies to improve measurement of social-communication behaviors in young children, such as eye contact, shifts of attention between objects and faces, and gestures.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Historically, the problem was that our tools for getting information were very limited,\u0026rdquo; Rehg said. \u0026ldquo;What\u0026rsquo;s really changed is our ability to collect large-scale data. Generally speaking, this is the best moment in time as far as sensor tools go. Cameras, microphones, accelerometers, inertial measurement units \u0026ndash; these are the sensors we\u0026rsquo;re most interested in.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith them, they can continuously track individuals\u0026rsquo; eyes, heads, limbs, posture, and many other movements associated with the production of relevant social behaviors, increasing the overall pool of available data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Large amounts of data from kids is what it takes to characterize behavior and how it changes over time,\u0026rdquo; Rehg said. \u0026ldquo;This is something that scales. You can replicate it in other settings, other labs. You can demonstrate that this approach works well across different data sets. We want to show that this is something that can be generalized.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf they can, a more accurate picture of childhood development, as well as the response to treatment in behavioral problems, could emerge.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are other important contributors to the research, Rozga said. 
\u003Cstrong\u003EAudrey Southerland\u003C\/strong\u003E is the lab coordinator at the Child Study Lab, where she has helped with research for over six years. She began as an undergraduate research assistant under the Expeditions in Computing grant in 2011 and joined the staff full time as the lab coordinator after graduating with a Bachelor\u0026rsquo;s degree in psychology in 2012. In this role, she oversees the lab, including data collection and current undergraduate research assistants, on a daily basis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDr. Mindy Scheithauer\u003C\/strong\u003E has also been a key collaborator at the Marcus Autism Center, where she works in the Severe Behavior Program.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003ELearn the Signs, Act Early\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EOthers throughout the College of Computing have pursued extensive research surrounding autism. Senior research scientist and developmental psychologist \u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E is leading a team that has developed \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/news\/584605\/actearly-app-helps-parents-track-childhood-developmental-milestones\u0022\u003EActEarly\u003C\/a\u003E, a mobile Android app that gives parents and caregivers a comprehensive and convenient way to track developmental milestones for children.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe app is designed to support kids \u0026ndash; newborns to age five \u0026ndash; by providing information on social, language, cognitive, and physical milestones children should achieve at each age.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Parents may be unaware that a child is failing to meet important developmental milestones and this might put the child at risk,\u0026rdquo; Arriaga said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWorking with \u003Cstrong\u003ELaurel Warrell\u003C\/strong\u003E, a Master of Science candidate in \u003Ca 
href=\u0022http:\/\/www.cc.gatech.edu\/academics\/degree-programs\/masters\/ms-hci\u0022\u003EHuman-Computer Interaction\u003C\/a\u003E, they are deploying and conducting usability studies with the app, which leverages expertise from the Centers for Disease Control and Prevention (CDC) and is part of a broader \u0026ldquo;\u003Ca href=\u0022https:\/\/www.cdc.gov\/ncbddd\/actearly\/\u0022\u003ELearn the Signs, Act Early\u003C\/a\u003E\u0026rdquo; campaign. This initiative seeks to identify developmental disabilities in young children and provide families with needed services.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EArriaga and her team are seeking parents to participate in their studies. They are asking parents of children between one month and five years old who have an Android phone to download the ActEarly mobile app and provide feedback. Interested individuals can follow the \u003Ca href=\u0022http:\/\/ipat.gatech.edu\/study-recruitment\u0022\u003Elink\u003C\/a\u003E for more information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAdditionally, Arriaga\u0026rsquo;s team is currently developing with the CDC an interactive e-book that will allow parents to track their 3-year-old child\u0026rsquo;s milestones while they read. She is also working with undergraduates to develop toddler games to help inform parents about what their child can do. 
A demo of the latter project can be viewed in a video \u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=7nfrFV5M2z4\u0026amp;feature=youtu.be\u0022\u003Ehere\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Multiple grants have helped develop fields of research into how technology can assist detection and, perhaps, treatment of problem behaviors associated with autism."}],"uid":"33939","created_gmt":"2017-04-24 19:34:48","changed_gmt":"2017-04-24 19:34:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-04-24T00:00:00-04:00","iso_date":"2017-04-24T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"590844":{"id":"590844","type":"image","title":"Child Study Lab Autism Research","body":null,"created":"1493061979","gmt_created":"2017-04-24 19:26:19","changed":"1493061979","gmt_changed":"2017-04-24 19:26:19","alt":"Lab coordinator Audrey Southerland, along with undergraduate assistants, leads data collection at the Child Study Lab.","file":{"fid":"225112","name":"Autism5.jpg","image_path":"\/sites\/default\/files\/images\/Autism5.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Autism5.jpg","mime":"image\/jpeg","size":329499,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Autism5.jpg?itok=a3RDfy3M"}}},"media_ids":["590844"],"related_links":[{"url":"http:\/\/www.childstudylab.gatech.edu\/","title":"Child Study Lab"},{"url":"http:\/\/www.marcus.org\/","title":"Marcus Autism Center"},{"url":"http:\/\/ipat.gatech.edu\/study-recruitment","title":"ActEarly"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and 
Security"}],"keywords":[{"id":"654","name":"College of Computing"},{"id":"166848","name":"School of Interactive Computing"},{"id":"6053","name":"Autism"},{"id":"108751","name":"Autism Spectrum Disorder"},{"id":"11172","name":"Agata Rozga"},{"id":"14419","name":"jim rehg"},{"id":"11178","name":"Rosa Arriaga"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"590531":{"#nid":"590531","#data":{"type":"news","title":"Professor Amy Bruckman to Serve as School of Interactive Computing Interim Chair","body":[{"value":"\u003Cp\u003EGeorgia Institute of Technology Professor \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E will serve as interim chair of the \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E beginning on July 1, after current Chair \u003Cstrong\u003EAnnie Ant\u0026oacute;n\u0026rsquo;s\u003C\/strong\u003E term comes to an end. Bruckman, who currently serves as associate chair of the school, will serve until the school hires its new chair at the completion of an international search process.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s been a pleasure working with Annie these past three years, and I\u0026rsquo;m excited about the candidates in our chair search,\u0026rdquo; Bruckman said. 
\u0026ldquo;I don\u0026rsquo;t aspire to a bigger administrative role myself, but I\u0026rsquo;m happy to fill in during this time of transition.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/fac\/Amy.Bruckman\/\u0022\u003EBruckman\u003C\/a\u003E has been a faculty member in Georgia Tech\u0026rsquo;s College of Computing since 1997, when she was brought on as an assistant professor. She became an associate professor in 2003, a professor in 2012, and began serving as associate chair of the School of Interactive Computing in 2014.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs a researcher, she and her students focus on social computing and online collaboration. Current projects include studying the introduction of the internet to Cuba, and trying to understand online harassment. She also studies how social media can support social movements, and is currently doing action research with the\u0026nbsp;organization Science for the People.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBruckman received her Ph.D. from the Massachusetts Institute of Technology (MIT) Media Lab\u0026rsquo;s Epistemology and Learning group in 1997, her Master\u0026rsquo;s from the MIT Media Lab\u0026rsquo;s Interactive Cinema Group in 1991, and her Bachelor\u0026rsquo;s in physics from Harvard University in 1987.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EProfessor \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/people\/annie-anton\u0022\u003EAnnie Ant\u0026oacute;n\u003C\/a\u003E began her five years of service as chair of the School of Interactive Computing in 2012, joining Georgia Tech\u0026rsquo;s faculty ranks after 14 years at North Carolina State University\u0026rsquo;s College of Engineering.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnt\u0026oacute;n earned each of her Bachelor\u0026rsquo;s, Master\u0026rsquo;s, and Ph.D. 
from Georgia Tech in 1990, 1992, and 1997, respectively.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe search for a new chair is being conducted by a committee of 11 faculty, staff, and students. It is chaired by School of Computer Science Professor Ellen Zegura. Other members include School of Interactive Computing Professors Ian Bogost, Ashok Goel, and John Stasko, Associate Professors Mark Riedl and James Hays, Assistant Professors Betsy DiSalvo and Jacob Eisenstein, research scientist Agata Rozga, financial administrator Connie Irish, and Ph.D. student Maia Jacobs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EClick the link for a full description of the \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\/chair-school-interactive-computing\u0022\u003Eopen chair position\u003C\/a\u003E and background on the School of Interactive Computing.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"School of Interactive Professor and Associate Chair Amy Bruckman will serve as interim chair upon the completion of Annie Ant\u00f3n\u0027s five years of service."}],"uid":"33939","created_gmt":"2017-04-17 19:45:56","changed_gmt":"2017-04-17 19:45:56","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-04-17T00:00:00-04:00","iso_date":"2017-04-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"590524":{"id":"590524","type":"image","title":"Amy Bruckman","body":null,"created":"1492457925","gmt_created":"2017-04-17 19:38:45","changed":"1492457925","gmt_changed":"2017-04-17 19:38:45","alt":"Professor Amy Bruckman to serve as School of Interactive Computing Interim 
Chair","file":{"fid":"224980","name":"asb_full.jpg","image_path":"\/sites\/default\/files\/images\/asb_full.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/asb_full.jpg","mime":"image\/jpeg","size":74680,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/asb_full.jpg?itok=717qrDXl"}}},"media_ids":["590524"],"related_links":[{"url":"http:\/\/www.ic.gatech.edu\/chair-school-interactive-computing","title":"Chair, School of Interactive Computing"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"166848","name":"School of Interactive Computing"},{"id":"8472","name":"amy bruckman"},{"id":"27641","name":"annie anton"},{"id":"654","name":"College of Computing"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"590161":{"#nid":"590161","#data":{"type":"news","title":"RoboJackets Providing Opportunity for Both Competition and Outreach","body":[{"value":"\u003Cp\u003EThe origins of Georgia Tech\u0026rsquo;s \u003Ca href=\u0022https:\/\/robojackets.org\/\u0022\u003E\u003Cem\u003ERoboJackets\u003C\/em\u003E\u003C\/a\u003E organization can be traced back to 1999, when a BattleBots team was founded for the first time within the School of Mechanical Engineering.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBack then, there were just a few members working on projects in their spare time. 
The school\u0026rsquo;s focus on co-curricular involvement was not as widespread as it has become today, so members had to be more resourceful in their pursuit of knowledge and competition.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s a far cry from what the popular student group has become.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EToday, there are over 200 members representing at least nine different degrees, from mechanical engineering, electrical engineering and computer science, to computational engineering and aerospace engineering, among others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s become such an active organization,\u0026rdquo; said \u003Cem\u003ERoboJackets\u003C\/em\u003E president Ryan Strat, a fourth-year computer science major nearing the end of his one-year term. \u0026ldquo;And our members are dedicated to improving on every facet.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are currently five teams within the organization, sub-groups that work and compete in varying capacities. The original team, BattleBots, has maintained a continued presence since the group\u0026rsquo;s inception nearly two decades ago. There is also RoboCup, a robotic soccer league, RoboRacing, the youngest of the five groups, the Intelligent Ground Vehicle Competition (IGVC), and Outreach. It is this latter group, Strat said, that sets the \u003Cem\u003ERoboJackets\u003C\/em\u003E apart from many other organizations across Georgia Tech\u0026rsquo;s campus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOutreach\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Outreach team was created in 2001 to fulfill a need the organization felt was being overlooked at the time. 
Building robots was great, they said, but members felt that they had a valuable skill that should be shared.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrough a partnership with FIRST Robotics that is still growing today, the \u003Cem\u003ERoboJackets\u003C\/em\u003E began a mentorship program for high school teams in the Atlanta area. Teams are invited in once a week for a presentation by the \u003Cem\u003ERoboJackets\u003C\/em\u003E on what they need to be a successful team \u0026ndash; how to manage resources, how to recruit team members, and sessions on vital subjects like computer vision, for example.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Cem\u003ERoboJackets\u003C\/em\u003E are currently affiliated with Toaster Tech, a team of high school students in the Atlanta area. Past affiliations include Westlake Roarbotics, Reboot, Tech High School, Georgia Robotics Alliance SOUP, Wheeler High CircuitRunners, and Roswell High Chimera.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Service is a core component of being an organization, and I think that\u0026rsquo;s what sets us apart from others on campus,\u0026rdquo; Strat said. \u0026ldquo;The fact that it\u0026rsquo;s a combination of hands-on engineering practicum as well as a public service is very unique. I think that\u0026rsquo;s what helps us produce such well-rounded students.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe group maintains a YouTube channel with an archive of learning resources for teens. Recently, for the in-person presentations, they invited some of the high school students to submit their own presentations, assisted them in crafting them, and allowed them to deliver the presentations themselves.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVolunteering at high school competitions and assorted events has been a growing component, as well. 
The \u003Cem\u003ERoboJackets\u003C\/em\u003E provide highly skilled volunteers who can handle tasks like officiating and audio\/video assistance, among others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We have a dedicated base in the state, and \u003Cem\u003ERoboJackets\u003C\/em\u003E is helping to grow that footprint,\u0026rdquo; Strat said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Cem\u003ERoboJackets\u003C\/em\u003E help put on events like the FIRST Robotics Competition Kickoff each January, which reveals the new game and begins the league\u0026rsquo;s season. The event is held each year at the Ferst Theater and welcomes around 1,400 people to campus. Also, the Robotics Symposium was a new event for Fall 2016 that brought in speakers from various parts of Georgia FIRST and industry partners to give over 30 talks to Georgia middle and high school students.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003EVIDEO: To see more from the RoboJackets, including both instruction and competition, visit their YouTube channel \u003Ca href=\u0022https:\/\/www.youtube.com\/user\/RoboJackets\u0022\u003Ehere\u003C\/a\u003E.\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EBattleBots\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe BattleBots have long been a pop-culture phenomenon, earning spots on popular television networks as they fight to the death.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Cem\u003ERoboJackets\u003C\/em\u003E version has been around since 1999 and comprises a number of different facets. There are the small editions, the 3-lb. robots that are relatively inexpensive and can be designed and manufactured within a couple of months.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENewer members of the \u003Cem\u003ERoboJackets\u003C\/em\u003E start here in groups of 4-6 and, working with more experienced mentors, create the BattleBot from scratch.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s an art in many ways,\u0026rdquo; Strat said. 
\u0026ldquo;You have to learn what can actually be manufactured and what can\u0026rsquo;t. You can make something in any shape on a computer, but that doesn\u0026rsquo;t mean you can actually make it.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter the 3-lb. program, members step up in size for other larger competitions. Strat said the team has created robots in the 60- and 120-lb. weight classes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERoboCup\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOriginally a project within the Institute for Robotics and Intelligent Machines (IRIM), the RoboCup team is in a small-size league, part of the RoboCup Federation, for robotic soccer competition. The federation is a research group dedicated to building humanoid robotic soccer players capable of beating the World Cup champions by the year 2050.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe league the \u003Cem\u003ERoboJackets\u003C\/em\u003E participate in is 6-on-6, utilizing small wheeled robots about the size of a coffee can.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurrently, the team is focusing on soccer strategy.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;re trying to solve the multi-agent problem,\u0026rdquo; Strat explained. \u0026ldquo;You have \u003Cem\u003En \u003C\/em\u003Eplayers on the field \u0026ndash; how do you decide who does what? How do you plan things like aggression?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Cem\u003ERoboJackets\u003C\/em\u003E team participates in international events, which often take place in the same location as the World Cup. Strat said the team will send 10-11 students in July to Japan to compete. Last year, they competed in Germany.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s one of the more research-focused competitions,\u0026rdquo; Strat said. \u0026ldquo;BattleBots is more fun and concerned with winning or losing. 
This one, everyone competing is writing a research paper, and your prize for winning is another research paper.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EIGVC\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Intelligent Ground Vehicle Competition tasks teams with constructing an autonomous robot capable of navigating an off-road obstacle course. Essentially, Strat said, it is an autonomous all-terrain vehicle.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe IGVC is held by the Association for Unmanned Vehicle Systems International. Each year, the \u003Cem\u003ERoboJackets\u003C\/em\u003E send a team to Michigan to compete in mapping and navigation challenges. Given certain GPS waypoints, the vehicles must travel to each location on the course while hauling a payload.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The robot itself is very similar structurally to an ATV,\u0026rdquo; Strat said. \u0026ldquo;It is loaded with a few cameras, and this year we\u0026rsquo;ll be loading Intel RealSense cameras, which are more or less a Kinect. It\u0026rsquo;s a depth camera to give more information about where things are.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETeams are scored on performance in the autonomous challenge, presentation, and the design of the robot.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERoboRacing\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERoboRacing is the youngest of the five teams, having been established just four years ago. 
Despite its youth, it is already one of the most successful of all the \u003Cem\u003ERoboJackets\u0026rsquo;\u003C\/em\u003E groups.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt has won gold in two of the three years it has competed, sweeping the competition at least once with design awards, circuit racing, and drag racing at the International Autonomous Robot Racing Challenge in Waterloo, Canada.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELast year, they added the Sparkfun Autonomous Vehicle Challenge, which involves the same car but more challenging vision targets. Instead of cones, which are easier to identify, the Sparkfun course is marked with things like chain-link fences or bales of pine straw.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s much more difficult from a computer vision standpoint,\u0026rdquo; Strat said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBeyond those competitions, there is also an autonomous Power Wheels racing series.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYes, Power Wheels \u0026ndash; the same small car you drove around in as a toddler \u0026ndash; is used in a hobbyist community for racing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;You sit in them as an adult, and it is comical,\u0026rdquo; Strat said. \u0026ldquo;That competition is in October. 
At this point, we\u0026rsquo;re very much in the design phase.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The RoboJackets are a five-team robotics organization with membership over 200 at Georgia Tech."}],"uid":"33939","created_gmt":"2017-04-10 19:39:56","changed_gmt":"2017-04-10 19:39:56","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-04-10T00:00:00-04:00","iso_date":"2017-04-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"590156":{"id":"590156","type":"image","title":"RoboJackets 3","body":null,"created":"1491852627","gmt_created":"2017-04-10 19:30:27","changed":"1491852627","gmt_changed":"2017-04-10 19:30:27","alt":"","file":{"fid":"224835","name":"3_DSC_0105.jpg","image_path":"\/sites\/default\/files\/images\/3_DSC_0105_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/3_DSC_0105_0.jpg","mime":"image\/jpeg","size":151121,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/3_DSC_0105_0.jpg?itok=snUbbbEq"}}},"media_ids":["590156"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"11489","name":"RoboJackets"},{"id":"654","name":"College of Computing"},{"id":"594","name":"college of engineering"},{"id":"79181","name":"national robotics 
week"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"590057":{"#nid":"590057","#data":{"type":"news","title":"Guthman Musical Instrument Competition 2017 Winners","body":[{"value":"\u003Cp\u003ERead more about the 2017 Winners of Guthman Musical Instrument Competition here:\u0026nbsp;https:\/\/guthman.gatech.edu\/2017-winners\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The 2017 Winners of the Guthman Musical Instrument Competition"}],"uid":"28466","created_gmt":"2017-04-07 20:24:06","changed_gmt":"2017-04-07 20:29:16","author":"Meghana Melkote","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-03-15T00:00:00-04:00","iso_date":"2017-03-15T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"590058":{"#nid":"590058","#data":{"type":"news","title":"How Do You Perform CPR? This Device Will Teach You","body":[{"value":"\u003Cp\u003ECPR+ is a CPR mask with LED lights that offers user feedback throughout the resuscitation process. 
The device is one of six inventions competing for Georgia Tech\u0026rsquo;s 2017 InVenture Prize.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe other inventors are: Dave Ehrlich, a computer engineering major; Samuel Clarke, a mechanical engineering and computer science major; and Ryan Williams, a computer engineering major.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERead more here:\u0026nbsp;\u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2017\/03\/14\/how-do-you-perform-cpr-device-will-teach-you\u0022 id=\u0022LPlnk260928\u0022 target=\u0022_blank\u0022\u003Ehttp:\/\/www.news.gatech.edu\/2017\/03\/14\/how-do-you-perform-cpr-device-will-teach-you\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"CPR+ is one of six finalists for the 2017 InVenture Prize"}],"uid":"28466","created_gmt":"2017-04-07 20:27:03","changed_gmt":"2017-04-07 20:27:03","author":"Meghana Melkote","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-03-14T00:00:00-04:00","iso_date":"2017-03-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"589909":{"#nid":"589909","#data":{"type":"news","title":"Vivian Chu Working to Provide Robots Basic Building Blocks for Cognition","body":[{"value":"\u003Cp\u003EGeorgia Institute of Technology robotics student \u003Cstrong\u003EVivian Chu\u003C\/strong\u003E shares a familiar path to computer science with plenty of other students:\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs a child, she loved engineering and computer science, taking things apart and putting them back together. 
Both of her parents were software engineers, so her road to STEM was paved long before she had the means to travel it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe attended the University of California, Berkeley, for her undergraduate degree, where she earned a Bachelor\u0026rsquo;s degree in Electrical Engineering and Computer Science. She focused on Embedded Software, and, by and large, she enjoyed the experience.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut something was always missing when she took her computer science or electrical engineering classes. In them, she might design an algorithm or program a circuit, but she wasn\u0026rsquo;t seeing visual representation of her work in the way she wanted.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;You could put these things together, but there wasn\u0026rsquo;t a lot that you could actually see happen,\u0026rdquo; she said. \u0026ldquo;Then I took this one class where we got to program a Roomba to climb ramps or do other actions with an accelerometer. That was the first time things kind of clicked.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe began to see and appreciate how a robot could understand how to interact with the world and also how it processed the information it gathered.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe got another taste a year later as a senior while working on an autonomous helicopter project. The helicopter didn\u0026rsquo;t do much \u0026ndash; just hovering a few feet off the ground \u0026ndash; but she realized during her work that she could sit in the lab for 12 hours without realizing it and come back excited to work the next day.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That drove it home,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENow a Ph.D. student in robotics at Georgia Tech, Chu is interested in how to advance robotics to a point where robots could be deployed in care facilities or the home. 
Specifically, she is taking an approach of teaching robots the basic building blocks of cognition.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are certain things humans learn as children that help them develop an understanding of the material world around them. A cup is a cup because it is fully containable, able to hold something like water inside; a spoon is a spoon because it can scoop other materials and hold them within its concave structure.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you could teach these robots these basic components, these basic building blocks, then when they go into your home, they could better reason how to perform other tasks,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELike making pasta. If a robot knows it needs something containable to hold something, heat to cook, and a spoon to stir, it could carry out that and other similar jobs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHer inspiration came when she was working on her Master\u0026rsquo;s degree in robotics at the University of Pennsylvania. She attended a guest lecture by Georgia Tech alum \u003Cstrong\u003EAlex Stoytchev\u003C\/strong\u003E, who is now an assistant professor at Iowa State University. In the talk, Stoytchev discussed developmental psychology in children, how they explore basic actions and movements.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A lot of my research is similar in that I want to teach these building blocks by having robots play with objects the way children play,\u0026rdquo; Chu said. \u0026ldquo;Adults give a child a nudge in the right direction here or there. Rather than having a robot do it blindly, we can have someone in the room and give it a bump here or there.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It presents something that is much faster than a robot doing it on its own.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ideal goal is for a robot to truly understand its different sensory inputs. 
People use touch, sight, and sound, for example, to accomplish a task like turning on a lamp. Currently, robots are either very visual, which is the majority of the research, or incorporate touch.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There\u0026rsquo;s very little being done to sort of merge these senses,\u0026rdquo; she said. \u0026ldquo;Audio is almost unheard of.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChu would like to achieve a scenario where the robot could understand that to turn on a lamp there is a touch component (learning the correct force with which to pull the rope), a visual component (to see where to pull, as well as whether the light turns on or not), and an auditory component (to hear the click as it pulls the rope).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Those are all things I\u0026rsquo;m trying to research for my thesis,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe applications for this research are wide-ranging, but the enormous potential for the aging population is one of the aspects that interests Chu the most.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As people get older, how do I make sure they could retire and have a dignified lifestyle toward the end of their life?\u0026rdquo; she asked.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlthough she is still pursuing answers to these questions and is yet to defend her thesis, she was already recognized in the robotics community by \u003Cem\u003ERobohub\u003C\/em\u003E\u0026rsquo;s 2016 list \u003Cem\u003E\u003Ca href=\u0022http:\/\/robohub.org\/25-women-in-robotics-you-need-to-know-about-2016\/\u0022\u003E25 Women in Robotics You Need to Know About\u003C\/a\u003E\u003C\/em\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe inclusion on the list took Chu by surprise, but she said it was rewarding because it acknowledges the importance of the work she is doing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As a Ph.D. 
student, the biggest fear is that you are going to write your thesis and no one is going to know about it,\u0026rdquo; she said. \u0026ldquo;That it\u0026rsquo;s just a document that gets tossed aside and doesn\u0026rsquo;t have an impact. It\u0026rsquo;s nice to know that, on a high level, there\u0026rsquo;s acknowledgement of what I\u0026rsquo;m working on.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe plans to complete her degree within the next year, and still has plenty of goals she\u0026rsquo;d like to achieve going forward. While she is undecided whether she\u0026rsquo;ll pursue a career in academia or one working with a startup \u0026ndash; a lifelong goal of hers \u0026ndash; she knows she ultimately wants to impact society as a whole in any way she can.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I often joke with my wife about the ways in which we can try to save the world,\u0026rdquo; she said. \u0026ldquo;But all jokes aside, for me, technology for just technology\u0026rsquo;s sake isn\u0026rsquo;t enough. The goal really is: How can the things I\u0026rsquo;m working on help improve the lives of those around us?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ENational Robotics Week is April 8-16. Follow the \u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/\u0022\u003ECollege of Computing\u003C\/a\u003E and \u003Ca href=\u0022http:\/\/www.gatech.edu\/\u0022\u003EGeorgia Tech\u003C\/a\u003E pages for additional content throughout the week.\u003C\/strong\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"As National Robotics Week is set to begin, one of Georgia Tech\u0027s Ph.D. 
students is helping teach robots vital reasoning skills."}],"uid":"33939","created_gmt":"2017-04-06 14:47:32","changed_gmt":"2017-04-06 14:47:32","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-04-06T00:00:00-04:00","iso_date":"2017-04-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"589904":{"id":"589904","type":"image","title":"Vivian Chu 1","body":null,"created":"1491489769","gmt_created":"2017-04-06 14:42:49","changed":"1491489769","gmt_changed":"2017-04-06 14:42:49","alt":"Vivian Chu poses with the robot Curi, which she works with in her lab.","file":{"fid":"224725","name":"Vivian Chu Main.jpg","image_path":"\/sites\/default\/files\/images\/Vivian%20Chu%20Main.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Vivian%20Chu%20Main.jpg","mime":"image\/jpeg","size":159613,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Vivian%20Chu%20Main.jpg?itok=6KbQ7Rr8"}}},"media_ids":["589904"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"8862","name":"Student Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"172726","name":"Vivian Chu"},{"id":"106591","name":"25 Women in Robotics You Need to Know About"},{"id":"667","name":"robotics"},{"id":"654","name":"College of Computing"},{"id":"79181","name":"national robotics 
week"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"588532":{"#nid":"588532","#data":{"type":"news","title":"CS Minor Providing Versatility for GT Alum Jon Eisen","body":[{"value":"\u003Cp\u003EFor \u003Cstrong\u003EJon Eisen\u003C\/strong\u003E, everything has always been about numbers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe path that led him to speak on behalf of the prominent video game hub Activision, publishers of the popular \u003Cem\u003ECall of Duty\u003C\/em\u003E franchise, at last week\u0026rsquo;s \u003Ca href=\u0022http:\/\/gvu.gatech.edu\/\u0022\u003EGVU\u003C\/a\u003E Brown Bag event has been paved with them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe majored in Applied Mathematics at the \u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Institute of Technology\u003C\/a\u003E, graduating with his degree in 2009 and carrying along a Computer Science minor for good measure. He spent time designing RADAR algorithms for Northrop Grumman Corporation in Baltimore, Md., and then worked as an application developer for a short period at Under Armour.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEven hobbies in his free time are unique because of specific numbers associated with them. 
Take the number 50, for example: The number of miles he plans to run in his first ultra-marathon, the Quad Rock 50, in May.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026lt;iframe width=\u0026quot;560\u0026quot; height=\u0026quot;315\u0026quot; src=\u0026quot;https:\/\/www.youtube.com\/embed\/ZPISUOrgzYI\u0026quot; frameborder=\u0026quot;0\u0026quot; allowfullscreen\u0026gt;\u0026lt;\/iframe\u0026gt;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile his focus has always been on numbers and equations, though, Eisen said it has been his versatility \u0026ndash; merging his background in math and computer science \u0026ndash; that has helped him establish a career he\u0026rsquo;s excited to pursue on a daily basis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe\u0026rsquo;s worked at Activision for just over a year, where he combines his fascination with raw numbers with a background in video games.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs a data analyst, he works to answer questions. For example, does the game play fast?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Well, that\u0026rsquo;s a broad question,\u0026rdquo; he explained. \u0026ldquo;Answering that might involve asking more questions. It\u0026rsquo;s very research-oriented. You might look at map size or how players play the game or the way different elements are designed.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s a familiar process for Eisen, who has been a sports fan for years. Growing up a fan of the Atlanta Braves and eventually delving deeper into the world of fantasy sports, Eisen learned unique ways to look at the long list of available statistics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I started getting into sabermetrics, advanced analytics in baseball,\u0026rdquo; he said. \u0026ldquo;I began to understand that there\u0026rsquo;s a better way to look at stats than just at the typical ones. They help provide answers to questions like whether you should always intentionally walk Barry Bonds. 
That\u0026rsquo;s an interesting question. The numbers help answer it. I got really into those question-answer analytics, and at Activision I had the opportunity to go deeper into this stuff.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe looks at win probability, value metrics, and any number of additional stats that help answer the question: Are you good?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEisen doesn\u0026rsquo;t work exclusively in programming, but his understanding of the development side has been a boon to his career, as well.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026lt;iframe width=\u0026quot;560\u0026quot; height=\u0026quot;315\u0026quot; src=\u0026quot;https:\/\/www.youtube.com\/embed\/mU1BcvoFjgw\u0026quot; frameborder=\u0026quot;0\u0026quot; allowfullscreen\u0026gt;\u0026lt;\/iframe\u0026gt;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe earned a minor in computer science at Georgia Tech after realizing he was on track to graduate with his degree in Applied Mathematics too early. In his major, he needed only 120 credit hours, and he carried a fair portion with him from high school.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe had already pursued a working knowledge in computer science beginning in his freshman year of high school, working with Flash and building websites, including one for rush for his fraternity, Alpha Epsilon Pi, in college.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe didn\u0026rsquo;t pursue a major in the field because, he said, he wanted to learn it all on his own.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was a kid,\u0026rdquo; he said, laughing, by way of explanation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith his extra time, though, he focused on computer science courses that filled gaps in his knowledge. He was glad that he did.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Some of those classes helped me get my first job,\u0026rdquo; he said. 
\u0026ldquo;When I was working on the RADAR stuff, I had this unique ability to merge two key disciplines. They had a lot of math people, and they had a lot of CS people. They had to take these algorithms done by the math people and put them into systems. At some point, I found that I was good at that. That helped me take interesting math algorithms and put them into scalable code.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s something he said he has gotten back to doing at Activision.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Computing is taking over the world,\u0026rdquo; he said. \u0026ldquo;If you like your discipline, whatever that is, learning a bit about how to program with it is going to be very beneficial in creating your career.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Jon Eisen graduated with a degree in Applied Mathematics, but a minor in Computer Science has helped improve his versatility."}],"uid":"33939","created_gmt":"2017-03-09 19:38:45","changed_gmt":"2017-03-09 19:38:45","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-03-09T00:00:00-05:00","iso_date":"2017-03-09T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"588525":{"id":"588525","type":"image","title":"Jon Eisen","body":null,"created":"1489086375","gmt_created":"2017-03-09 19:06:15","changed":"1489086375","gmt_changed":"2017-03-09 19:06:15","alt":"Jon Eisen speaks to a gathered audience at a GVU Brown Bag 
session.","file":{"fid":"224262","name":"Eisen1.JPG","image_path":"\/sites\/default\/files\/images\/Eisen1.JPG","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Eisen1.JPG","mime":"image\/jpeg","size":434851,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Eisen1.JPG?itok=hRh0kuVw"}}},"media_ids":["588525"],"related_links":[{"url":"http:\/\/www.cc.gatech.edu\/academics\/degree-programs\/minors","title":"Minors - College of Computing"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"130","name":"Alumni"}],"keywords":[{"id":"1051","name":"Computer Science"},{"id":"2449","name":"video games"},{"id":"8586","name":"applied mathematics"},{"id":"171795","name":"data engineering"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"588223":{"#nid":"588223","#data":{"type":"news","title":"IC Associate Professor Karen Liu Earns Google Research Faculty Award","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Associate Professor \u003Cstrong\u003EKaren Liu\u003C\/strong\u003E earned a \u003Ca href=\u0022https:\/\/research.googleblog.com\/2017\/02\/google-research-awards-2016.html?m=1\u0022\u003EGoogle Research Faculty 
Award\u003C\/a\u003E for her research titled \u003Cem\u003EClosing the \u0026ldquo;Reality Gap\u0026rdquo;: A Machine Learning Approach to Contact Modeling\u003C\/em\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research addresses a persistent problem in robotics: motor skills that robots learn in simulation often transfer poorly to physical hardware due to inaccurate parameters, idealized dynamics and contact models, or other unmodeled factors. Her research proposes to accurately compute contact states \u0026ndash; like sticking, sliding, or breaking \u0026ndash; and contact forces such that the simulated results will match the real-world phenomena.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our approach constructs a data-driven model that utilizes real-world observations to improve the accuracy of simulation,\u0026rdquo; Liu wrote in the abstract of her research proposal. \u0026ldquo;The key insight is that the contact problem can be broken down to two steps: predicting the next state of each contact point and calculating contact forces based on the prediction and current dynamic state.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs a proof-of-concept demonstration, Liu plans to show that a humanoid can perform tasks involving whole-body dynamic balance in the real world using the control policy trained by the improved simulator.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award will fund one graduate student for one year. 
Liu is one of two recipients of the Google Research Faculty Award at the Georgia Institute of Technology, the other being fellow IC faculty member \u003Cstrong\u003E\u003Ca href=\u0022http:\/\/www.ic.gatech.edu\/news\/588083\/pair-ic-assistant-professors-earn-awards-research-visual-question-answering\u0022\u003EDevi Parikh\u003C\/a\u003E\u003C\/strong\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"School of Interactive Computing Associate Professor Karen Liu is the second faculty member to earn a Google Research Faculty Award."}],"uid":"33939","created_gmt":"2017-03-03 16:25:04","changed_gmt":"2017-03-03 16:25:04","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-03-03T00:00:00-05:00","iso_date":"2017-03-03T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"588222":{"id":"588222","type":"image","title":"Karen Liu new","body":null,"created":"1488558126","gmt_created":"2017-03-03 16:22:06","changed":"1488558126","gmt_changed":"2017-03-03 16:22:06","alt":"","file":{"fid":"224171","name":"karen-liu.jpg","image_path":"\/sites\/default\/files\/images\/karen-liu.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/karen-liu.jpg","mime":"image\/jpeg","size":11430,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/karen-liu.jpg?itok=NktButdq"}}},"media_ids":["588222"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"667","name":"robotics"},{"id":"2296","name":"Karen Liu"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of 
Computing"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Edavid.mitchell@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["david.mitchell@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"587980":{"#nid":"587980","#data":{"type":"news","title":"Georgia Tech Shapes Research in Computer-Supported Cooperative Work as ACM Conference Turns 20","body":[{"value":"\u003Cp\u003EGeorgia Tech computing faculty,\u0026nbsp;students and alumni\u0026nbsp;will play a central part in the Association for Computing Machinery\u0026rsquo;s\u0026nbsp;Conference on Computer-Supported Cooperative Work and Social Computing in Portland, Ore., where the main program runs Feb. 27 \u0026ndash; March 1.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/CSCW2017_GeorgiaTech\/DashboardAll?:embed=y\u0026amp;:display_count=no\u0026amp;:showVizHome=no\u0022 target=\u0022_blank\u0022\u003ESix faculty from the School of Interactive Computing\u003C\/a\u003E have a combined eight papers accepted at CSCW 2017, including two of six best papers at the conference. 
These Atlanta-based researchers\u0026rsquo; work covers a \u003Ca href=\u0022http:\/\/www.cscw.gatech.edu\/2017\/\u0022 target=\u0022_blank\u0022\u003Erange of challenge areas\u003C\/a\u003E, including privacy for social media, fake news, online movements, health tracking and digital self-harm.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/CSCW2017_GeorgiaTechAlumni\/DashboardAll?:embed=y\u0026amp;:display_count=no\u0026amp;:showVizHome=no\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech alumni\u003C\/a\u003E are also making considerable contributions to the field, with 17 papers, including 3 honorable mention papers, by 13 authors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECSCW convenes its 20th\u0026nbsp;conference this year \u0026ndash; it took\u0026nbsp;place biennially from 1986-2010 and annually since 2010 \u0026ndash; having become the premier venue for research in the design and use of technologies that affect groups, organizations, communities, and networks. 
The conference explores the technical, social, material, and theoretical challenges of designing technology to support collaborative work and life activities.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003EResearch Highlights\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELikelihood of Dieting Success Lies Within Your Tweets\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere is a direct link between a person\u0026rsquo;s attitude on social media and the likelihood that their dieting efforts will succeed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn fact, Georgia Institute of Technology researchers have determined that dieting success \u0026ndash; or failure \u0026ndash; can be predicted with an accuracy rate of 77 percent based on the sentiment of the words and phrases one uses on Twitter.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We see that those who are more successful at sticking to their daily dieting goals express more positive sentiments and have a greater sense of achievement in their social interactions,\u0026rdquo; said Assistant Professor \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E, who is lead researcher on the project. 
\u0026ldquo;They are focused on the future, generally more social and have larger social networks.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2017\/02\/21\/likelihood-dieting-success-lies-within-your-tweets\u0022 target=\u0022_blank\u0022\u003ERead More\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFinding Credibility Clues on Twitter\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy scanning 66 million tweets linked to nearly 1,400 real-world events, Georgia Institute of Technology researchers have built a language model that identifies words and phrases that lead to strong or weak perceived levels of credibility on Twitter.\u0026nbsp; Their findings suggest that the words of millions of people on social media have considerable information about an event\u0026rsquo;s credibility \u0026ndash; even when an event is still ongoing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There have been many studies about social media credibility in recent years, but very little is known about what types of words or phrases create credibility perceptions during rapidly unfolding events,\u0026rdquo; said Tanushree Mitra, the Georgia Tech Ph.D. candidate who led the research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team looked at tweets surrounding events in 2014 and 2015, including the emergence of Ebola in West Africa, the Charlie Hebdo attack in Paris and the death of Eric Garner in New York City. They asked people to judge the posts on their credibility (from \u0026ldquo;certainly accurate\u0026rdquo; to \u0026ldquo;certainly inaccurate\u0026rdquo;). Then the team fed the words into a model that split them into 15 different linguistic categories. 
The classifications included positive and negative emotions, hedges and boosters, and anxiety.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2017\/01\/26\/finding-credibility-clues-twitter\u0022 target=\u0022_blank\u0022\u003ERead More\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMost of Facebook is \u0026lsquo;Friends Only,\u0026rsquo; But Public and Private Posts are Likely Similar\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESocial media content, while driving a sizable portion of today\u0026rsquo;s web traffic, is not all public, and according to a new study, about 75 percent of Facebook posts, or three in four, are shared only with friends or subsets of friends. This translates into billions of daily online conversations that are seen by only a few.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.munmund.net\/pubs\/CSCW17_PubPvt.pdf\u0022 target=\u0022_blank\u0022\u003EResearchers from the Georgia Institute of Technology\u003C\/a\u003E enlisted almost 2,000 Facebook users \u0026ndash; who shared their most recent posts \u0026ndash; and used machine learning methods as well as qualitative hand coding to determine content types and topics for roughly 11,000 public and private posts. They analyzed patterns of choices for privacy settings and found, contrary to expectations, that content type is not a significant predictor of privacy settings. 
They did find, however, that some demographics such as gender and age are predictive, suggesting that privacy choices may be driven more by the attributes of the person than by the content of the posts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA full look at Georgia Tech\u0026#39;s work at CSCW 2017 can be found at \u003Ca href=\u0022http:\/\/cscw.gatech.edu\u0022 target=\u0022_blank\u0022\u003Ehttp:\/\/cscw.gatech.edu\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech computing faculty,\u0026nbsp;students and alumni\u0026nbsp;will play a central part in the Association for Computing Machinery\u0026rsquo;s\u0026nbsp;Conference on Computer-Supported Cooperative Work and Social Computing in Portland, Ore., where the main program runs Feb. 27 \u0026ndash; March 1.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech computing faculty,\u00a0students and alumni\u00a0will play a central part in the Association for Computing Machinery\u2019s\u00a0Conference on Computer-Supported Cooperative Work and Social Computing in Portland, Ore., Feb. 
27 \u2013 March 1."}],"uid":"27592","created_gmt":"2017-02-27 15:48:32","changed_gmt":"2017-02-28 14:00:12","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-02-27T00:00:00-05:00","iso_date":"2017-02-27T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"587989":{"id":"587989","type":"image","title":"CSCW 2017 faculty authors","body":null,"created":"1488217831","gmt_created":"2017-02-27 17:50:31","changed":"1488217831","gmt_changed":"2017-02-27 17:50:31","alt":"","file":{"fid":"224087","name":"Faculty authors.png","image_path":"\/sites\/default\/files\/images\/Faculty%20authors.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Faculty%20authors.png","mime":"image\/png","size":123865,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Faculty%20authors.png?itok=HfWCvoRM"}}},"media_ids":["587989"],"groups":[{"id":"1299","name":"GVU Center"},{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"585617":{"#nid":"585617","#data":{"type":"news","title":"Jill Watson, Round Three","body":[{"value":"\u003Cp\u003EGeorgia Tech is beginning its third semester using virtual teaching assistants (TAs) in an online course about artificial intelligence (AI). 
The new term comes one year after Jill Watson was introduced during Knowledge Based Artificial Intelligence (KBAI), a core course of the College of Computing\u0026rsquo;s Master of Science in Computer Science degree program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJill, which is implemented on IBM\u0026rsquo;s Watson platform, was first used during the spring 2016 semester to successfully answer particular types of frequently asked questions without the help of humans. The students weren\u0026rsquo;t told her identity until the final day of the class.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EProfessor Ashok Goel then introduced two \u0026ldquo;Jill Watsons\u0026rdquo; this past fall to work alongside 13 human TAs. With Jill no longer a secret, Goel gave 14 of 15 TAs pseudonyms (only the head assistant kept his real identity). Jill Watson became Stacy Sisko and Ian Braun. Stacy interacted with the 400 enrolled students during class introductions and posted weekly updates; Ian answered common questions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I told the students at the beginning of the semester that some of their TAs may or may not be computers,\u0026rdquo; said Goel, a professor of computer science. \u0026ldquo;Then I watched the chat rooms for months as they tried to differentiate between human and artificial intelligence.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStacy dove into the discussion forum first. All members of the class were encouraged to introduce themselves. She responded to about half of them, chiming in with short paragraphs and relevant details. She received no human assistance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If a student mentioned that they lived in, say, Chicago and worked at a specific company, for example, Stacy might comment on the city or the workplace,\u0026rdquo; Goel said. 
\u0026ldquo;If a student mentioned they were taking another Georgia Tech course, she would sometimes make a comment about the instructor.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGoel said there were a few mistakes, but nothing alarming. Stacy also wrote her own weekly previews of the content, then summarized on Fridays. Sometimes her wrap-ups referenced conversations among students. For instance, if she noticed a helpful, engaging online discussion from a few days prior, she would highlight it during her summary and encourage students to check it out for added insight.\u003Cbr \/\u003E\r\n\u003Cbr \/\u003E\r\nThe other non-human TA, Ian, wasn\u0026rsquo;t much different from the original Jill Watson. He answered routine questions typically asked each semester, such as the allowed length and format of written assignments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Ian wasn\u0026rsquo;t as efficient in fall as Jill was in spring. He didn\u0026rsquo;t answer as many questions as we had expected,\u0026rdquo; Goel admitted. Ian only posted responses if he was 97 percent confident. \u0026ldquo;We\u0026rsquo;re still sorting through the data, but it looks like some students may have deliberately tried to outsmart the computer by asking questions in new ways.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd because Ian could only pull answers from his episodic memory of previous offerings of the class, Goel thinks the variety of the student questions may have been a bit overwhelming. So his research team has developed a new version of Jill based on semantic analysis that he will introduce to the incoming class this semester.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAt the end of the term, the students were polled about who was human and what was AI. Slightly more than 50 percent of the students correctly guessed that Stacy was a computer. Sixteen percent figured out that Ian wasn\u0026rsquo;t human. 
On the other hand, more than 10 percent mistakenly thought two of the human TAs weren\u0026rsquo;t real.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;re seeing more engagement in the course. For instance, in fall of 2015 before Jill Watson, each student averaged 32 comments during the semester. This fall it was close to 38 comments per student, on average,\u0026rdquo; Goel said. \u0026ldquo;I attribute this increased involvement partly to our AI TAs. They\u0026rsquo;re able to respond to inquiries more quickly than us.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis isn\u0026rsquo;t something Goel expected when he began the Jill Watson project. He just wanted to free up more time for his staff so they could concentrate on tasks computers can\u0026rsquo;t do.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlso in the fall, approximately 40 students built chatbots (their own avatars of Jill Watson) that could converse about the course. This allowed the students to operationalize some of the techniques they were learning in the class.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When we started, I had no idea that this would blossom into a project with so many dimensions. It\u0026rsquo;s been a bonanza of low-hanging fruit we\u0026rsquo;re just starting to pluck.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVirtual teaching assistants as illustrated by Jill were recently recognized as \u003Ca href=\u0022http:\/\/www.chronicle.com\/interactives\/50-years-of-technology\u0022\u003Eone of the most transformative technologies to impact college\u003C\/a\u003E within the past 50 years by the Chronicle of Higher Education. 
\u0026nbsp;\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"Georgia Tech course prepares for third semester with virtual teaching assistants"}],"field_summary":[{"value":"\u003Cp\u003EGeorgia Tech is beginning its third semester using virtual teaching assistants (TAs) in an online course about artificial intelligence (AI). The new term comes one year after Jill Watson was introduced during Knowledge Based Artificial Intelligence (KBAI), a core course of the College of Computing\u0026rsquo;s Master of Science in Computer Science degree program.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A class on artificial intelligence will again include non-human teaching assistants."}],"uid":"27560","created_gmt":"2017-01-09 14:52:05","changed_gmt":"2017-01-09 14:52:05","author":"Jason Maderer","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2017-01-09T00:00:00-05:00","iso_date":"2017-01-09T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"558051":{"id":"558051","type":"image","title":"Jill Watson","body":null,"created":"1470163198","gmt_created":"2016-08-02 18:39:58","changed":"1475895361","gmt_changed":"2016-10-08 02:56:01","alt":"Jill Watson","file":{"fid":"218246","name":"original_0.jpeg","image_path":"\/sites\/default\/files\/images\/original_0_0.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/original_0_0.jpeg","mime":"image\/jpeg","size":1444421,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/original_0_0.jpeg?itok=MqbXaidI"}},"487761":{"id":"487761","type":"image","title":"Ashok Goel in the Classroom","body":null,"created":"1453233601","gmt_created":"2016-01-19 20:00:01","changed":"1475895242","gmt_changed":"2016-10-08 
02:54:02","alt":"","file":{"fid":"204360","name":"16c10303-p20-005.jpg","image_path":"\/sites\/default\/files\/images\/16c10303-p20-005_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/16c10303-p20-005_0.jpg","mime":"image\/jpeg","size":626685,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/16c10303-p20-005_0.jpg?itok=tKMoejMN"}}},"media_ids":["558051","487761"],"related_links":[{"url":"http:\/\/www.omscs.gatech.edu\/","title":"Online Master of Science in Computer Science Program"}],"groups":[{"id":"1214","name":"News Room"},{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[{"id":"135","name":"Research"}],"keywords":[{"id":"169183","name":"Jill Watson"},{"id":"112431","name":"ashok goel"},{"id":"2556","name":"artificial intelligence"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71871","name":"Campus and Community"},{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJason Maderer\u003Cbr \/\u003E\r\nNational Media Relations\u003Cbr \/\u003E\r\nmaderer@gatech.edu\u003Cbr \/\u003E\r\n404-660-2926\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["maderer@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"584775":{"#nid":"584775","#data":{"type":"news","title":"Social Media Could Take Only a Fraction of Users\u2019 Time With New Georgia Tech Method","body":[{"value":"\u003Cp\u003EA new visualization technique from the Georgia Institute of Technology could help users end the time-consuming habit of continually checking social media streams and endless updates. 
Where users might now commit minutes or hours on a single topic spanning thousands of posts, the Georgia Tech technique produces a \u003Ca href=\u0022https:\/\/mengdieh.github.io\/SentenTreeDemo\/app\/demo.html\u0022 target=\u0022_blank\u0022\u003Esingle compiled social post\u003C\/a\u003E that reads almost like a headline. Users are able to immediately understand the conversation and interact with the words and ideas that are being talked about the most, whether they are from an election, major sporting event, or latest product release.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The technique seeks a balance between showing the most frequent words and preserving sentence structure,\u0026rdquo; says lead researcher Mengdie Hu, a Ph.D. student in Human-Centered Computing. \u0026ldquo;It gives people a high-level overview of the most common expressions in a document collection and how they are connected to each other.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EImplemented in a web browser, the visualization tool, called SentenTree (short for Sentence Tree), has been used to take almost a quarter of a million tweets shared in a 15-minute window of time during the 2014 World Cup and filter the conversation. The resulting single 100-word social post revealed that Brazil scored a goal in its own net, putting them down 0-1 in their match against Croatia. In the example post, \u0026ldquo;World Cup\u0026rdquo; and \u0026ldquo;own goal\u0026rdquo; are larger than other words, signaling that they appear more frequently. In the middle of and connecting these two phrases are \u0026ldquo;2014,\u0026rdquo; \u0026ldquo;bad,\u0026rdquo; and \u0026ldquo;Brazil,\u0026rdquo; which together give an idea of the larger social conversation. 
If users want more context, SentenTree allows them to hover over any word and drill down to see more details, including the number of times the phrases appear along with the original tweets.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Even if you don\u0026rsquo;t know anything about soccer, there are visual cues to help users connect the concepts and play with the data,\u0026rdquo; Hu says. \u0026ldquo;The central idea behind SentenTree is to take a large social media dataset, find the most frequent sequences of words, and build a visualization out of them that mirrors the real-time conversation.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers say that while there are numerous analytical tools for social media data that highlight concept relationships, topical changes, or physical locations, less common are tools that visualize the actual text content itself. SentenTree is designed to remedy this by consolidating, finding patterns in, and delivering useful content from many sources into one simple interactive view.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe algorithms developed for SentenTree analyze the unstructured text data \u0026mdash; developing a baseline sequential pattern of similar ideas and sentiments, all the while keeping a sentence-like structure \u0026mdash; then incrementally add new words that build on the pattern as the algorithms search the text and kick out duplicate language. This allows the visualization to be a concise, readable representation of many thousands of threads. The visualization is even adjusted in length based on the size of the screen, and usually runs between 100 and 200 words.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There is an unwieldy volume of unstructured text on the web that continues to grow\u0026nbsp;explosively,\u0026rdquo; says John Stasko, professor of Interactive Computing at Georgia Tech and part of the research team. 
\u0026ldquo;Social media text includes rich information on the public\u0026rsquo;s interests and opinions, and we hope this technique can start to uncover important patterns and ideas that exist in this data.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Georgia Tech researchers are developing their tool to allow for a broader cross section of ideas to surface on the social web \u0026ndash; anywhere from YouTube to Facebook to Reddit \u0026ndash; instead of simply relying on what social media influencers, such as celebrities or prominent public figures, post on their channels.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESentenTree eventually will be available online for users to upload their own datasets to visualize. The work, presented in October at the IEEE Vis 2016 conference in Baltimore, Maryland, is published in the paper \u0026ldquo;Visualizing Social Media Content with SentenTree.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E###\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EThis research is supported in part by the DARPA XDATA program and the National Science Foundation, Award IIS-1320537. The views and opinions expressed are those of the authors and do not necessarily represent the funding partners.\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA new visualization technique from the Georgia Institute of Technology could help users end the time-consuming habit of continually checking social media streams and endless updates. 
Where users might now commit minutes or hours on a single topic spanning thousands of posts, the Georgia Tech technique produces a single compiled social post that reads almost like a headline.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A new visualization technique from the Georgia Institute of Technology could help users end the time-consuming habit of continually checking social media streams and endless updates."}],"uid":"27592","created_gmt":"2016-12-07 17:37:52","changed_gmt":"2016-12-08 18:21:02","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2016-12-07T00:00:00-05:00","iso_date":"2016-12-07T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"584774":{"id":"584774","type":"image","title":"Sententree Visualization - Information Interfaces Group","body":null,"created":"1481131947","gmt_created":"2016-12-07 17:32:27","changed":"1481131947","gmt_changed":"2016-12-07 17:32:27","alt":"","file":{"fid":"222970","name":"Sententree.jpg","image_path":"\/sites\/default\/files\/images\/Sententree.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Sententree.jpg","mime":"image\/jpeg","size":193187,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Sententree.jpg?itok=l72ZRVfj"}},"394731":{"id":"394731","type":"image","title":"John Stasko","body":null,"created":"1449246346","gmt_created":"2015-12-04 16:25:46","changed":"1475895089","gmt_changed":"2016-10-08 02:51:29","alt":"John 
Stasko","file":{"fid":"75643","name":"stasko14.jpg","image_path":"\/sites\/default\/files\/images\/stasko14.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/stasko14.jpg","mime":"image\/jpeg","size":61355,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/stasko14.jpg?itok=7W7zKdFy"}},"584780":{"id":"584780","type":"image","title":"Mengdie Hu","body":null,"created":"1481136357","gmt_created":"2016-12-07 18:45:57","changed":"1481136357","gmt_changed":"2016-12-07 18:45:57","alt":"","file":{"fid":"222972","name":"Hu, Mengdie.jpg","image_path":"\/sites\/default\/files\/images\/Hu%2C%20Mengdie.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Hu%2C%20Mengdie.jpg","mime":"image\/jpeg","size":326346,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Hu%2C%20Mengdie.jpg?itok=-NzsPDIS"}}},"media_ids":["584774","394731","584780"],"related_links":[{"url":"http:\/\/www.cc.gatech.edu\/gvu\/ii\/","title":"Information Interfaces Group"},{"url":"http:\/\/www.news.gatech.edu\/2012\/04\/26\/how-twitter-broke-its-biggest-story-wegotbinladen","title":"How Twitter Broke Its Biggest Story, #WeGotBinLaden"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"11632","name":"john stasko"},{"id":"172916","name":"Mengdie Hu"},{"id":"7257","name":"visualization"},{"id":"172922","name":"information visualization"},{"id":"314","name":"twitter"},{"id":"172918","name":"world cup 2014"},{"id":"172921","name":"infoviz"},{"id":"4887","name":"GVU Center"},{"id":"172917","name":"sententree"},{"id":"172919","name":"Information Interfaces Group"}],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71901","name":"Society and 
Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\n678.231.0787\u003Cbr \/\u003E\r\nCommunications Officer\u003Cbr \/\u003E\r\nGVU Center and College of Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"584765":{"#nid":"584765","#data":{"type":"news","title":"Analysis of 2016 AP Computer Science Testing Reveals Ongoing Need for Qualified High School Teachers","body":[{"value":"\u003Cp\u003EAccording to recently released analysis from the Georgia Institute of Technology, 54,379 students\u0026nbsp;took the\u0026nbsp;Advanced Placement (AP) Computer Science (CS) A exam in the United States in\u0026nbsp;2016, setting a new record.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis\u0026nbsp;is a 17.3 percent increase over the previous year and great news, said\u0026nbsp;\u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/people\/barbara-ericson\u0022\u003E\u003Cstrong\u003EBarbara Ericson\u003C\/strong\u003E\u003C\/a\u003E, director of computing outreach for the\u0026nbsp;\u003Ca href=\u0022http:\/\/coweb.cc.gatech.edu\/ice-gt\/\u0022\u003EInstitute for Computing Education\u003C\/a\u003E\u0026nbsp;(ICE) at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In 2012 fewer than 25,000 students took the exam. So, more than doubling in five years is pretty good growth,\u0026rdquo; said Ericson.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite positive overall growth, however, a closer look at the data reveals mixed results for 2016.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile the number of female high school students taking the AP CS A exam last year increased by 25 percent over 2015, females still account for only 23 percent of exam takers. 
In eight states, fewer than 10 females took the exam. Mississippi and Montana had no females take the exam.\u003C\/p\u003E\r\n\r\n\u003Ch5\u003E\u003Cstrong\u003ETake an interactive look at \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/APCSexamfemaletesttakers2016\/Dashboard1?:embed=y\u0026amp;:display_count=yes\u0026amp;:showVizHome=no#9\u0022 target=\u0022_blank\u0022\u003Efemale test takers by state\u003C\/a\u003E\u003C\/strong\u003E\u003C\/h5\u003E\r\n\r\n\u003Cp\u003EThe number of African American students taking the AP CS A exam also increased by 14 percent this year, but the overall pass rate for these students decreased from 38 percent in the previous year to 33 percent in 2016. The top five states for the percentage of African Americans taking the exam in 2016 were: the District of Columbia, Maryland, Georgia, Oklahoma, and Louisiana. Nearly half of all states had fewer than 10 African American students take the AP CS A exam.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHispanic participation in the exam grew by 46 percent in 2016 with 6,256 students taking the test. The pass rate for this group increased just one percentage point to 42 percent during the same period. The top five states for the percentage of Hispanic students taking the exam were: New Mexico, Florida, Texas, Wyoming, and California. In all, 15 states had fewer than 10 Hispanics take the exam.\u003C\/p\u003E\r\n\r\n\u003Ch5\u003E\u003Cstrong\u003EMoving the needle forward\u003C\/strong\u003E\u003C\/h5\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;ve had positive overall growth, but we are still way below where we should be in general and especially for\u0026nbsp;underrepresented groups,\u0026rdquo; said Ericson. \u0026ldquo;We need to be where AP calculus is, which had nearly 300,000 students taking the course this year.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo move the needle forward on this goal, Ericson said more qualified teachers are needed. 
\u0026ldquo;The biggest bottleneck right now to achieving more participation and more diversity is that there are not nearly enough trained educators who can effectively teach and prepare students to succeed on the AP CS A exam,\u0026rdquo; said Ericson.\u003C\/p\u003E\r\n\r\n\u003Ch5\u003E\u003Cstrong\u003EExamine the \u003Ca href=\u0022http:\/\/home.cc.gatech.edu\/ice-gt\/595\u0022 target=\u0022_blank\u0022\u003Ecomplete results\u003C\/a\u003E of the 2016 analysis\u003C\/strong\u003E\u0026nbsp;\u003C\/h5\u003E\r\n\r\n\u003Cp\u003EAlthough she doesn\u0026rsquo;t see an immediate solution to the problem, Ericson is optimistic that the new AP CSP (computer\u0026nbsp;science\u0026nbsp;principles) course launched this year by the College Board will help bring more qualified teachers to the table. With more of a focus on problem solving, creativity, and the impact of computing innovations, the CSP course is primarily intended for non-CS majors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Along with paving the way for more diversity in the A course,\u0026rdquo; said Ericson, \u0026ldquo;CSP is an easier place for teachers to get started if they don\u0026rsquo;t have any prior experience. We\u0026rsquo;ve developed several free interactive e-books intended to help teachers, especially with programming because that\u0026rsquo;s the part they don\u0026rsquo;t know or are afraid of. 
Once they\u0026rsquo;ve mastered the CSP course, our hope is that they will move on to become qualified for the A\u0026nbsp;course.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Despite improvement, results of the 2016 AP CS A test show more high school teachers are needed."}],"uid":"32045","created_gmt":"2016-12-07 15:22:49","changed_gmt":"2016-12-08 17:02:22","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2016-12-07T00:00:00-05:00","iso_date":"2016-12-07T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"584767":{"id":"584767","type":"image","title":"2016 AP CS A female participation by state","body":null,"created":"1481124528","gmt_created":"2016-12-07 15:28:48","changed":"1481124528","gmt_changed":"2016-12-07 15:28:48","alt":"2016 AP CS A female participation by state","file":{"fid":"222969","name":"AP CS A exam 2016.jpg","image_path":"\/sites\/default\/files\/images\/AP%20CS%20A%20exam%202016.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/AP%20CS%20A%20exam%202016.jpg","mime":"image\/jpeg","size":219364,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/AP%20CS%20A%20exam%202016.jpg?itok=V7ZmmjP2"}}},"media_ids":["584767"],"related_links":[{"url":"http:\/\/home.cc.gatech.edu\/ice-gt\/595","title":"2016 AP CS A Exam Results Analysis"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[{"id":"42911","name":"Education"}],"keywords":[{"id":"172913","name":"AP CS A"},{"id":"87891","name":"Barb Ericson; Barbara Ericson; CS; AP Computer Science; Women; Minorities; Computer Science 
Education"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert \u0026quot;Ben\u0026quot; Snedeker, Communications Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E404-894-7253\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"583212":{"#nid":"583212","#data":{"type":"news","title":"Learning Morse Code without Trying","body":[{"value":"\u003Cp\u003EIt\u0026rsquo;s not exactly beating something into someone\u0026rsquo;s head. More like tapping it into the side.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers at the Georgia Institute of Technology have developed a system that teaches people Morse code within four hours using a series of vibrations felt near the ear. Participants wearing Google Glass learned it without paying attention to the signals \u0026mdash;they played games while feeling the taps and hearing the corresponding letters. After those few hours, they were 94 percent accurate keying a sentence that included every letter of the alphabet and 98 percent accurate writing codes for every letter.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis is the latest chapter of passive haptic learning (PHL) studies at Georgia Tech. 
The same method \u0026mdash; using vibrations while participants aren\u0026rsquo;t paying attention \u0026mdash; \u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2014\/06\/23\/wearable-computing-gloves-can-teach-braille-even-if-you%E2%80%99re-not-paying-attention\u0022\u003Ehas taught people braille\u003C\/a\u003E, \u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2008\/11\/07\/reinventing-way-people-learn-play-piano\u0022\u003Ehow to play the piano\u003C\/a\u003E and \u003Ca href=\u0022http:\/\/www.news.gatech.edu\/hg\/item\/140221\u0022\u003Eimproved hand sensation for those with partial spinal cord injury. \u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe PHL projects are all led by Georgia Tech Professor Thad Starner and his Ph.D. student Caitlyn Seim. The team decided to use Glass for this study because it has both a built-in speaker and tapper (Glass\u0026rsquo;s bone-conduction transducer).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the study, participants played a game while feeling vibration taps between their temple and ear. The taps represented the dots and dashes of Morse code and passively \u0026ldquo;taught\u0026rdquo; users through their tactile senses \u0026mdash; even while they were distracted by the game.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe taps were created when researchers sent a very low-frequency signal to Glass\u0026rsquo;s speaker system. At less than 15 Hz, the signal was below hearing range but, because it was played very slowly, the sound was felt as a vibration.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHalf of the participants in the study felt the vibration taps and heard a voice prompt for each corresponding letter. 
The other half \u0026mdash; the control group \u0026mdash; felt no taps to help them learn.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EParticipants were tested throughout the study on their knowledge of Morse code and their ability to type it.\u0026nbsp; After less than four hours of feeling every letter, everyone was challenged to type the alphabet in Morse code in a final test.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe control group was accurate only half the time.\u0026nbsp; Those who felt the passive cues were nearly perfect.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research was recently presented in Germany at the 20\u003Csup\u003Eth\u003C\/sup\u003E International Symposium on Wearable Computers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Does this new study mean that people will rush out to learn Morse code? Probably not,\u0026rdquo; said Starner. \u0026ldquo;It shows that PHL lowers the barrier to learn text-entry methods \u0026mdash; something we need for smartwatches and any text-entry that doesn\u0026rsquo;t require you to look at your device or keyboard.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrevious research on PHL used custom hardware to provide the tactile stimuli, but here researchers use an existing wearable device.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This research also shows that other common devices with an actuator could be used for passive haptic learning,\u0026rdquo; he says. \u0026ldquo;Your smartwatch, Bluetooth headset, fitness tracker or phone.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In our Braille and piano PHL studies, people felt vibrations on their fingers, then used their fingers for the task,\u0026rdquo; said Seim. \u0026ldquo;This study was different and surprising. 
People were tapped on their heads, but the skill they learned was using their finger.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim\u0026rsquo;s next study will go a step further, investigating whether PHL can teach people how to type on the trusted QWERTY keyboard. That would mean several letters assigned to the same finger, rather than using only one finger like Morse code.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EThe work is supported in part by the National Science Foundation (Grant Number 1217473). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors. \u003C\/em\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"New study demonstrates silent, eyes-free text entry"}],"field_summary":[{"value":"\u003Cp\u003EResearchers have developed a system that teaches people Morse code within four hours using a series of vibrations felt near the ear. Participants wearing Google Glass learned it without paying attention to the signals \u0026mdash;they played games while feeling the taps and hearing the corresponding letters. 
After those few hours, they were 94 percent accurate keying a sentence that included every letter of the alphabet and 98 percent accurate writing codes for every letter.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Researchers have developed a system that teaches people Morse code within four hours using a series of vibrations felt near the ear"}],"uid":"27560","created_gmt":"2016-10-27 15:49:38","changed_gmt":"2016-10-27 15:49:38","author":"Jason Maderer","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2016-10-27T00:00:00-04:00","iso_date":"2016-10-27T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"416531":{"id":"416531","type":"image","title":"Thad Starner","body":null,"created":"1449254258","gmt_created":"2015-12-04 18:37:38","changed":"1475895155","gmt_changed":"2016-10-08 02:52:35","alt":"Thad Starner","file":{"fid":"202549","name":"thad_starner_2.jpg","image_path":"\/sites\/default\/files\/images\/thad_starner_2_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/thad_starner_2_0.jpg","mime":"image\/jpeg","size":120584,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/thad_starner_2_0.jpg?itok=CYln5AeS"}},"583210":{"id":"583210","type":"image","title":"Morse Code 2","body":null,"created":"1477581892","gmt_created":"2016-10-27 15:24:52","changed":"1477581892","gmt_changed":"2016-10-27 15:24:52","alt":"","file":{"fid":"222323","name":"InputTest2.jpeg","image_path":"\/sites\/default\/files\/images\/InputTest2.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/InputTest2.jpeg","mime":"image\/jpeg","size":23120,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/InputTest2.jpeg?itok=2I_KcCWX"}},"583209":{"id":"583209","type":"image","title":"Morse Code 1","body":null,"created":"1477581798","gmt_created":"2016-10-27 
15:23:18","changed":"1477585830","gmt_changed":"2016-10-27 16:30:30","alt":"","file":{"fid":"222322","name":"tap2.jpeg","image_path":"\/sites\/default\/files\/images\/tap2.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/tap2.jpeg","mime":"image\/jpeg","size":95725,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/tap2.jpeg?itok=ZgZet6-h"}}},"media_ids":["416531","583210","583209"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1183","name":"Home"},{"id":"1214","name":"News Room"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[{"id":"135","name":"Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"1944","name":"Thad Starner"},{"id":"82341","name":"Google Glass"},{"id":"132141","name":"wearables"},{"id":"172604","name":"Morse Code"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJason Maderer\u003Cbr \/\u003E\r\nNational Media Relations\u003Cbr \/\u003E\r\nmaderer@gatech.edu\u003Cbr \/\u003E\r\n404-660-2926\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["maderer@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"582752":{"#nid":"582752","#data":{"type":"news","title":"Wireless, Freely Behaving Rodent Cage Helps Scientists Collect More Reliable Data","body":"","field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EInstead of building a better mouse trap, Georgia Institute of Technology researchers have built a better mouse cage. They\u0026rsquo;ve created a system called EnerCage (Energized Cage) for scientific experiments on awake, freely behaving small animals. 
It wirelessly powers electronic devices and sensors traditionally used during rodent research experiments, but without the use of interconnect wires or bulky batteries. Their goal is to create as natural an environment within the cage as possible for mice and rats in order for scientists to obtain consistent and reliable results. The EnerCage system also uses Microsoft\u0026rsquo;s Kinect video game technology to track the animals and recognize their activities, automating a process that typically requires researchers to stand and directly observe the rodents or watch countless hours of recorded footage to determine how they react to experiments.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERead the rest of the article here:\u0026nbsp;\u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2016\/09\/28\/wireless-freely-behaving-rodent-cage-helps-scientists-collect-more-reliable-data\u0022\u003Ehttp:\/\/www.news.gatech.edu\/2016\/09\/28\/wireless-freely-behaving-rodent-cage-helps-scientists-collect-more-reliable-data\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"System uses video game technology to track lab animal behavior"}],"uid":"28466","created_gmt":"2016-10-18 19:23:32","changed_gmt":"2016-10-18 20:52:14","author":"Meghana Melkote","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2016-09-28T00:00:00-04:00","iso_date":"2016-09-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"538611":{"#nid":"538611","#data":{"type":"news","title":"New Technique Controls Autonomous Vehicles in Extreme Conditions","body":[{"value":"\u003Cp\u003EA Georgia 
Institute of Technology research team has devised a novel way to help keep a driverless vehicle under control as it maneuvers at the edge of its handling limits. The approach could help make self-driving cars of the future safer under hazardous road conditions.\u003C\/p\u003E\u003Cp\u003EResearchers from Georgia Tech\u2019s Daniel Guggenheim School of Aerospace Engineering (AE) and the School of Interactive Computing (IC) have assessed the new technology by racing, sliding, and jumping one-fifth-scale, fully autonomous auto-rally cars at the equivalent of 90 mph. The technique uses advanced algorithms and onboard computing, in concert with installed sensing devices, to increase vehicular stability while maintaining performance.\u003C\/p\u003E\u003Cp\u003EThe work, tested at the Georgia Tech Autonomous Racing Facility, is sponsored by the U.S. Army Research Office. A paper covering this research was presented at the recent International Conference on Robotics and Automation (ICRA), held May 16-21.\u003C\/p\u003E\u003Cp\u003E\u201cAn autonomous vehicle should be able to handle any condition, not just drive on the highway under normal conditions,\u201d said Panagiotis Tsiotras, an AE professor who is an expert on the mathematics behind rally-car racing control. \u201cOne of our principal goals is to infuse some of the expert techniques of human drivers into the brains of these autonomous vehicles.\u201d\u003C\/p\u003E\u003Cp\u003ETraditional robotic-vehicle techniques use the same control approach whether a vehicle is driving normally or at the edge of roadway adhesion, Tsiotras explained. The Georgia Tech method \u2013 known as model predictive path integral control (MPPI) \u2013 was developed specifically to address the non-linear dynamics involved in controlling a vehicle near its friction limits. 
\u003Cbr \/\u003E \u003Cbr \/\u003E\u003Cstrong\u003EUtilizing Advanced Concepts\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u201cAggressive driving in a robotic vehicle \u2013 maneuvering at the edge \u2013 is a unique control problem involving a highly complex system,\u201d said Evangelos Theodorou, an AE assistant professor who is leading the project. \u201cHowever, by merging statistical physics with control theory, and utilizing leading-edge computation, we can create a new perspective, a new framework, for control of autonomous systems.\u201d\u003C\/p\u003E\u003Cp\u003EThe Georgia Tech researchers used a stochastic trajectory-optimization capability, based on a path-integral approach, to create their MPPI control algorithm, Theodorou explained. Using statistical methods, the team integrated large amounts of handling-related information, together with data on the dynamics of the vehicular system, to compute the most stable trajectories from myriad possibilities.\u003C\/p\u003E\u003Cp\u003EProcessed by the high-power graphics processing unit (GPU) that the vehicle carries, the MPPI control algorithm continuously samples data coming from global positioning system (GPS) hardware, inertial motion sensors, and other sensors. The onboard hardware-software system performs real-time analysis of a vast number of possible trajectories and relays optimal handling decisions to the vehicle moment by moment.\u003C\/p\u003E\u003Cp\u003EIn essence, the MPPI approach combines both the planning and execution of optimized handling decisions into a single highly efficient phase. 
It\u2019s regarded as the first technology to carry out this computationally demanding task; in the past, optimal-control data inputs could not be processed in real time.\u003Cbr \/\u003E \u003Cbr \/\u003E\u003Cstrong\u003EFully Autonomous Vehicles\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EThe researchers\u2019 two auto-rally vehicles \u2013 custom built by the team \u2013 utilize special electric motors to achieve the right balance between weight and power. The cars carry a motherboard with a quad-core processor, a potent GPU, and a battery.\u003C\/p\u003E\u003Cp\u003EEach vehicle also has two forward-facing cameras, an inertial measurement unit, and a GPS receiver, along with sophisticated wheel-speed sensors. The power, navigation, and computation equipment is housed in a rugged aluminum enclosure able to withstand violent rollovers. Each vehicle weighs about 48 pounds and is about three feet long.\u003C\/p\u003E\u003Cp\u003EThese rolling robots are able to test the team\u2019s control algorithms without any need for off-vehicle devices or computation, except for a nearby GPS receiver. The onboard GPU lets the MPPI algorithm sample more than 2,500 trajectories, each 2.5 seconds long, in under 1\/60 of a second.\u003C\/p\u003E\u003Cp\u003EAn important aspect in the team\u2019s autonomous-control approach centers on the concept of \u201ccosts\u201d \u2013 key elements of system functionality. Several cost components must be carefully matched to achieve optimal performance.\u003C\/p\u003E\u003Cp\u003EIn the case of the Georgia Tech vehicles, the costs consist of three main areas: the cost for staying on the track, the cost for achieving a desired velocity, and the cost of the control system. 
A sideslip-angle cost was also added to improve vehicle stability.\u003C\/p\u003E\u003Cp\u003EThe cost approach is important to enabling a robotic vehicle to maximize speed while staying under control, explained James Rehg, a professor in the Georgia Tech School of Interactive Computing who is collaborating with Theodorou and Tsiotras.\u003C\/p\u003E\u003Cp\u003EIt\u2019s a complex balancing act, Rehg said. For example, when the researchers reduced one cost term to try to prevent vehicle sliding, they found they got increased drifting behavior.\u003C\/p\u003E\u003Cp\u003E\u201cWhat we\u0027re talking about here is using the MPPI algorithm to achieve relative entropy minimization \u2013 and adjusting costs in the most effective way is a big part of that,\u201d he said. \u201cTo achieve the optimal combination of control and performance in an autonomous vehicle is definitely a non-trivial problem.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EResearch News\u003C\/strong\u003E\u003Cbr \/\u003E\u003Cstrong\u003EGeorgia Institute of Technology\u003C\/strong\u003E\u003Cbr \/\u003E\u003Cstrong\u003E177 North Avenue\u003C\/strong\u003E\u003Cbr \/\u003E\u003Cstrong\u003EAtlanta, Georgia 30332-0181 USA\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EMedia Relations Contacts\u003C\/strong\u003E: Jason Maderer (\u003Ca href=\u0022mailto:jason.maderer@comm.gatech.edu\u0022\u003Ejason.maderer@comm.gatech.edu\u003C\/a\u003E) (404-385-2966) or John Toon (\u003Ca href=\u0022mailto:jtoon@gatech.edu\u0022\u003Ejtoon@gatech.edu\u003C\/a\u003E) (404-894-6986).\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EWriter\u003C\/strong\u003E: Rick Robinson\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"MPPI strategy helps self-driving, robotic vehicles maintain control at edge of handling limits"}],"field_summary":[{"value":"\u003Cp\u003EA Georgia Institute of Technology research team has devised a novel way to help keep 
a driverless vehicle under control as it maneuvers at the edge of its handling limits. The approach could help make self-driving cars of the future safer under hazardous road conditions.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers have devised a novel way to help keep a driverless vehicle under control as it maneuvers at the edge of its handling limits."}],"uid":"27303","created_gmt":"2016-05-23 10:04:14","changed_gmt":"2016-10-08 03:21:42","author":"John Toon","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2016-05-23T00:00:00-04:00","iso_date":"2016-05-23T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"538541":{"id":"538541","type":"image","title":"autonomous racing vehicle","body":null,"created":"1464703200","gmt_created":"2016-05-31 14:00:00","changed":"1475895326","gmt_changed":"2016-10-08 02:55:26","alt":"autonomous racing vehicle","file":{"fid":"89509","name":"autonomoous-racing1-horiz.jpg","image_path":"\/sites\/default\/files\/images\/autonomoous-racing1-horiz.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/autonomoous-racing1-horiz.jpg","mime":"image\/jpeg","size":1348509,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/autonomoous-racing1-horiz.jpg?itok=UvkmehQD"}},"538561":{"id":"538561","type":"image","title":"Researchers with autonomous racing vehicle","body":null,"created":"1464703200","gmt_created":"2016-05-31 14:00:00","changed":"1475895326","gmt_changed":"2016-10-08 02:55:26","alt":"Researchers with autonomous racing 
vehicle","file":{"fid":"89511","name":"autonomous-racing2.jpg","image_path":"\/sites\/default\/files\/images\/autonomous-racing2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/autonomous-racing2.jpg","mime":"image\/jpeg","size":2057339,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/autonomous-racing2.jpg?itok=Zvlu7z7b"}},"538571":{"id":"538571","type":"image","title":"autonomous racing vehicle2","body":null,"created":"1464703200","gmt_created":"2016-05-31 14:00:00","changed":"1475895326","gmt_changed":"2016-10-08 02:55:26","alt":"autonomous racing vehicle2","file":{"fid":"89512","name":"autonomous-racing1.jpg","image_path":"\/sites\/default\/files\/images\/autonomous-racing1.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/autonomous-racing1.jpg","mime":"image\/jpeg","size":1192803,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/autonomous-racing1.jpg?itok=-hRsGuUW"}}},"media_ids":["538541","538561","538571"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[{"id":"136","name":"Aerospace"},{"id":"145","name":"Engineering"},{"id":"147","name":"Military Technology"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"7264","name":"autonomous"},{"id":"97281","name":"autonomous vehicles"},{"id":"172051","name":"control system"},{"id":"170305","name":"driverless"},{"id":"205","name":"GPU"},{"id":"667","name":"robotics"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJason Maderer\u003C\/p\u003E\u003Cp\u003E\u003Ca 
href=\u0022mailto:jason.maderer@comm.gatech.edu\u0022\u003Ejason.maderer@comm.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E(404) 385-2966\u003C\/p\u003E","format":"limited_html"}],"email":["jason.maderer@comm.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"534651":{"#nid":"534651","#data":{"type":"news","title":"Georgia Tech Research Finds Fan Communities Are Reshaping the Social Web for the Better","body":[{"value":"\u003Cp\u003EModern fan groups predate the Internet by more than half a century (think Star Trek conventions), and their shared interests include everything from science fiction to knitting. But replicating the connections fans make in person in a digital space has proved difficult. Instead, groups with special interests are often forced onto Facebook and other social media with a one-size-fits-all approach to interacting online.\u003C\/p\u003E\u003Cp\u003EIn a new study, Georgia Institute of Technology researchers have found one group of fan fiction writers that has created a successful online community, which might serve as a model to help make the future social web markedly different from today\u2019s landscape.\u003C\/p\u003E\u003Cp\u003EBy adopting a user-centric approach to design, this community has created a rarity on the web, a \u201cdigital commons\u201d without advertising where harassment is almost nonexistent, and a large installed audience enjoys a culture of genuine diversity.\u003C\/p\u003E\u003Cp\u003EThe study, from Georgia Tech and University of Colorado-Boulder, is based on the website \u003Ca href=\u0022https:\/\/archiveofourown.org\/\u0022\u003EArchive of Our Own\u003C\/a\u003E (AO3), an 840,000 member community of fan fiction or \u201cfanfic\u201d writers who post and share user-generated content. The site was launched in 2008 and boasts nearly 2 million story posts to date. Its web traffic outpaces such heavyweights as CareerBuilder\u0026nbsp;and FoxSports, among others, ranking number 418 in U.S. 
web metrics, according to \u003Ca href=\u0022http:\/\/alexa.com\/\u0022\u003Ealexa.com\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003E\u201cAO3\u2019s success demonstrates how beneficial it is to have a technology\u2019s users as part of its development team,\u201d said Casey Fiesler, lead researcher on the study while a Ph.D. candidate at Georgia Tech, and now assistant professor at University of Colorado-Boulder.\u003C\/p\u003E\u003Cp\u003E\u201cThis is particularly striking when users are mostly women, who are traditionally underrepresented in tech. Because there was no existing technology that reflected their values, they built their own and it has been massively successful.\u201d\u003C\/p\u003E\u003Cp\u003EA small team of coders, coordinators and designers from the ranks of AO3 members took input from users and coupled it with the guiding values of the fan fiction community \u2013 which are accessibility and inclusivity \u2013 to create the basic structure of AO3. After more than eight years, this structure remains largely unchanged.\u003C\/p\u003E\u003Cp\u003EDuring interviews with users and developers, researchers discovered that AO3\u2019s intentional design approach, which baked the ethos of the community right into the website, accounts for much of the site\u2019s organic growth and success.\u003C\/p\u003E\u003Cp\u003E\u201cWhat makes the rise of this online platform exceptional is that it was built primarily by its fans, some of whom started with little or no programming experience,\u201d said Amy Bruckman, a professor of Interactive Computing at Georgia Tech and author on the study.\u003C\/p\u003E\u003Cp\u003EShe added, \u201cFanfic writers, mostly women, who felt exploited or that other platforms weren\u2019t meeting their needs, started this open source project and invited the larger community of fanfic writers to provide input. 
AO3 is a case study in building a digital commons around a group of users and addressing nuanced technical issues in order to successfully engage the community.\u201d\u003C\/p\u003E\u003Cp\u003EOne of the technical issues AO3 faced early on, tag structure, has since become a favorite feature and essential to the website\u2019s success. Designers did not limit what or how many tags can be used with published stories, but rather created an open-ended system. AO3 \u201ctag wranglers,\u201d member volunteers, manually combine tags submitted by users (such as \u201cmermaid,\u201d \u201cmerman,\u201d and \u201cmerfolk\u201d) into one meta tag (\u201cmerpeople\u201d), allowing for a robust search of multiple terms.\u003C\/p\u003E\u003Cp\u003EThis level of control allows users to find a wide cross section of relevant content, something that is often not possible on other platforms beyond giant search engines, according to the research. Fiesler notes that the tag system also gives writers more control over how to describe their work, and this contributes to the inclusiveness and diversity of the community.\u003C\/p\u003E\u003Cp\u003EBut like any online space, there are competing values among users. Anonymity, like elsewhere on the web, can allow for more openness and sharing, but it can also invite harassment. To limit this, the AO3 site allows users to post comments anonymously, but it also allows users to turn off incoming anonymous comments so they do not have to see them. The site also prohibits the intentional \u201couting\u201d (revealing real identities) of users, does not offer tier accounts and never collects personal data. All of this means the AO3 community can enjoy a high degree of privacy while respecting the rights of all of its users.\u003C\/p\u003E\u003Cp\u003EAlthough AO3 makes every effort to limit harassment, it does not censor or restrict content on the site, unless it is illegal. 
However, to ensure readers know they are reading content \u201cat their own risk,\u201d warning labels are required on mature content that is posted.\u003C\/p\u003E\u003Cp\u003EAnother concern among users is how to preserve the entirety of the archive while also respecting users\u2019 rights to erase their own work. AO3 again turned to its members for a solution. For writers wanting to remove their fiction, the site gives them the option to \u201corphan\u201d their work. This removes their pseudonym or name from the work, but allows the content to remain in the community.\u003C\/p\u003E\u003Cp\u003E\u201cOther sites would do well to understand their users as well as AO3 does in order to achieve long-term goals and address some of the emerging issues on the social web, such as those involving harassment, privacy, security and sustainability,\u201d says Fiesler, the lead researcher.\u003C\/p\u003E\u003Cp\u003EThe research, \u201c\u003Ca href=\u0022https:\/\/cfiesler.files.wordpress.com\/2016\/02\/chi2016_ao3_fiesler.pdf\u0022\u003EAn Archive of Their Own: A Case Study of Feminist HCI and Values in Design\u003C\/a\u003E,\u201d co-authored by Fiesler, Bruckman and Shannon Morrison (a former visiting undergraduate at Georgia Tech), will be presented at CHI 2016, the Association for Computing Machinery\u2019s Conference on Human Factors in Computing Systems, taking place May 7-12 in San Jose, Calif. The conference is the largest gathering of human-computer interaction researchers worldwide, with more than 2,000 authors in this year\u2019s technical program.\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003EResearch was funded by NSF IIS Award #1216347. 
The views expressed are those of the researchers and do not necessarily represent those of the National Science Foundation.\u003C\/em\u003E\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EIn a new study, Georgia Institute of Technology researchers have found that one successful online community could serve as a model to help make the future social web a safer, more inclusive space.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"In a new study, Georgia Institute of Technology researchers have found that one successful online community could serve as a model to help make the future social web a safer, more inclusive space."}],"uid":"27592","created_gmt":"2016-05-09 11:44:28","changed_gmt":"2016-10-08 03:21:39","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2016-05-09T00:00:00-04:00","iso_date":"2016-05-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"534751":{"id":"534751","type":"image","title":"Casey Fiesler and Amy Bruckman","body":null,"created":"1462910400","gmt_created":"2016-05-10 20:00:00","changed":"1475895319","gmt_changed":"2016-10-08 02:55:19","alt":"Casey Fiesler and Amy Bruckman","file":{"fid":"88795","name":"casey_and_amy_web.jpg","image_path":"\/sites\/default\/files\/images\/casey_and_amy_web_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/casey_and_amy_web_0.jpg","mime":"image\/jpeg","size":303960,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/casey_and_amy_web_0.jpg?itok=52nlSTRA"}},"534561":{"id":"534561","type":"image","title":"CHI 2016 - Web Culture Research, Archive of Our Own","body":null,"created":"1462892400","gmt_created":"2016-05-10 15:00:00","changed":"1475895319","gmt_changed":"2016-10-08 02:55:19","alt":"CHI 2016 - Web Culture Research, Archive of Our 
Own","file":{"fid":"88787","name":"a03_merpeople_screenshot.jpg","image_path":"\/sites\/default\/files\/images\/a03_merpeople_screenshot.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/a03_merpeople_screenshot.jpg","mime":"image\/jpeg","size":359271,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/a03_merpeople_screenshot.jpg?itok=g_oESPme"}}},"media_ids":["534751","534561"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[{"id":"143","name":"Digital Media and Entertainment"}],"keywords":[{"id":"167543","name":"social media"},{"id":"172017","name":"web culture"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"},{"id":"71901","name":"Society and Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003ECollege of Computing, GVU Center\u003Cbr \/\u003E678.231.0787\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"513541":{"#nid":"513541","#data":{"type":"news","title":"\u2018Civic Computing\u2019 workshop leads an unlikely group of youth to help advance metro city\u2019s vision","body":[{"value":"\u003Cp\u003EDefining community in the digital age is often a nuanced exercise that involves looking at social connections far beyond where one works and lives. 
But even in an age of tweets, texts, and video chats, young people are willing to use their voice to support and shape the communities in which they live.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOne such group of College Park students\u2014who are completing their high school credits at The Bridge Academy\u2014recently participated in the Georgia Tech design[ED] Lab workshop where they reviewed College Park\u2019s 20-year Comprehensive Plan (2011 \u2013 2031) to identify community issues and create computing technology solutions that could enhance community engagement and increase career opportunities for young people. The students chose to address three key issues from the city\u2019s policy guide: perception of crime, decreasing standardized test scores, and impact of crime on youth.\u003C\/p\u003E\u003Cp\u003E\u201cFor six weeks, the students were exposed to a design-thinking process and tools for creating user-focused technology that gave them an understanding of how to frame and tackle challenges within their community,\u201d says Monet Spells, graduate student in the Master of Science in Human-Computer Interaction program and workshop organizer.\u003C\/p\u003E\u003Cp\u003EStudents in the program brainstormed solutions, iterated on the prototypes, and critiqued their peers\u2019 work to come up with three viable technology concepts. 
These concepts were displayed in February at a special event open to the public at the Museum of Design Atlanta.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cGiving students an authentic opportunity to present their work\u2014such as at the MODA public exhibit\u2014acts as a motivation for students to engage with learning, take ownership of their projects, and to see their efforts pay off,\u201d says Betsy DiSalvo, assistant professor in Interactive Computing and Spells\u2019 advisor.\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003EResults included:\u0026nbsp;\u003C\/h4\u003E\u003Cul\u003E\u003Cli\u003EA physical prototype and supporting mobile application wireframe to change the perception of crime for the benefit of College Park citizens and businesses by highlighting positive things happening in the community.\u003C\/li\u003E\u003Cli\u003EA customized test preparation system, using hip-hop music to motivate and prepare students to increase standardized test scores, which could otherwise limit post-secondary and future opportunities.\u003C\/li\u003E\u003Cli\u003EA social network to address the impact of crime on College Park youth, by providing tips for resisting peer pressure, sharing community events, and facilitating a healthy relationship with law enforcement.\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003E\u201cIt was very important that the students\u0027 solutions addressed practical and verified problems in the community,\u201d says Spells. \u201cThe College Park Comprehensive Plan allowed us to pursue validated, researched, high-level problem spaces that the community\u2019s elected officials are seeking to address over the next 20 years.\u201d\u003C\/p\u003E\u003Cp\u003ESpells says the lab also aimed to expose underrepresented minorities to design-thinking as a method to solve important problems and empower young people with the tools to make a difference and inspire change. 
For example, some students\u0026nbsp;on their own accord are learning the technical skills required to pursue their ideas beyond the workshop by refining their designs and apps.\u003C\/p\u003E\u003Cp\u003E\u201cThe public exhibit brought the work of these young people and their insights about the city to the attention of city council members, who have invited the students to present their ideas in other public forums,\u201d says DiSalvo.\u003C\/p\u003E\u003Cp\u003ESpells, who will graduate in May, is part of the\u0026nbsp;\u003Ca href=\u0022http:\/\/catlab.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003ECulture and Technology Lab\u003C\/a\u003E\u0026nbsp;at Georgia Tech, directed by DiSalvo, which aims to understand how culture impacts people\u2019s practices with technology and designing new learning interventions with these understandings. Spells was also named the GVU Center\u2019s inaugural\u0026nbsp;\u003Ca href=\u0022http:\/\/gvu.gatech.edu\/foley-scholar-finalistsgvu-dist-masters-student-2015-16-feature-story\u0022 target=\u0022_blank\u0022\u003EDistinguished Master\u2019s Student\u003C\/a\u003E\u0026nbsp;this academic year for her work with underrepresented minorities and women in technology-enhanced dance performance.\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ECollege Park students\u2014who are completing their high school credits at The Bridge Academy\u2014recently participated in the Georgia Tech design[ED] Lab workshop where they reviewed College Park\u2019s 20-year Comprehensive Plan (2011 \u2013 2031) to identify community issues and create computing technology solutions that could enhance community engagement and increase career opportunities for young people.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Even in an age of tweets, texts, and video chats, young people are willing to use their voice to support and shape the communities in which 
they live."}],"uid":"27592","created_gmt":"2016-03-15 11:35:41","changed_gmt":"2016-10-08 03:21:05","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2016-03-15T00:00:00-04:00","iso_date":"2016-03-15T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"513551":{"id":"513551","type":"image","title":"College Park students at MODA with MS HCI student Monet Spells","body":null,"created":"1458923790","gmt_created":"2016-03-25 16:36:30","changed":"1475895277","gmt_changed":"2016-10-08 02:54:37","alt":"College Park students at MODA with MS HCI student Monet Spells","file":{"fid":"205054","name":"designedlab_workshop_for_college_park_students_cr.jpg","image_path":"\/sites\/default\/files\/images\/designedlab_workshop_for_college_park_students_cr_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/designedlab_workshop_for_college_park_students_cr_0.jpg","mime":"image\/jpeg","size":1167827,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/designedlab_workshop_for_college_park_students_cr_0.jpg?itok=T2YXbXx8"}},"513561":{"id":"513561","type":"image","title":"Monet Spells (MS HCI sudent)","body":null,"created":"1458923790","gmt_created":"2016-03-25 16:36:30","changed":"1475895277","gmt_changed":"2016-10-08 02:54:37","alt":"Monet Spells (MS HCI sudent)","file":{"fid":"205055","name":"pub_monet_spells.jpg","image_path":"\/sites\/default\/files\/images\/pub_monet_spells_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/pub_monet_spells_0.jpg","mime":"image\/jpeg","size":156266,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/pub_monet_spells_0.jpg?itok=yauq849o"}},"355701":{"id":"355701","type":"image","title":"Betsy DiSalvo - Compressed","body":null,"created":"1449245756","gmt_created":"2015-12-04 
16:15:56","changed":"1475895087","gmt_changed":"2016-10-08 02:51:27","alt":"Betsy DiSalvo - Compressed","file":{"fid":"202045","name":"betsy-disalvo.jpg","image_path":"\/sites\/default\/files\/images\/betsy-disalvo.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/betsy-disalvo.jpg","mime":"image\/jpeg","size":13256,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/betsy-disalvo.jpg?itok=i8iysekB"}}},"media_ids":["513551","513561","355701"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJoshua Preston\u003Cbr \/\u003EGVU Center and College of Computing\u003Cbr \/\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003Ejpreston@cc.gatech.edu\u003C\/a\u003E\u003Cbr \/\u003E678.231.0787\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"469201":{"#nid":"469201","#data":{"type":"news","title":"Georgia Tech trains Watson AI to \u0027chat,\u0027 spark more creativity in humans","body":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers are exploring and pushing the boundaries of artificial intelligence (AI) by partnering with one\u0026nbsp;of AI\u2019s most\u0026nbsp;notable citizens \u2014\u0026nbsp;IBM\u2019s Watson \u2014 to\u0026nbsp;advance how computers could help humans creatively solve problems in a wide variety of professions.\u003C\/p\u003E\u003Cp\u003E\u201cSearching Google still requires a lot of search,\u201d says Ashok Goel, professor at Georgia Tech\u2019s School of Interactive Computing. 
\u201cImagine if you could ask Google a complicated question and it immediately responded with your answer \u2014 not just a list of links to manually open. That\u2019s what we did with Watson.\u201d\u003C\/p\u003E\u003Cp\u003EWatson was trained by student teams in a class at Georgia Tech using 1,200 question-answer pairs (200 for each of six teams), which allowed them to \u201cchat\u201d with Watson and seek out inspiration for big design challenges in areas such as engineering, architecture, systems, and computing. The teams worked with the AI to learn about solutions that could be replicated from the natural world \u2014\u0026nbsp;something known as biologically inspired design \u2014\u0026nbsp;after first feeding Watson several hundred\u0026nbsp;biology articles\u0026nbsp;from\u0026nbsp;\u003Cem\u003EBiologue\u003C\/em\u003E, an interactive biology repository. Teams then posed questions to Watson about the research it had learned.\u003C\/p\u003E\u003Cp\u003EQuestions included, \u201cHow do you make a better desalination process for consuming sea water?\u201d Animals, it turns out, have a variety of answers for this, such as how seagulls filter out seawater salt through special glands. Another question asked, \u201cHow can manufacturers develop better solar cells for long-term space travel?\u201d One answer: Replicate how plants in harsh climates use high-temperature fibrous insulation material to regulate temperature. IBM\u2019s Watson quickly culled answers for students from the \u003Cem\u003EBiologue\u003C\/em\u003E articles in a fraction of a second.\u003C\/p\u003E\u003Cp\u003EWatson effectively acted as an intelligent sounding board to steer students through what would otherwise be a daunting task of parsing a wide volume of research that may fall outside their expertise. 
This approach to using Watson could assist professionals in a variety of fields by allowing them to ask questions and receive answers as quickly as in natural conversation to help with problem solving.\u003C\/p\u003E\u003Cp\u003EGeorgia Tech discovered that Watson\u2019s ability to retrieve natural language information would allow a novice to quickly \u201ctrain up\u201d about complex topics and better determine whether their idea or hypothesis is worth pursuing.\u003C\/p\u003E\u003Cp\u003EThe students call their technique \u201cGT-Watson Plus,\u201d a moniker that implies the system\u2019s advanced capabilities. In addition to the ability to \u201cchat\u201d on a topic, this version of Watson prompts users with alternate ways to ask questions for better results. Those results are packaged in an intuitive presentation \u2014 visualized as a \u201ctreetop\u201d where each answer is a \u201cleaf\u201d that varies in size based on its weighted importance.\u0026nbsp;This allows the average person to navigate results more easily on a given topic.\u003C\/p\u003E\u003Cp\u003E\u201cResearchers are provided a quickly digestible visual map of the concepts relevant to the query and the degree to which they are relevant,\u201d says Goel, who taught the course. \u201cWe were able to add more semantic and contextual meaning to Watson to give some notion of a conversation with the AI.\u201d\u003C\/p\u003E\u003Cp\u003EGoel plans to investigate other areas with Watson such as online learning and healthcare.\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003EThe work will be presented at the Association for the Advancement of Artificial Intelligence (AAAI) 2015 Fall Symposium on Cognitive Assistance in Government, Nov. 
12-14, in Arlington, Va.\u003C\/em\u003E\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers are exploring and pushing the boundaries of artificial intelligence (AI) by partnering with one of AI\u2019s most notable citizens \u2014 IBM\u2019s Watson \u2014 to advance how computers could help humans creatively solve problems in a wide variety of professions.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Institute of Technology researchers are exploring and pushing the boundaries of artificial intelligence (AI) by partnering with one of AI\u2019s most notable citizens \u2014 IBM\u2019s Watson \u2014 to advance how computers help humans creatively solve problems."}],"uid":"27592","created_gmt":"2015-11-12 11:12:35","changed_gmt":"2016-10-08 03:19:58","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-11-12T00:00:00-05:00","iso_date":"2015-11-12T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"469491":{"id":"469491","type":"image","title":"Watson Screenshot","body":null,"created":"1449257160","gmt_created":"2015-12-04 19:26:00","changed":"1475895218","gmt_changed":"2016-10-08 02:53:38","alt":"Watson Screenshot","file":{"fid":"203860","name":"screen_shot_of_watson.png","image_path":"\/sites\/default\/files\/images\/screen_shot_of_watson_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/screen_shot_of_watson_0.png","mime":"image\/png","size":615157,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/screen_shot_of_watson_0.png?itok=O-Z99d1_"}},"469221":{"id":"469221","type":"image","title":"Ashok Goel","body":null,"created":"1449257160","gmt_created":"2015-12-04 19:26:00","changed":"1475895218","gmt_changed":"2016-10-08 02:53:38","alt":"Ashok 
Goel","file":{"fid":"203850","name":"ashok_goel_teaching2_cr.jpg","image_path":"\/sites\/default\/files\/images\/ashok_goel_teaching2_cr_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ashok_goel_teaching2_cr_0.jpg","mime":"image\/jpeg","size":458686,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ashok_goel_teaching2_cr_0.jpg?itok=ksdVuXem"}},"469251":{"id":"469251","type":"image","title":"GT-Watson Plus Concept Results","body":null,"created":"1449257160","gmt_created":"2015-12-04 19:26:00","changed":"1475895218","gmt_changed":"2016-10-08 02:53:38","alt":"GT-Watson Plus Concept Results","file":{"fid":"203851","name":"watson_graphic_-_treemap_for_biology_concepts.png","image_path":"\/sites\/default\/files\/images\/watson_graphic_-_treemap_for_biology_concepts_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/watson_graphic_-_treemap_for_biology_concepts_0.png","mime":"image\/png","size":135654,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/watson_graphic_-_treemap_for_biology_concepts_0.png?itok=tRpjhkIr"}}},"media_ids":["469491","469221","469251"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[{"id":"2835","name":"ai"},{"id":"2556","name":"artificial intelligence"},{"id":"112431","name":"ashok goel"},{"id":"147691","name":"IBM Watson"},{"id":"12208","name":"watson"}],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003EResearch Communications 
Officer\u003Cbr \/\u003EGVU Center and College of Computing\u003Cbr \/\u003E 678.231.0787\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"456791":{"#nid":"456791","#data":{"type":"news","title":"Georgia Tech alumni win Gold at Bio-Engineering Olympics","body":[{"value":"\u003Cp class=\u0022p1\u0022\u003EBlacki Migliozzi, MS HCI 12, was recently part of a team that won a top prize at the 2015 International Genetically Engineered Machines Competition (\u003Ca href=\u0022http:\/\/2015.igem.org\/Giant_Jamboree\u0022 target=\u0022_blank\u0022\u003EiGEM\u003C\/a\u003E)\u0026nbsp;in Boston Mass., Sept. 24-28. His team Genspace brought home a gold medal for synthetic biology work on two devices and creating 11 new BioBrick\u0026nbsp;parts,\u0026nbsp;standardized DNA\u0026nbsp;building blocks used to design and assemble synthetic biological circuits. The team, which included fellow alumna\u0026nbsp;Christal\u0026nbsp;Gordon,\u0026nbsp;MS,\u0026nbsp;PhD\u0026nbsp;EE,\u0026nbsp;also won an award for best community lab\u0026nbsp;for the work centered around the\u0026nbsp;\u003Ca href=\u0022http:\/\/2015.igem.org\/Team:Genspace\u0022 target=\u0022_blank\u0022\u003EGowanus Canal\u003C\/a\u003E.\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003EOn his path to iGEM, Migliozzi studied\u0026nbsp;human-computer interaction at Georgia Tech\u0026nbsp;and was often found working with various research groups on campus to learn about\u0026nbsp;biology-related work. 
He admits his HCI graduate thesis was a bit unconventional, centered on bio-hobbyists growing mushrooms.\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003EResearchers who had some of the biggest impact on the alum were in the Digital Media program - advisor Carl DiSalvo and Andrew Quitmeyer among them - and they encouraged him to\u0026nbsp;explore his research connecting biology and technology.\u0026nbsp;Migliozzi\u0026nbsp;managed to start a DIY bio-lab in a corner of the Technology Square Research Building - not normally a building for fauna\u0026nbsp;and petri dishes - and for a short time he commandeered the Digital Media program\u2019s refrigerator\u0026nbsp;as a research\u0026nbsp;compost bin.\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u201cI was very lucky to work with and learn from several groups around campus, namely\u0026nbsp;\u003Ca href=\u0022http:\/\/www.arkfab.org\/\u0022 target=\u0022_blank\u0022\u003EArkFab\u003C\/a\u003E, the\u0026nbsp;\u003Ca href=\u0022http:\/\/www.astrobiology.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EAstrobiology group\u003C\/a\u003E\u0026nbsp;and within Tucker Balch\u0027s\u0026nbsp;\u003Ca href=\u0022http:\/\/www.bio-tracking.org\/\u0022 target=\u0022_blank\u0022\u003EBio-Tracking project\u003C\/a\u003E,\u201d says Migliozzi, now a\u0026nbsp;data visualization developer for Bloomberg News in NYC. \u201cThose years were formative for me in my continued love of\u0026nbsp;biology.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003EHe says that the iGEM award is one of the biggest accomplishments of his life and gives credit to his experience at Georgia Tech.\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u201cI hope both the HCI and Digital Media programs continue to be as interdisciplinary as possible,\u201d he says. 
\u201cI encourage students to seek out research across campus and I hope other\u0026nbsp;departments invite these students in with open arms.\u201d\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u201cStudents like Blacki are wonderful \u2014 they challenge us to grow and learn. Blacki made an amazing contribution to the culture of the Public Design Workshop and I could not be more delighted by his successes,\u201d says his former advisor Carl\u0026nbsp;DiSalvo.\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EAmong the\u0026nbsp;280\u0026nbsp;teams and 2,700 participants at\u0026nbsp;iGEM\u0026nbsp;2015, Georgia Tech also had a team, which earned a bronze medal for its\u0026nbsp;\u003Ca href=\u0022http:\/\/2015.igem.org\/Team:GeorgiaTech\u0022 target=\u0022_blank\u0022\u003Eproject\u003C\/a\u003E.\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EBlacki Migliozzi, MS HCI 12, and Christal Gordon, MS, PhD EE,\u0026nbsp;were recently part of a team that won a top prize at the 2015 International Genetically Engineered Machines Competition (\u003Ca href=\u0022http:\/\/2015.igem.org\/Giant_Jamboree\u0022 target=\u0022_blank\u0022\u003EiGEM\u003C\/a\u003E)\u0026nbsp;in Boston, Mass., Sept. 24-28.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Blacki Migliozzi, MS HCI 12, and Christal Gordon, MS, PhD EE, were recently part of a team that won the top prize at the 2015 International Genetically Engineered Machines Competition (iGEM) in Boston, Mass., Sept. 
24-28."}],"uid":"27592","created_gmt":"2015-10-07 11:55:48","changed_gmt":"2016-10-08 03:19:43","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-10-07T00:00:00-04:00","iso_date":"2015-10-07T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"456771":{"id":"456771","type":"image","title":"Blacki Migliozzi","body":null,"created":"1449256334","gmt_created":"2015-12-04 19:12:14","changed":"1475895202","gmt_changed":"2016-10-08 02:53:22","alt":"Blacki Migliozzi","file":{"fid":"203494","name":"igem_-_blacki_migliozzi_ms_hci_12_single.jpg","image_path":"\/sites\/default\/files\/images\/igem_-_blacki_migliozzi_ms_hci_12_single_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/igem_-_blacki_migliozzi_ms_hci_12_single_0.jpg","mime":"image\/jpeg","size":62989,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/igem_-_blacki_migliozzi_ms_hci_12_single_0.jpg?itok=Or0B52Y8"}},"456781":{"id":"456781","type":"image","title":"Christal Gordon","body":null,"created":"1449256334","gmt_created":"2015-12-04 19:12:14","changed":"1475895202","gmt_changed":"2016-10-08 02:53:22","alt":"Christal Gordon","file":{"fid":"203495","name":"igem_-_christal_gordon_ms_phd_ee_single.jpg","image_path":"\/sites\/default\/files\/images\/igem_-_christal_gordon_ms_phd_ee_single_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/igem_-_christal_gordon_ms_phd_ee_single_0.jpg","mime":"image\/jpeg","size":75980,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/igem_-_christal_gordon_ms_phd_ee_single_0.jpg?itok=ym6XC7Xu"}}},"media_ids":["456771","456781"],"groups":[{"id":"1299","name":"GVU 
Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":["gvu@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"454461":{"#nid":"454461","#data":{"type":"news","title":"Bad Design Atlanta Contest sponsored by Georgia Tech student group","body":[{"value":"\u003Cp\u003EHave you ever seen something around the city or on campus that is poorly designed to the point of absurdity? Have you dealt with impossible public transit signs or maybe your favorite coffee bistro has poorly designed chairs? Got an idea to improve things?\u0026nbsp;The Georgia Tech chapter of the Human Factors and Ergonomics Society, sponsor of Bad Design Atlanta, is seeking submissions that\u0026nbsp;bring some attention to designs that need a little rethinking.\u0026nbsp;Submit an entry for your chance to win a cash prize (Deadline: Nov. 6)\u003C\/p\u003E\u003Cp\u003EContest is open to any student or group of students at Georgia Tech. 
Examples of inspired bad design are at: \u003Ca href=\u0022http:\/\/www.baddesigns.com\/examples.html\u0022 title=\u0022http:\/\/www.baddesigns.com\/examples.html\u0022\u003Ehttp:\/\/www.baddesigns.com\/examples.html\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u003Cbr \/\u003E\u003Cstrong\u003EBAD DESIGN ATLANTA CONTEST PRIZES AND RULES:\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E1st place: $75; 2nd place: $50; 3rd place: $25\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ESubmissions should include:\u003C\/strong\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E- Title of submission\u003Cbr \/\u003E- Name, email, area of study\u003Cbr \/\u003E- Problem description and proposed solution (no more than one page each)\u003Cbr \/\u003E- PDF or Word document (2-page and 500-word maximum); any pictures should fit on the 2-page document\u003C\/p\u003E\u003Cp\u003E- One submission per person\/group\u003Cbr \/\u003E- Submissions are due on Friday, Nov. 6, 2015 at 5pm ET\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003EEmail submissions to: hfes@gatech.edu\u003C\/em\u003E\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe Bad Design Atlanta Contest, sponsored by the\u0026nbsp;Georgia Tech chapter of the Human Factors and Ergonomics Society,\u0026nbsp;is a chance to bring some attention to designs that need a little rethinking. Submit an entry for a chance to win a cash prize (Deadline: Nov. 6)\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"The Bad Design Atlanta Contest, sponsored by the Georgia Tech chapter of the Human Factors and Ergonomics Society, is a chance to bring some attention to designs that need a little rethinking. The top three entries receive cash prizes. (Deadline: Nov. 
6)"}],"uid":"27592","created_gmt":"2015-10-01 13:18:26","changed_gmt":"2016-10-08 03:19:40","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-10-01T00:00:00-04:00","iso_date":"2015-10-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:darwei.chen@gatech.edu\u0022\u003EDar-Wei Chen\u003C\/a\u003E \u003Cbr \/\u003E \u003Ca href=\u0022http:\/\/hfes.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech chapter of the Human Factors and Ergonomics Society\u003C\/a\u003E\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"452221":{"#nid":"452221","#data":{"type":"news","title":"\u2018On You\u2019 Wearable Computing Exhibit draws over 30,000 attendees, closes with alumni receptions","body":[{"value":"\u003Cp\u003EFrom the ancient abacus to supercomputers, the Computer History Museum in Mountain View, Calif., offers an array of\u0026nbsp;rare artifacts and milestones from 2,000 years of \u201ccomputing\u201d history. This summer at a special exhibition, a new breed of computer drew more than 30,000 visitors and showed how computing has become synonymous with daily life.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EGeorgia Tech\u2019s \u201cOn You: A Story of Wearable Computing\u201d exhibit curated more than 60 gadgets chronicling the history of making on-body technology a reality. 
The exhibit showed devices that have been envisioned for consumers and professionals and by \u201cmakers.\u201d It showed four major challenges to a consumer wearable computer - power and heat, networking, mobile input, and displays - and the product categories that have resulted.\u003C\/p\u003E\u003Cp\u003EMore than 100 alumni, family members and students gathered for the exhibit\u2019s closing reception\u0026nbsp;on Sept. 19, many from the College of Computing, including OMSCS students in the Bay Area.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/gvu.gatech.edu\/you-wearable-computing-exhibit-alumni-receptions\u0022\u003ERead More\u003C\/a\u003E\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech\u2019s \u201cOn You: A Story of Wearable Computing\u201d exhibit at the Computer History Museum curated more than 60 gadgets chronicling the history of making on-body technology a reality.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EMore than 100 alumni, family members and students gathered for the exhibit\u2019s closing reception\u0026nbsp;on Sept. 
19, many from the College of Computing, including OMSCS students in the Bay Area.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech\u2019s \u201cOn You: A Story of Wearable Computing\u201d exhibit at the Computer History Museum curated more than 60 gadgets chronicling the history of making on-body technology a reality."}],"uid":"27592","created_gmt":"2015-09-25 10:02:28","changed_gmt":"2016-10-08 03:19:36","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-09-23T00:00:00-04:00","iso_date":"2015-09-23T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"416531":{"id":"416531","type":"image","title":"Thad Starner","body":null,"created":"1449254258","gmt_created":"2015-12-04 18:37:38","changed":"1475895155","gmt_changed":"2016-10-08 02:52:35","alt":"Thad Starner","file":{"fid":"202549","name":"thad_starner_2.jpg","image_path":"\/sites\/default\/files\/images\/thad_starner_2_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/thad_starner_2_0.jpg","mime":"image\/jpeg","size":120584,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/thad_starner_2_0.jpg?itok=CYln5AeS"}}},"media_ids":["416531"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[{"id":"9873","name":"clint zeagler"},{"id":"132111","name":"Computer History Museum"},{"id":"1944","name":"Thad Starner"},{"id":"10353","name":"wearable computing"},{"id":"115211","name":"wearable tech"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E GVU Center, College of 
Computing\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"433811":{"#nid":"433811","#data":{"type":"news","title":"GT computing education experts present work on learning methods, broadening diversity and more","body":[{"value":"\u003Cp class=\u0022p1\u0022\u003EThe ACM International Computing Education Research Conference, ICER 2015, and the first IEEE Broadening Participation in Computing Research Conference, RESPECT 2015, take place this week and include new research by Georgia Tech faculty and graduate students from three colleges, including computing, architecture, and liberal arts.\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EICER \u2013 dedicated to the study of how people understand computational processes and devices \u2013 takes place in Omaha, Nebr., Aug. 9-13. RESPECT \u2013 focused on improving diversity in the computer science education community \u2013 follows immediately, Aug. 13-14, in Charlotte, N.C.\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EMark Guzdial, professor of Interactive Computing, is co-chair of the Doctoral Consortium at ICER 2015, which set a record for most participants in a computing education doctoral consortium anywhere in the world with 20 Ph.D. students, including Georgia Tech\u2019s Barbara Ericson, Briana Morrison, and Miranda Parker. 
Students also represented countries such as Chile, Germany and the United Kingdom.\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Ch3 class=\u0022p3\u0022\u003EPresenting at\u0026nbsp;\u003Ca href=\u0022http:\/\/icer.hosting.acm.org\/\u0022 target=\u0022_blank\u0022\u003EICER\u003C\/a\u003E:\u0026nbsp;\u003C\/h3\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cem\u003EPapers:\u003C\/em\u003E\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cstrong\u003ESubgoals, Context, and Worked Examples in Learning Computing Problem Solving\u0026nbsp;\u003C\/strong\u003E(1 of 2 Best Papers)\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EBriana Morrison (Georgia Tech), Lauren Margulieux (Georgia Tech) and Mark Guzdial (Georgia Tech)\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u003Cstrong\u003EAnalysis of Interactive Features Designed to Enhance Learning in an Ebook\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EBarbara Ericson (Georgia Tech), Mark Guzdial (Georgia Tech) and Briana Morrison (Georgia Tech)\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cem\u003ELightning Talks and Posters:\u003C\/em\u003E\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cstrong\u003EThe MoveLab: Supporting Diversity through Self-Conceptions\u003C\/strong\u003E\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003EKayla DesPortes\u003Cem\u003E\u0026nbsp;(Georgia Tech)\u003C\/em\u003E\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cem\u003EDoctoral Consortium:\u0026nbsp;\u003C\/em\u003E\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cstrong\u003EAdaptive Parsons Problems with Discourse Rules\u003C\/strong\u003E\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003EBarbara Ericson, Ph.D. 
HCC student; Director of Computing Outreach, ICE\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u003Cstrong\u003EComputer Science Is Different!\u003C\/strong\u003E\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EBriana B. Morrison, Ph.D. HCC student\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cstrong\u003EPrivilege and Computer Science Education: How Can we Level the Playing Field?\u003C\/strong\u003E\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EMiranda Parker, Ph.D. HCC student\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Ch3 class=\u0022p3\u0022\u003EPresenting at\u0026nbsp;\u003Ca href=\u0022http:\/\/respect2015.stcbp.org\/\u0022 target=\u0022_blank\u0022\u003ERESPECT\u003C\/a\u003E:\u0026nbsp;\u003C\/h3\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cem\u003EPapers:\u0026nbsp;\u003C\/em\u003E\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cstrong\u003EHelping African American Students Pass Advanced Placement Computer Science: A Tale of Two States\u0026nbsp;\u003C\/strong\u003E(1 of 4 \u0022Exemplary\u0022 papers)\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003EBarbara Ericson and Tom McKlin\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u003Cstrong\u003EA critical research synthesis of privilege in computing education\u0026nbsp;\u003C\/strong\u003E\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003EMiranda Parker and Mark\u0026nbsp;Guzdial (short paper)\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cem\u003EFireside Chat:\u0026nbsp;\u003C\/em\u003E\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u003Cstrong\u003EBroadening Participation in Computing\u0026nbsp;\u003C\/strong\u003E\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003EMark Guzdial\u003C\/p\u003E\u003Cp 
class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cem\u003ELightning Talk:\u003C\/em\u003E\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u003Cstrong\u003EEarSketch: a STEAM\u0026nbsp;approach to broadening participation in Computer Science Principles\u003C\/strong\u003E\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003EJason Freeman, Brian Magerko, Doug Edwards, Roxanne Moore, Tom McKlin and Anna Xamb\u00f3.\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u003Cstrong\u003EExploring African-American Middle School Girls\u2019 Perceptions of Themselves as Computational\u0026nbsp;Algorithmic Thinkers and Game Designers Through Reality Confessionals\u003C\/strong\u003E\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EJakita Thomas, PhD CS 2006\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u003Cstrong\u003EIt\u2019s All In The Mix: Leveraging food to increase African-American women\u2019s\u0026nbsp;persistence in Computer Science\u003C\/strong\u003E\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EJakita Thomas and Yolanda Rankin\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EFor more details about research from the Contextualized Support for Learning group:\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u003Ca href=\u0022https:\/\/computinged.wordpress.com\/2015\/08\/07\/icer-2015-preview-subgoal-labeling-works-for-text-too\/\u0022 target=\u0022_blank\u0022\u003EICER research\u003C\/a\u003E\u003C\/p\u003E\u003Cp class=\u0022p2\u0022\u003E\u003Ca href=\u0022https:\/\/computinged.wordpress.com\/2015\/08\/10\/respect-2015-preview-the-role-of-privilege-in-cs-education\/\u0022 target=\u0022_blank\u0022\u003ERESPECT 
research\u003C\/a\u003E\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe ACM International Computing Education Research Conference, ICER 2015, and the first IEEE Broadening Participation in Computing Research Conference, RESPECT 2015, take place this week and include new research by Georgia Tech faculty and graduate students from three colleges, including computing, architecture, and liberal arts.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"ACM ICER 2015 and the IEEE RESPECT 2015 conferences take place this week and include new research by Georgia Tech faculty and graduate students in computing education."}],"uid":"27592","created_gmt":"2015-08-12 13:29:02","changed_gmt":"2016-10-08 03:19:22","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-08-12T00:00:00-04:00","iso_date":"2015-08-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"433821":{"id":"433821","type":"image","title":"Georgia Tech @ ICER 2015 alumni, faculty and students","body":null,"created":"1449256148","gmt_created":"2015-12-04 19:09:08","changed":"1475895171","gmt_changed":"2016-10-08 02:52:51"}},"media_ids":["433821"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJoshua Preston\u003C\/p\u003E\u003Cp\u003EGVU Center, College of Computing\u003C\/p\u003E\u003Cp\u003E678.231.0787\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"418041":{"#nid":"418041","#data":{"type":"news","title":"Georgia Tech Researchers Train Computer to Create Games by Watching 
YouTube","body":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers have developed a computing system that views gameplay video from streaming services like YouTube or Twitch, analyzes the footage and then is able to create original new sections of a game.\u003C\/p\u003E\u003Cp\u003EThe team tested their discovery, the first of its kind, with the original Super Mario Brothers, a well-known two-dimensional platformer game that will allow the new automatic-level designer to replicate results across similar games.\u003C\/p\u003E\u003Cp\u003EThe system focuses on the gaming terrain (not the playable character) and the positioning between elements on-screen \u2013 be it pipes, blocks, coins or Goombas \u2013 and it determines the required relationship or level design rule. For example, pipes in the Mario games tend to stick out of the ground, so the system learns this and prevents any pipes from being flush with grassy surfaces. It also prevents \u201cbreaks\u201d by using spatial analysis \u2013 e.g. no impossibly long jumps for the hero.\u003C\/p\u003E\u003Cp\u003E\u201cAn initial evaluation of our approach indicates an ability to produce level sections that are both playable and close to the original without hand coding any design criteria,\u201d says Matthew Guzdial, lead author and Ph.D. student in Computer Science at Georgia Tech.\u003C\/p\u003E\u003Cp\u003EKey to the process is watching the players in action to see where they actually spend most of their time in the game. After recording on-screen locations of sprites, Georgia Tech\u2019s algorithms determine what are high-interaction areas \u2013 those spots where players spend more time to collect bonus items or master a challenge. The automatic-level designer specifically targets these areas to gain design information. 
The system is then able to build a new level section, element by element.\u003C\/p\u003E\u003Cp\u003E\u201cOur system creates a model or template, and it\u2019s able to produce level sections that have never been seen before, do not appear random and can be traversed by the player,\u201d says Mark Riedl, the study\u0027s principal investigator and associate professor of Interactive Computing. \u201cOne could say that the system \u2018studies\u2019 the design of Mario levels until it is able to create new playable areas.\u201d\u003C\/p\u003E\u003Cp\u003EThe Georgia Tech system output 151 distinct level sections from 17 samples in the original game, controlling for overall playability and style variables. Output increased to 334 level sections as the system lessened the constraints. The new levels can be played easily by porting them into a game engine.\u003C\/p\u003E\u003Cp\u003ERiedl says this is the first time he is aware of a gameplay video being used to design levels for a Mario game. By applying the technique across a number of different platformer games, a system can theoretically learn genre knowledge, which can be beneficial for procedurally creating games of a given genre. The technique may also extend to other game genres beyond platformers. 
The researchers next plan to develop full-scale levels and evaluate how gamers interact in those levels compared to the original gameplay videos.\u003C\/p\u003E\u003Cp\u003EThe research, \u201cToward Game Level Generation from Gameplay Videos,\u201d is featured June 22-25 at the Foundations of Digital Games Conference in Pacific Grove, Calif.\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers have developed a computing system that views gameplay video from streaming services like YouTube or Twitch, analyzes the footage and then is able to create original new sections of a game.\u003C\/p\u003E\u003Cp\u003EThe team tested their discovery, the first of its kind, with the original Super Mario Brothers, a well-known two-dimensional platformer game that will allow the new automatic-level designer to replicate results across similar games.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Institute of Technology researchers have developed a computing system that views gameplay video from streaming services like YouTube or Twitch, analyzes the footage and then is able to create original new sections of a game."}],"uid":"27592","created_gmt":"2015-06-24 11:41:13","changed_gmt":"2016-10-08 03:18:45","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-06-24T00:00:00-04:00","iso_date":"2015-06-24T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"418061":{"id":"418061","type":"image","title":"Automatic game level generator","body":null,"created":"1449254269","gmt_created":"2015-12-04 18:37:49","changed":"1475895155","gmt_changed":"2016-10-08 02:52:35","alt":"Automatic game level 
generator","file":{"fid":"202584","name":"generated_levels_square_set.png","image_path":"\/sites\/default\/files\/images\/generated_levels_square_set_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/generated_levels_square_set_0.png","mime":"image\/png","size":354991,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/generated_levels_square_set_0.png?itok=fAGyoF-z"}},"418101":{"id":"418101","type":"image","title":"Automatic Game Level Generator","body":null,"created":"1449254269","gmt_created":"2015-12-04 18:37:49","changed":"1475895155","gmt_changed":"2016-10-08 02:52:35","alt":"Automatic Game Level Generator","file":{"fid":"202585","name":"overworldgif.gif","image_path":"\/sites\/default\/files\/images\/overworldgif_0.gif","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/overworldgif_0.gif","mime":"image\/gif","size":6856320,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/overworldgif_0.gif?itok=rASeQdPm"}},"50384":{"id":"50384","type":"image","title":"Mark Riedl","body":null,"created":"1449175392","gmt_created":"2015-12-03 20:43:12","changed":"1475894458","gmt_changed":"2016-10-08 02:40:58","alt":"Mark Riedl","file":{"fid":"128682","name":"mark-riedl.jpg","image_path":"\/sites\/default\/files\/images\/mark-riedl_1.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/mark-riedl_1.jpg","mime":"image\/jpeg","size":12265,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/mark-riedl_1.jpg?itok=NlCFZ53t"}}},"media_ids":["418061","418101","50384"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"},{"id":"71901","name":"Society and 
Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E \u003Cbr \/\u003EGVU Center, College of Computing \u003Cbr \/\u003E678.231.0787\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"396211":{"#nid":"396211","#data":{"type":"news","title":"Research finds adolescents\u2019 time online doubles, hyperlocal social media emerges","body":[{"value":"\u003Cp\u003EA \u003Ca href=\u0022http:\/\/www.chi.gatech.edu\/2015\/young-people-online\/\u0022 target=\u0022_blank\u0022\u003Efour-year study\u003C\/a\u003E of adolescents\u2019 use of technology shows that the average amount of time spent online daily by 10- to 14-year-olds jumped from 3.5 hours to more than eight during the study period of 2010-2013. Georgia Institute of Technology researchers say adolescents\u2019 identities are being shaped through continuous online social activities \u2013 a phenomenon arising from the growth of mobile devices. The research also reveals that adolescents no longer distinguish between time online and offline, as well as how they deal with social pressure, identity, privacy and risky behavior online.\u003C\/p\u003E\u003Cp\u003EThe study, one of the first of its kind to focus on low-income, middle school-aged students from a concentrated geographical area, sought to better understand motivations and behaviors for online social practices among them. Results came from survey responses and focus groups with 179 participants in three middle schools with high minority populations. 
Demographic representation was approximately 65 percent African American, 18 percent Asian, 9 percent Caucasian, and 8 percent Hispanic.\u003C\/p\u003E\u003Cp\u003ESocial media use showed high levels of experimentation and rapid adoption of certain platforms in specific social contexts. During the four-year period, children\u2019s social media habits became more opaque and nuanced through apps that allow private, anonymous sharing. The\u0026nbsp;participants adopted new Facebook strategies when dealing with different social circles; some posted less for family to view or made second accounts only for friends. In general, video-based communication saw a significant rise in 2012 with 61 percent of participants using Oovoo, a video chat and instant messaging platform. Only Facebook and YouTube outpaced its use.\u003C\/p\u003E\u003Cp\u003EHarmful and risky behaviors, such as eating disorders and sexting, came up in the focus groups. One of the most alarming behaviors, according to researchers, was the use of websites or communities that promoted restrictive eating habits.\u003C\/p\u003E\u003Cp\u003E\u201cWith the rise of new social platforms that bring new capabilities, such as Snapchat and hyperlocal platforms, the potential for negative exploitation is real and already being observed within this population,\u201d says Jessica Pater, lead researcher and Ph.D. Student in Human-Centered Computing.\u003C\/p\u003E\u003Cp\u003EIn a cyber bullying incident, multiple platforms including Kik (for mobile instant messaging) and Keek (video-based social networking) were used to organize discussion around and single out a bully, who was using fake profiles to harass a classmate. 
Kids didn\u2019t think of the online social tension as cyber bullying, but as rude behavior that is simply part of life.\u003C\/p\u003E\u003Cp\u003E\u201cThe social app use we found in this population exemplifies how platforms can become truly engrained in the fabric of technology use within a group of users in a short period of time,\u201d says Pater.\u003C\/p\u003E\u003Cp\u003EResearchers believe their approach can be replicated for understanding large-scale trends in social media in other populations, and that it could be critical for identifying opportunities for design and research on the platforms.\u003C\/p\u003E\u003Cp\u003EThe paper, \u201cThis Digital Life: A Neighborhood-Based Study of Adolescents\u2019 Lives Online,\u201d will be presented at the ACM Conference on Human Factors in Computing Systems (CHI 2015) in Seoul, Korea, April 18-23.\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA\u0026nbsp;\u003Ca href=\u0022http:\/\/www.chi.gatech.edu\/2015\/young-people-online\/\u0022 target=\u0022_blank\u0022\u003Efour-year study\u003C\/a\u003E\u0026nbsp;of adolescents\u2019 use of technology shows that the average amount of time spent online daily by 10- to 14-year-olds jumped from 3.5 hours to more than eight during the study period of 2010-2013. Georgia Tech researchers say adolescents\u2019 identities are being shaped through continuous online social activities \u2013 a phenomenon arising from the growth of mobile devices. 
The research also reveals that adolescents no longer distinguish between time online and offline, as well as how they deal with social pressure, identity, privacy and risky behavior online.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A four-year study of adolescents\u2019 use of technology shows that the average amount of time spent online daily by 10- to 14-year-olds jumped from 3.5 hours to more than eight during the study period of 2010-2013."}],"uid":"27592","created_gmt":"2015-04-14 17:49:08","changed_gmt":"2016-10-08 03:17:58","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-04-14T00:00:00-04:00","iso_date":"2015-04-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"396221":{"id":"396221","type":"image","title":"http:\/\/chi.gatech.edu\/2015\/young-people-online","body":null,"created":"1449246361","gmt_created":"2015-12-04 16:26:01","changed":"1475895112","gmt_changed":"2016-10-08 02:51:52","alt":"http:\/\/chi.gatech.edu\/2015\/young-people-online","file":{"fid":"76011","name":"young_people_online_viz_large_thumbnail.jpg","image_path":"\/sites\/default\/files\/images\/young_people_online_viz_large_thumbnail.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/young_people_online_viz_large_thumbnail.jpg","mime":"image\/jpeg","size":340779,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/young_people_online_viz_large_thumbnail.jpg?itok=mvIPgudN"}}},"media_ids":["396221"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca 
href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E678.231.0787\u003C\/p\u003E\u003Cp\u003EGVU Center\u003Cbr \/\u003E College of Computing\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"396701":{"#nid":"396701","#data":{"type":"news","title":"Research identifies barriers to tracking meals and what foodies want","body":[{"value":"\u003Cp\u003EEating healthy is sometimes a challenge on its own, so technology should ease that burden \u2013 not increase it \u2013 according to new research from the Georgia Institute of Technology and University of Washington. Researchers studied how mobile-based food journals integrate into everyday life and specific challenges when using food journaling technology. Their research suggests how future designs might make it easier and more effective.\u003C\/p\u003E\u003Cp\u003EThe research study uncovered three problem areas: barriers to reliable food entry, negative nudges in current food journal apps and challenges in social features. The findings resulted from data collected in a survey of 141 current and former food loggers as well as analysis of 5,526 public posts on the community forums of mobile-based MyFitnessPal, FatSecret and CalorieCount.\u003C\/p\u003E\u003Cp\u003E\u201cCommunity contributions to the databases allow journalers to publish nutritional entries themselves and create a diverse food base from which to pick, but it also raises concerns about reliability,\u201d says Edison Thomaz, a researcher on the study and Ph.D. candidate in Human-Centered Computing at Georgia Tech.\u003C\/p\u003E\u003Cp\u003ESome users said logging meals took too much effort and was time consuming. They sometimes loosely followed recipes or only ate partial portion sizes, making it difficult to log meals. 
Another issue was that food databases contained inaccuracies, common foods that were missing, or had multiple listings for a single food because of user-generated listings.\u003C\/p\u003E\u003Cp\u003EResearchers found that not all foods are created equal when it comes to logging them. On a seven-point Likert scale, packaged foods and fast food were a breeze to log (6.5 and 6.3 mean scores), while counting up finger foods at a friend\u2019s house or party took dedication (3.2 and 2.9 mean scores).\u003C\/p\u003E\u003Cp\u003EThis made the mobile journals themselves less effective, with some participants straying from their goals or eating the same thing every day to ease the logging ritual. As one respondent put it, it was easier to \u201cscan a code on some processed stuff and be done with it.\u201d\u003C\/p\u003E\u003Cp\u003EParticipants also wanted to develop social connections around food goals. Encouragement of goal attainment and mutual support helped strengthen journaling habits. Conversely, when people received no comments, had online friends stop journaling, or had comparatively less progress than others, it negatively impacted their food-tracking goals.\u003C\/p\u003E\u003Cp\u003EThe findings led to several recommendations, including one for designing goal-specific systems.\u003C\/p\u003E\u003Cp\u003E\u201cFood journals are an important method for tracking food consumption and can support a variety of goals, including weight loss, healthier food choices, detecting deficiencies, identifying allergies and determining foods that trigger other symptoms,\u201d says James Fogarty, a researcher on the study and associate professor of Computer Science \u0026amp; Engineering at the University of Washington.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u201cInstead of attempting to capture the elusive \u2018everything,\u2019 the results suggest creating a diversity of journal designs to support specific goals,\u201d says Fogarty.\u003C\/p\u003E\u003Cp\u003EReputation 
systems were suggested to allow users to filter for specific needs (e.g. tracking sodium intake) or vote on accuracy of entries. Also a priority: streamlining databases with similar foods and providing context for food entry, such as indicating restaurant items or vegan meals.\u003C\/p\u003E\u003Cp\u003EThe results have also led to separate research by team members to implement new journaling solutions. Georgia Tech researchers are testing the feasibility of using a mobile device\u2019s built-in microphone to capture ambient sounds related to eating that, when recognized by the mobile device, nudge users to log their food. Washington researchers are using photo-based journaling to augment or replace methods focused on detailed nutritional input in an attempt to remove or reduce barriers to journaling.\u003C\/p\u003E\u003Cp\u003EThe research paper \u201cBarriers and Negative Nudges: Exploring Challenges in Food Journaling\u201d will be presented at the ACM Conference on Human Factors in Computing Systems (CHI 2015) in Seoul, South Korea, April 18-23. The work is funded in part by the Intel Science and Technology Center for Pervasive Computing, the National Science Foundation (Awards OAI-1028195 and SCH-1344613) and the National Institutes of Health (Award 1U54EB020404-01). \u003Cem\u003EAny conclusions or opinions are those of the authors and do not necessarily represent the official views of the sponsoring agencies.\u003C\/em\u003E\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EEating healthy is sometimes a challenge on its own, so technology should ease that burden \u2013 not increase it \u2013 according to new research from the Georgia Institute of Technology and University of Washington. Researchers studied how mobile-based food journals integrate into everyday life and specific challenges when using food journaling technology. 
Their research suggests how future designs might make it easier and more effective.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Eating healthy is sometimes a challenge on its own, so technology should ease that burden \u2013 not increase it \u2013 according to new research from the Georgia Institute of Technology and University of Washington."}],"uid":"27592","created_gmt":"2015-04-15 14:13:28","changed_gmt":"2016-10-08 03:17:58","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-04-16T00:00:00-04:00","iso_date":"2015-04-16T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"396721":{"id":"396721","type":"image","title":"Edison Thomaz","body":null,"created":"1449246361","gmt_created":"2015-12-04 16:26:01","changed":"1475895112","gmt_changed":"2016-10-08 02:51:52","alt":"Edison Thomaz","file":{"fid":"75681","name":"edison_thomaz_chi2015.jpg","image_path":"\/sites\/default\/files\/images\/edison_thomaz_chi2015.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/edison_thomaz_chi2015.jpg","mime":"image\/jpeg","size":86652,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/edison_thomaz_chi2015.jpg?itok=3pkYRNw9"}},"396711":{"id":"396711","type":"image","title":"Gregory Abowd","body":null,"created":"1449246361","gmt_created":"2015-12-04 16:26:01","changed":"1475895112","gmt_changed":"2016-10-08 02:51:52","alt":"Gregory 
Abowd","file":{"fid":"75680","name":"grregory_abowd_chi2015.jpg","image_path":"\/sites\/default\/files\/images\/grregory_abowd_chi2015.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/grregory_abowd_chi2015.jpg","mime":"image\/jpeg","size":69190,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/grregory_abowd_chi2015.jpg?itok=mTrf27QV"}}},"media_ids":["396721","396711"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"124021","name":"counting calories"},{"id":"123991","name":"food journaling"},{"id":"123981","name":"food logging"},{"id":"124011","name":"quantified self"},{"id":"124001","name":"quants"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E678-231-0787\u003C\/p\u003E\u003Cp\u003EGVU Center\u003Cbr \/\u003ECollege of Computing\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"365591":{"#nid":"365591","#data":{"type":"news","title":"People-Focused Computing Research in 2014 Took Users in New Directions","body":[{"value":"\u003Cp\u003EComputing technology research takes on many forms in the GVU Center, whether it\u0027s deciphering the social media stratosphere, putting Atlanta\u0027s wider public transit information at your fingertips, reimagining digital storytelling, improving sustainable urban farms, or a score of other high-concept applications and prototypes that are advancing how technology impacts our lives.\u003C\/p\u003E\u003Cp\u003EIn 2014, our researchers broke new 
ground on how to get the most out of technology interactions. This snapshot of our community of researchers shows a small sample of computing possibilities becoming reality through the collaborative and dynamic environments at Georgia Tech and the GVU Center.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EKickstarter phrases that pay (and don\u0027t)\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EResearchers at Georgia Tech studying the burgeoning phenomenon of crowdfunding have learned that the language used in online fundraising holds surprisingly predictive power about the success of such campaigns.\u0026nbsp;As part of their study of more than 45,000 projects on Kickstarter, Assistant Professor\u0026nbsp;\u003Cstrong\u003EEric Gilbert\u003C\/strong\u003E\u0026nbsp;and Computer Science doctoral candidate\u0026nbsp;\u003Cstrong\u003ETanushree Mitra\u003C\/strong\u003E\u0026nbsp;reveal dozens of phrases that pay and a few dozen more that may signal the likely failure of a crowd-sourced effort.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.gvu.gatech.edu\/news\/georgia-tech-researchers-reveal-phrases-pay-kickstarter\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ERead More\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.cc.gatech.edu\/~gte115v\/wip0483-fieslerSC.pdf\u0022 target=\u0022_blank\u0022\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EDo you read terms of service? Maybe you should.\u0026nbsp;\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EA key usability problem for websites is the complexity of their terms and conditions. 
Within the HCI community, attention to this issue to date has primarily focused on privacy policies. Human-Centered Computing\u0026nbsp;doctoral candidate\u0026nbsp;\u003Cstrong\u003ECasey Fiesler\u003C\/strong\u003E\u0026nbsp;and Professor\u0026nbsp;\u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E\u0026nbsp;begin to build on this work, extending it to copyright terms. With so many people posting everything from status updates to digital art online, intellectual property rights are increasingly important to the end user. The researchers conducted a content analysis of 30 different websites where users can share creative work, focusing on the licenses and usage rights that users grant to those websites.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.gvu.gatech.edu\/news\/do-you-read-terms-service-maybe-you-should-0\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ERead More\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EIntegrating real-time information for\u0026nbsp;metro Atlanta public transit\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EThe mobile app\u0026nbsp;\u003Ca href=\u0022http:\/\/atlanta.onebusaway.org\/\u0022 target=\u0022_blank\u0022\u003EOneBusAway\u003C\/a\u003E, which tracks public transportation in real time,\u0026nbsp;added\u0026nbsp;arrival times for MARTA trains in 2014 in addition to the MARTA buses and Georgia Tech shuttles already featured in the app. The app\u0026nbsp;also added the\u0026nbsp;new Atlanta Streetcar route (which opened\u0026nbsp;at the end of 2014),\u0026nbsp;continuing to grow its network of\u0026nbsp;real-time\u0026nbsp;transit information.\u0026nbsp;OneBusAway is being integrated into Atlanta\u2019s transit network by Georgia Tech researchers, led by Assistant Professor\u0026nbsp;\u003Cstrong\u003EKari Watkins\u003C\/strong\u003E. 
The app\u2019s developers\u0026nbsp;plan to add bus data for Georgia Regional Transportation Authority (GRTA) Xpress, Cobb Community Transit (CCT), Gwinnett County Transit, the Atlantic Station shuttle, other local university systems, and other systems equipped with GPS tracking. The research\u0026nbsp;has a growing national footprint with the app being used in other major spots such as\u0026nbsp;New York, the Seattle area, Tampa, and elsewhere.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2014\/03\/05\/onebusaway-app-now-tracks-marta-trains-real-time\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ERead More\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=0Onob10BwgA\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EVideo\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/playlist?list=PLB7jAXT4DsfaX-d25uuj6N2CxnUJAdIxW\u0022 target=\u0022_blank\u0022\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003ECHI 2014 - One of a CHInd\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EGeorgia Tech researchers delivered an incredible lineup of work in human-computer interaction at CHI 2014 showing the growing complexities in technology use and emerging needs of end users. Researchers talk about their work and the contributions Georgia Tech - a Top 10 institution with accepted research at CHI -\u0026nbsp;is making to the field. 
Also, CHI 2014 saw the debut of the Georgia Tech-curated wearable computing exhibit \u0022Meeting the Challenge: The Path Towards a Consumer Wearable Computer.\u0022\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.chi.gatech.edu\/2014\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EGeorgia Tech at CHI Website\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003E\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=URWYhavIPOk\u0022 target=\u0022_blank\u0022\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EEmerging app-based performance art for shared experiences\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EChoreographer and former ARTech resident artist\u003Cstrong\u003E\u0026nbsp;Jonah Bokaer\u003C\/strong\u003E\u0026nbsp;finished the first part of a two-year campus residency at Georgia Tech where he is creating \u201cApplied Movement: App Development for Choreography.\u201d Working with the Ferst Center, he is developing an app called Crowd Codes, a framework consisting of software components that enable groups to participate in a shared movement-based artistic and educational experience by using their mobile phones. 
He has conducted campus workshops and community outreach in addition to the mobile app collaboration, which is designed to explore crowd movement in public spaces on a large scale.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/jonahbokaer.net\/apps\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ERead More\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EWearables exhibit tour\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003ECommercial products for wearable computing technology -\u0026nbsp;Apple Watch and\u0026nbsp;Google Glass being the most high profile -\u0026nbsp;are now being widely announced and becoming a part of the public consciousness. Georgia Tech researchers, led by Professor\u0026nbsp;\u003Cstrong\u003EThad Starner\u003C\/strong\u003E\u0026nbsp;and Research Scientist\u0026nbsp;\u003Cstrong\u003EClint Zeagler\u003C\/strong\u003E,\u0026nbsp;curated a one-of-a-kind collection of wearable technology in 2014 to show the path that the technology has taken through the decades and in different industries. The exhibit - \u0022Meeting the Challenge: The Path Towards a Consumer Wearable Computer\u0022 - was shown at\u0026nbsp;several major international\u0026nbsp;venues (starting at\u0026nbsp;\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=9q7PCy28BvU\u0022 target=\u0022_blank\u0022\u003ECHI 2014\u003C\/a\u003E)\u0026nbsp;during the summer and fall. 
In Germany, it was featured\u0026nbsp;at the\u0026nbsp;\u003Ca href=\u0022http:\/\/www.clintzeagler.com\/2014\/06\/16\/meeting-berlin-wearable-computing-exhibition-at-the-factory\/\u0022 target=\u0022_blank\u0022\u003EFactory Berlin at the Berlin Wall\u003C\/a\u003E, the\u0026nbsp;\u003Ca href=\u0022http:\/\/www.clintzeagler.com\/2014\/07\/15\/meeting-merkel-wearable-exhibition-travels-to-german-cdu-headquarters\/\u0022 target=\u0022_blank\u0022\u003EChristian Democratic Union Headquarters\u003C\/a\u003E, and the\u0026nbsp;\u003Ca href=\u0022http:\/\/www.clintzeagler.com\/2014\/08\/14\/meeting-munich-deutsches-museum-exhibition-august-11-2014-september-26-2014\/\u0022 target=\u0022_blank\u0022\u003EDeutsches Museum\u003C\/a\u003E. Next, it made its way to the\u0026nbsp;\u003Ca href=\u0022http:\/\/www.clintzeagler.com\/2014\/10\/11\/meeting-tianjin-world-economic-forum\/\u0022 target=\u0022_blank\u0022\u003EWorld Economic Forum\u003C\/a\u003E\u0026nbsp;in China. The exhibit\u0027s\u0026nbsp;public debut in the United States is at Georgia Tech this month\u0026nbsp;\u003Ca href=\u0022http:\/\/wcc.gatech.edu\/content\/opening-reception-january-8th-2015\u0022 target=\u0022_blank\u0022\u003Ethrough Jan. 
23\u003C\/a\u003E.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2014\/05\/29\/future-and-history-wearable-computing\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ERead More\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=-Dj_jtB368o\u0022 target=\u0022_blank\u0022\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EData science for social good\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EAs part of the Data Science for Social Good internship program, sponsored by Georgia Tech and Oracle, GT\u0026nbsp;students talked\u0026nbsp;with farmers and volunteers over a 10-week period during the summer\u0026nbsp;about\u0026nbsp;crops, planting schedules, harvest requests, visitor demographics and other data crucial to\u0026nbsp;daily operations.\u0026nbsp;Urban agriculture, the students realized, is a complex undertaking. Their challenge was to create a streamlined data management system for the farm. 
Program Director and Professor\u0026nbsp;\u003Cstrong\u003EEllen Zegura\u003C\/strong\u003E\u0026nbsp;said the program allowed\u0026nbsp;students to solve real-world problems instead of relying on sample data sets, and\u0026nbsp;it\u0026nbsp;educated\u0026nbsp;local non-profits on the need for better data systems.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2014\/06\/30\/georgia-tech-uses-data-science-promote-social-good\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ERead More\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/artnotart.org\/farnear\/projet\/projet.html\u0022 target=\u0022_blank\u0022\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EBending narratives for new digital\u0026nbsp;experiences\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EProjet is a location-based story using a panoramic visual effect and narration to transport the viewer metaphorically to the French Massif Central. Professor\u0026nbsp;\u003Cstrong\u003EJay Bolter\u003C\/strong\u003E\u0026nbsp;discusses the project, which is intended as the first in a series of such narratives to explore how panoramas can establish the visual counterpart to text narratives, creating a sense of space and location.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=VfJPQSbQXXM\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EVideo\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EWearable tech of many designs\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EGeorgia Tech continues to advance several research innovations that are helping to shape a wearable computing future rich with applications. 
Among Georgia Tech\u2019s accepted work at the International Symposium on Wearable Computers\u0026nbsp;in September was\u0026nbsp;\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=arqrxglMzIw\u0022 target=\u0022_blank\u0022\u003Ewearable dance technology\u003C\/a\u003E\u0026nbsp;that garnered a Design Exhibition Jury Award, and\u0026nbsp;\u003Ca href=\u0022http:\/\/www.news.gatech.edu\/2014\/06\/23\/wearable-computing-gloves-can-teach-braille-even-if-you%E2%80%99re-not-paying-attention\u0022 target=\u0022_blank\u0022\u003Evibrating gloves\u003C\/a\u003E\u0026nbsp;that allow users to learn braille by simply wearing the haptic-enhanced device. The gloves were nominated for a 2014 Smithsonian People\u2019s Design Award.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/gvu.gatech.edu\/wearable-tech-innovations\u0022 target=\u0022_blank\u0022\u003ERead More\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003E\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EResearch Showcase and Foley Scholars Dinner\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EThe biannual GVU Center Research Showcase invited visitors in October\u0026nbsp;to an alternate reality populated with artificial intelligences, devices to communicate with animals, augmented landscapes bending space and time, computer-embedded fashion garments, futuristic screen experiences, auditory technologies, and much more. 
Homecoming week also recognized the 2014-2015 Foley Scholars, whose work exemplifies\u0026nbsp;computing-powered innovations that\u0026nbsp;guide\u0026nbsp;users through a rapidly shifting technology culture.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/gvu.gatech.edu\/homecoming-2014\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ERead More\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EVisualizing the world, one data set at a time\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EAt VIS 2014 - consisting of\u0026nbsp;IEEE\u0027s joint conferences on Visual Analytics Science and Technology, Information Visualization, and Scientific Visualization - Georgia Tech researchers played a leading role in the proceedings, which marked the 25th anniversary of academic research in the field. Professor\u003Cstrong\u003E\u0026nbsp;John Stasko\u003C\/strong\u003E, co-chair of the VIS25 committee, says there is a growing \u2018democratization\u2019 of data visualization where more people and organizations can now create sophisticated interactive visualizations due to some of the tools and toolkits that the\u0026nbsp;research community has created.\u0026nbsp;Georgia Tech\u0027s contributions this year provided both new visualization techniques and case studies of visualization applied to real world problems from areas such as finance, network cybersecurity, pediatric asthma care, and marine biology.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022http:\/\/gvu.gatech.edu\/visualization-2014\u0022 target=\u0022_blank\u0022\u003ERead More\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Cp\u003E\u003C\/p\u003E\u003Ch3\u003E\u003Cstrong\u003EGraduate researchers 
discuss what drives them in their chosen fields\u003C\/strong\u003E\u003C\/h3\u003E\u003Cp\u003EHuman-Centered Computing doctoral candidates\u0026nbsp;\u003Cstrong\u003EAlexander Zook\u003C\/strong\u003E\u0026nbsp;and\u0026nbsp;\u003Cstrong\u003EDeana Brown\u003C\/strong\u003E\u0026nbsp;and Music Technology doctoral candidate\u0026nbsp;\u003Cstrong\u003EMason Bretan\u003C\/strong\u003E\u0026nbsp;talk about what makes them passionate about their research and what it involves - the graduate\u0026nbsp;work combines technical depth with a focus on human impact, at scales ranging from the individual to the societal.\u0026nbsp;The researchers took time out at the end of the year to share their stories, which show not only insight into their research, but the collaborative nature of the community\u0026nbsp;fostered by GVU founder and Professor\u003Cstrong\u003E\u0026nbsp;James Foley.\u0026nbsp;\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022http:\/\/gvu.gatech.edu\/research\/grants-and-scholarships\/james-d-foley-gvu-center-endowment\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ERead More\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E- See more at: http:\/\/gvu.gatech.edu\/2014-year-review#sthash.8WjBepAX.dpuf\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EIn 2014, GVU Center researchers broke new ground on how to get the most out of technology interactions. 
This snapshot of our community of researchers shows a small sample of computing possibilities becoming reality through the collaborative and dynamic environments at Georgia Tech and the GVU Center.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"In 2014, GVU Center researchers broke new ground on how to get the most out of technology interactions."}],"uid":"27592","created_gmt":"2015-01-20 11:38:32","changed_gmt":"2016-10-08 03:17:54","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2015-01-20T00:00:00-05:00","iso_date":"2015-01-20T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJoshua Preston\u003C\/p\u003E\u003Cp\u003E678.231.0787\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"349021":{"#nid":"349021","#data":{"type":"news","title":"Ph.D. 
Candidate\u2019s Barbie Book Remix Ties to Fair Use Research","body":[{"value":"\u003Cp class=\u0022p1\u0022\u003EIn November, a\u0026nbsp;\u003Ca href=\u0022http:\/\/www.dailydot.com\/geek\/sexist-barbie-book-stem-remix-engineer-gaming\/\u0022 target=\u0022_blank\u0022\u003Ewidely viewed\u003C\/a\u003E\u0026nbsp;and\u0026nbsp;\u003Ca href=\u0022http:\/\/www.npr.org\/2014\/11\/22\/365968465\/after-backlash-computer-engineer-barbie-gets-new-set-of-skills\u0022\u003Ewell-received\u003C\/a\u003E\u0026nbsp;\u003Ca href=\u0022https:\/\/cfiesler.files.wordpress.com\/2014\/11\/barbieremixed.pdf\u0022\u003Edigital\u0026nbsp;remix\u003C\/a\u003E\u0026nbsp;of the children\u2019s book \u201cBarbie: I Can Be a Computer Engineer\u201d was the product of not only Casey Fiesler\u2019s dislike of the original plot, but a practical application of her research into copyright in online communities.\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EFiesler, a Ph.D. Candidate in Human-Centered Computing at Georgia Tech, took the narrative - which had little to do with contemporary issues in computing and has since been pulled from bookshelves - and rewrote it, with contributions from HCC student Miranda Parker. The Barbie remix directly applies to her research in \u0022fair use,\u0022 a part of U.S. copyright law that allows for the use of copyrighted material without permission from the owners in certain instances. \u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0022One of the core reasons that fair use exists is for criticism,\u0022 says Fiesler. 
\u0022A noncommercial, transformative work that uses copyrighted material in order to critique the original content,\u0026nbsp;particularly in parody,\u0026nbsp;is a textbook example of fair use.\u0026nbsp;\u0022\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EAccording to her recent research, \u201cthe law around reuse and remix is particularly confusing, and this kind of creativity is really common: everything from remix videos on YouTube to image memes shared on Facebook.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EConducting a large-scale qualitative analysis of public forum posts, Fiesler,\u0026nbsp;Jessica Feuston, and advisor Amy S. Bruckman\u0026nbsp;found most conversations related to copyright expressed some kind of \u0022problem.\u0022 The eight websites reviewed for the study, from\u0026nbsp;earlier this year,\u0026nbsp;included top communities for writing, video, music and art. The YouTube data set, which had more than 1 in 10 posts (13%) related to copyright, shows the highest level of discussion on the topic. The overall findings show a range of concerns from users, including avoiding trouble, dealing with accusations of copyright infringement and parsing incomplete or conflicting information.\u0026nbsp;\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EFiesler\u2019s group saw evidence of a general chilling effect, with some content creators simply not publishing online because of the perceived hassle or changing the website where they chose to publish. 
The study also provides recommendations for online community designers and maintainers, including monitoring user concerns on copyright and rewriting policies on copyright in \u201cplain English.\u201d The\u0026nbsp;\u003Ca href=\u0022https:\/\/cfiesler.files.wordpress.com\/2014\/10\/fiesler_cscw2015.pdf\u0022 target=\u0022_blank\u0022\u003Eresearch study\u003C\/a\u003E\u0026nbsp;is\u0026nbsp;being presented in March at CSCW 2015.\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003E\u0022Unfortunately, fair use can be confusing and scary, especially with so much misinformation floating around,\u0022 says Fiesler.\u0026nbsp;\u0022My advice would be to learn as much as you can, because the more aware of your legal rights you are, the more confident you\u0027ll be.\u0022\u003C\/p\u003E\u003Cp class=\u0022p1\u0022\u003EAfter publishing her Barbie Remix, Fiesler posted on her blog details about\u0026nbsp;\u003Ca href=\u0022http:\/\/caseyfiesler.com\/2014\/11\/24\/fair-use-barbie\/\u0022 target=\u0022_blank\u0022\u003Emisconceptions of fair use\u003C\/a\u003E. She says that anyone facing trouble for a creative work that they think is fair use should use public resources for help, including organizations such as the Electronic Frontier Foundation or the Organization for Transformative Works.\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA digital remix\u0026nbsp;of the children\u2019s book \u201cBarbie: I Can Be a Computer Engineer\u201d was the product of not only Casey Fiesler\u2019s dislike of the original plot, but a practical application of the Ph.D. candidate\u0027s research into copyright in online communities.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A digital remix of the children\u2019s book \u201cBarbie: I Can Be a Computer Engineer\u201d was the product of not only Casey Fiesler\u2019s dislike of the original plot, but a practical application of the Ph.D. 
candidate\u0027s research into copyright in online communities"}],"uid":"27592","created_gmt":"2014-11-25 13:43:29","changed_gmt":"2016-10-08 03:17:34","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2014-11-25T00:00:00-05:00","iso_date":"2014-11-25T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"349031":{"id":"349031","type":"image","title":"Barbie Remix","body":null,"created":"1449245696","gmt_created":"2015-12-04 16:14:56","changed":"1475895073","gmt_changed":"2016-10-08 02:51:13","alt":"Barbie Remix","file":{"fid":"201008","name":"barbieremix1.jpg","image_path":"\/sites\/default\/files\/images\/barbieremix1_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/barbieremix1_0.jpg","mime":"image\/jpeg","size":262686,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/barbieremix1_0.jpg?itok=Mt3pYKlA"}}},"media_ids":["349031"],"groups":[{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71871","name":"Campus and Community"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003EGVU Center, College of Computing\u003C\/p\u003E\u003Cp\u003E678.231.0787\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}