{"689256":{"#nid":"689256","#data":{"type":"news","title":"New Study Shows Explainability is a Must for Older Adults to Trust AI","body":[{"value":"\u003Cp\u003EVoice-activated, conversational artificial intelligence (AI) agents must provide clear explanations for their suggestions, or older adults aren\u2019t likely to trust them.\u003C\/p\u003E\u003Cp\u003EThat\u2019s one of the main findings from a study by AI Caring on what older adults expect from explainable AI (XAI).\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/ai-caring.org\/\u0022\u003E\u003Cstrong\u003EAI Caring\u003C\/strong\u003E\u003C\/a\u003E is one of three AI Institutes led by Georgia Tech and funded by the National Science Foundation (NSF). The institute supports AI research that benefits older adults and their caregivers.\u003C\/p\u003E\u003Cp\u003ENiharika Mathur, a Ph.D. candidate in the School of Interactive Computing, was the lead author of a paper based on the study. The paper will be presented in April at the \u003Ca href=\u0022https:\/\/chi2026.acm.org\/\u0022\u003E\u003Cstrong\u003E2026 ACM Conference on Human Factors in Computing Systems (CHI) in Barcelona\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EMathur worked with the \u003Ca href=\u0022https:\/\/empowerment.emory.edu\/\u0022\u003E\u003Cstrong\u003ECognitive Empowerment Program at Emory University\u003C\/strong\u003E\u003C\/a\u003E to interview 23 older adults who live alone and use voice-activated AI assistants like Amazon\u2019s Alexa and Google Home.\u003C\/p\u003E\u003Cp\u003EMany of them told her they feel excluded from the design of these products.\u003C\/p\u003E\u003Cp\u003E\u201cThe assumption is that all people want interactions the same way and across all kinds of situations, but that isn\u2019t true,\u201d Mathur said. 
\u201cHow older people use AI and what they want from it are different from what younger people prefer.\u201d\u003C\/p\u003E\u003Cp\u003EOne example she gave is that young people tend to be informal when talking with AI. Older people, on the other hand, talk to the agent like they would a person.\u003C\/p\u003E\u003Cp\u003E\u201cIf older adults are talking to their family members about Alexa, they usually refer to Alexa as \u2018she\u2019 instead of \u2018it,\u2019\u201d Mathur said. \u201cThey tend to humanize these systems a lot more than young people.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EGood Explanations\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe study evaluated AI explanations that drew information from four sources of data:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EUser history (past conversations with the agent)\u003C\/li\u003E\u003Cli\u003EEnvironmental data (indoor temperature or the weather forecast)\u003C\/li\u003E\u003Cli\u003EActivity data (how much time a user spends in different areas of the home)\u003C\/li\u003E\u003Cli\u003EInternal reasoning (mathematical probabilities and likely outcomes)\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EMathur said older users trust the agent more when it bases its explanations on data from the first three sources. However, internal reasoning creates skepticism.\u003C\/p\u003E\u003Cp\u003EInternal reasoning means the AI doesn\u2019t have enough data from the other sources to give an explanation. It provides a percentage to reflect its confidence based on what it knows.\u003C\/p\u003E\u003Cp\u003E\u201cThe overwhelming response was negative toward confidence scores,\u201d Mathur said. 
\u201cIf the AI says it\u2019s 92% confident, older adults want to know what that\u2019s based on.\u201d\u003C\/p\u003E\u003Cp\u003EThis is another example that Mathur said points to generational preferences.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s a lot of explainable AI research that shows younger people like to see numbers in explanations, and they also tend to rely too much on explanations that contain numerical confidence. Older adults are the opposite. It makes them trust it less.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EKnowing the Context\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EShe discovered that in urgent situations, older users prefer the AI to be straightforward, while in casual settings, they desire more conversation.\u003C\/p\u003E\u003Cp\u003E\u201cHow people interact with technological systems is grounded in what the stakes of the situation are,\u201d she said. \u201cIf it had anything to do with their immediate sense of safety, they did not want conversational elaboration. They want the AI to be very direct and factual.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ENot Just Checking Boxes\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EMathur said AI agents that interact with older adults are ideally constructed with a dual purpose. 
They should provide companionship and autonomy for the users while alleviating the burden of caretaking that is often placed on their family members.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESome studies have shown that engineers have tended to favor caretakers in the design of these tools. They prioritize daily tasks and routines, leaving some older adults to feel like they are a box to be checked.\u003C\/p\u003E\u003Cp\u003E\u201cThey\u2019re not being thought of as consumers,\u201d Mathur said. \u201cA lot of products are being made for them but not with them.\u201d\u003C\/p\u003E\u003Cp\u003EShe also said psychological well-being is one of the most important outcomes these tools should produce.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EShowing older adults that they are listened to can significantly help in gaining their trust. Some interviewees told Mathur they want agents that are deliberate about understanding their preferences and don\u2019t dismiss their questions.\u003C\/p\u003E\u003Cp\u003EMeeting these needs makes older adults less likely to resist the technology or come into conflict with family members.\u003C\/p\u003E\u003Cp\u003E\u201cIt highlights just how important well-designed explanations are,\u201d she said. \u201cWe must go beyond a transparency checklist.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EAn AI Caring study led by Georgia Tech researchers shows that older adults are more likely to trust conversational AI systems that provide them with clear explanations for their decision-making. 
The study also shows that including older adults more in the design process benefits their well-being and reduces the caretaking burden of family members\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A Georgia Tech study finds older adults are more likely to trust voice-activated AI systems when those systems clearly explain how and why they make decisions."}],"uid":"36530","created_gmt":"2026-03-31 14:01:07","changed_gmt":"2026-03-31 14:04:59","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-03-31T00:00:00-04:00","iso_date":"2026-03-31T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679796":{"id":"679796","type":"image","title":"0A6A0355.jpg","body":null,"created":"1774965687","gmt_created":"2026-03-31 14:01:27","changed":"1774965687","gmt_changed":"2026-03-31 14:01:27","alt":"An older couple sitting on a couch as a man helps them use Amazon\u0027s Alexa","file":{"fid":"263999","name":"0A6A0355.jpg","image_path":"\/sites\/default\/files\/2026\/03\/31\/0A6A0355.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/03\/31\/0A6A0355.jpg","mime":"image\/jpeg","size":171883,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/03\/31\/0A6A0355.jpg?itok=t62aVqXD"}}},"media_ids":["679796"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"},{"id":"193860","name":"Artifical Intelligence"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"14342","name":"older 
adults"},{"id":"148721","name":"Amazon Alexa"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"689250":{"#nid":"689250","#data":{"type":"news","title":"Researchers Look to Bolster Technology Support for Menopause","body":[{"value":"\u003Cp\u003EWomen in need of supportive maternal and menstrual healthcare in patriarchal societies have increasingly found outlets for disclosure in online communities.\u003C\/p\u003E\u003Cp\u003EThat support, however, begins to disappear in these restrictive cultures once women reach menopause, according to new research from Georgia Tech.\u003C\/p\u003E\u003Cp\u003ENaveena Karusala, an assistant professor in Georgia Tech\u2019s School of Interactive Computing, and master\u2019s student Umme Ammara are working toward improving existing technologies and designing new ones for a demographic they believe has been neglected.\u003C\/p\u003E\u003Cp\u003EKarusala and Ammara co-authored a paper based on a study they conducted with women in urban Pakistan experiencing menopause.\u003C\/p\u003E\u003Cp\u003E\u201cWomen\u2019s health is understudied in general, but menopause is more neglected than other women\u2019s health issues,\u201d Karusala said. 
\u201cOur choice to focus on menopause is motivated by expanding how we holistically think about women\u2019s well-being across their lifespan.\u201d\u003C\/p\u003E\u003Cp\u003EKarusala and Ammara will present their paper in April at the 2026 ACM Conference on Human Factors in Computing Systems (CHI) in Barcelona.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EMasking Symptoms\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EMenopause is diagnosed after 12 consecutive months without a period, vaginal bleeding, or spotting. The transition to menopause, called perimenopause, usually happens over two to eight years.\u003C\/p\u003E\u003Cp\u003EHormone changes may cause symptoms such as irregular periods, vaginal dryness, hot flashes, night sweats, trouble sleeping, mood swings, and brain fog.\u003C\/p\u003E\u003Cp\u003EThese symptoms can be debilitating in some cases and affect daily life. However, Ammara said women are pressured to remain silent, maintain appearances, and regulate their emotions to meet social expectations.\u003C\/p\u003E\u003Cp\u003E\u201cUnderstanding menopause is important because a woman would be experiencing all these symptoms, and people will not understand those as actual symptoms,\u201d Ammara said. \u201cThere\u2019s been resistance to the idea of the medicalization of menopause. 
People don\u2019t view it as an illness, but as a life transition and something that happens naturally.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EFeeling Isolated\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe women interviewed by Karusala and Ammara either stayed at home full-time or were part of the workforce.\u003C\/p\u003E\u003Cp\u003EThe researchers discovered that women who stay at home and do not work might have no one but trusted family members to turn to for disclosure.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWomen at home have the flexibility to take breaks or work at their own pace, so a lot of their experience is shaped by the emotional barriers they face,\u201d Ammara said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThat could come from their husbands and family members. Some are supportive and some are not. They might weaponize it and use that term against them, or they might dismiss what they\u2019re going through.\u201d\u003C\/p\u003E\u003Cp\u003EAmmara said it might be easier for women in the workforce to confide in their coworkers, but explaining to an employer that they need sick leave for menopause symptoms can be intimidating.\u003C\/p\u003E\u003Cp\u003EEven in online communities that have enabled women to anonymously share their health experiences, menopause is seldom discussed.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ERaising Awareness\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EKarusala and Ammara argue in their paper that a public health approach could be the most effective way to spark conversation about menopause in a patriarchal culture in which technology use varies.\u003C\/p\u003E\u003Cp\u003EThey said the challenge in implementing technologies geared toward menopause support is that the condition isn\u2019t well understood in public. 
Improving maternal health, for example, is easier to promote within these societies because of the general understanding that motherhood is important.\u003C\/p\u003E\u003Cp\u003E\u201cThere must be an existing infrastructure to build on,\u201d Karusala said. \u201cFor example, menstrual and maternal health are taught in schools and regularly discussed in primary care. Cultural and social meaning and importance are placed on motherhood.\u003C\/p\u003E\u003Cp\u003E\u201cA lot of that doesn\u2019t exist for menopause. Primary care doctors are unprepared to talk about menopause compared to other health issues.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EDesign Solutions\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EAmmara said that the most effective way for technologies to make an impact on women going through menopause is to directly address systemic power structures around women\u2019s health within Pakistani culture.\u003C\/p\u003E\u003Cp\u003EIt can start with the husbands.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cFraming the issue for husbands to understand menopause should be at the forefront of designing technology solutions,\u201d she said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIn Islamic contexts, we suggest using faith-based framings. This has been proposed for maternal health in prior works that draw on Islamic principles to engage expectant fathers in providing care and support. Framing it around religious responsibility to involve men in the journey can also be done for menopause.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech assistant professor Naveena Karusala and master\u0027s student Umme Ammara are researching how to improve existing technologies and design new ones to better support women experiencing menopause. 
Their work is based on a study conducted with women in urban Pakistan, where patriarchal social norms pressure women to stay silent about menopause symptoms and limit their ability to seek support, even in online communities that have otherwise helped women discuss other health issues\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers are looking at how technology can better support women experiencing menopause in urban Pakistan, where patriarchal norms leave them largely isolated and without resources for managing their symptoms."}],"uid":"36530","created_gmt":"2026-03-31 12:09:13","changed_gmt":"2026-03-31 13:18:07","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-03-30T00:00:00-04:00","iso_date":"2026-03-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679788":{"id":"679788","type":"image","title":"Ammara-Umme_86A2210.jpg","body":null,"created":"1774958961","gmt_created":"2026-03-31 12:09:21","changed":"1774958961","gmt_changed":"2026-03-31 12:09:21","alt":"Umme Ammar sits in a booth with laptop in front of her","file":{"fid":"263990","name":"Ammara-Umme_86A2210.jpg","image_path":"\/sites\/default\/files\/2026\/03\/31\/Ammara-Umme_86A2210.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/03\/31\/Ammara-Umme_86A2210.jpg","mime":"image\/jpeg","size":95810,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/03\/31\/Ammara-Umme_86A2210.jpg?itok=7jqYXbcn"}}},"media_ids":["679788"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"8900","name":"women\u0027s history 
month"},{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"},{"id":"3543","name":"women\u0027s health"},{"id":"171911","name":"women of pakistan"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71891","name":"Health and Medicine"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:ndeen6@gatech.edu\u0022\u003ENathan Deen\u003C\/a\u003E\u003Cbr\u003ECollege of Computing\u003Cbr\u003EGeorgia Tech\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"689007":{"#nid":"689007","#data":{"type":"news","title":"New Mobile App Turns Phones into At-Home Fetal Heart Monitors","body":[{"value":"\u003Cdiv\u003E\u003Cp\u003EA new mobile app will soon put the ability to monitor a baby\u2019s prenatal heartbeat in the hands of pregnant women who may worry about their baby\u2019s health in between doctor\u2019s visits.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EStudies show that one in five pregnant women experiences \u003Ca href=\u0022https:\/\/theconversation.com\/perinatal-anxiety-one-in-five-women-experience-it-but-many-still-suffer-alone-before-or-after-childbirth-133667\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003Eperinatal anxiety\u003C\/a\u003E, which is characterized by intense negative thoughts about their pregnancy.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EDopFone turns any smartphone speaker into a Doppler radar by emitting a low-pitched ultrasound and detecting reflected signals of abdominal surface vibrations caused by a fetal heartbeat.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.alexandertadams.com\/\u0022 rel=\u0022noreferrer noopener\u0022 
target=\u0022_blank\u0022\u003E\u003Cstrong\u003EAlex Adams\u003C\/strong\u003E\u003C\/a\u003E, an assistant professor in Georgia Tech\u2019s School of Interactive Computing, said he came up with the idea for DopFone as he and his wife, Elise, experienced two miscarriages. At the time, she couldn\u2019t reliably measure the fetal heart rate with a standard fetal Doppler monitor.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EThose experiences exposed gaps in the maternal healthcare process.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003E\u201cThere are a lot of great devices in hospitals and clinics, but there\u2019s not much outside of those venues, even for high-risk pregnancies,\u201d Adams said. \u201cThis is about filling the gaps between checkups.\u201d\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.poojitagarg.com\/\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EPoojita Garg\u003C\/strong\u003E\u003C\/a\u003E joined Adams to work on DopFone while completing her master\u2019s degree at Georgia Tech. She is now pursuing her Ph.D. at the University of Washington and is co-advised by Professor Swetak Patel, who earned his Ph.D. 
from Georgia Tech in 2008.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EGarg is working with the University of Washington School of Medicine to conduct DopFone\u2019s first clinical trials.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EGarg tested DopFone on 23 patients and achieved a plus-minus of 4.9 beats per minute, well within the clinical standard range of eight beats per minute for reliable fetal heart rate measurement.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EAdams said it measured within two beats per minute in most cases, with an error rate of less than one percent.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EAbout one million pregnancies in the U.S. end in miscarriage, \u003Ca href=\u0022https:\/\/medicine.yale.edu\/news-article\/dr-harvey-kliman-study-finds-the-placenta-holds-answers-to-many-unexplained-pregnancy-losses\/\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003Eaccording to a study from the Yale School of Medicine\u003C\/a\u003E, and doctors know little about what causes them. Adams said that number is probably higher because many go unreported.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EAdams and Garg said it\u2019s unclear whether the innovation could reduce the number of miscarriages. However, consistent fetal heart rate data collection outside of the doctor\u2019s office could provide a better idea of what happens leading up to a miscarriage.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003E\u201cFrom there, we can take preventative action,\u201d Adams said. 
\u201cIf nothing else, we can give a sense of comfort to those who may be worried.\u201d\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003E\u003Cstrong\u003EExpanding Access\u003C\/strong\u003E\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EWhile couples can purchase portable fetal heart rate monitors, Adams and Garg see DopFone as a low-cost alternative for those who live in areas with limited or inaccessible healthcare systems.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003E\u201cThere\u2019s a lot of potential for using it in what doctors like to call maternity deserts,\u201d Garg said. \u201cThese are areas where a pregnant person, at the time of delivery, would have to travel long distances to reach a hospital. This technology will be useful globally in underdeveloped areas of the world.\u201d\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EThe researchers also mentioned that external add-ons and attachments aren\u2019t part of their design goals. They prefer to rely on the phone\u2019s built-in features to keep the technology accessible.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003E\u201cThe real value is that 96% of America already has the technology in their pocket, along with 60% of the world\u2019s population,\u201d Adams said. \u201cHalf of the battle is having the right tools. The more we can get from what\u2019s already in the phone, the more we can guarantee people have access to it.\u201d\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003E\u003Cstrong\u003ENot a Substitute\u003C\/strong\u003E\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003ESome patients may feel a constant need to check their unborn child\u2019s heart rate, and Garg acknowledged that a tool like DopFone could increase that anxiety. 
She and Adams said a future version of the app will tell the parent if the heart rate is within a healthy range.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003E\u201cThere\u2019s a lot of tradeoffs between a tool that could provide reassurance or create anxiety,\u201d she said. \u201cWe want the use of this tool to be recommended by a doctor and for doctors and their care teams to be kept in the loop.\u201d\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003EShe also said DopFone is not meant to replace anything that is done in a clinic.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cp\u003E\u201cThere are devices that make the whole process possible at home, but this is something that should be done in a clinic, so that\u2019s the line we want to draw,\u201d she said.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EDopFone uses smartphone speakers to emit a low-pitched ultrasound that detects reflected signals of abdominal surface vibrations caused by fetal cardiac activity.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.alexandertadams.com\/\u0022\u003E\u003Cstrong\u003EAlex Adams\u003C\/strong\u003E\u003C\/a\u003E, an assistant professor in Georgia Tech\u2019s School of Interactive Computing, said he came up with the idea for DopFone as he and his wife, Elise, suffered through two miscarriages.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.poojitagarg.com\/\u0022\u003E\u003Cstrong\u003EPoojita Garg\u003C\/strong\u003E\u003C\/a\u003E joined Adams to work on DopFone while completing her master\u2019s at Georgia Tech. She is now pursuing her Ph.D. at the University of Washington and is co-advised by Professor Swetak Patel, who earned his Ph.D. 
from Georgia Tech in 2008.\u003C\/p\u003E\u003Cp\u003EGarg is working with the University of Washington School of Medicine to conduct DopFone\u2019s first clinical trials.\u003C\/p\u003E\u003Cp\u003EGarg tested DopFone on 23 patients and achieved a plus-minus of 4.9 beats per minute, well within the clinical standard for reliable fetal heart rate measurement of plus-minus 8 beats per minute.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new app will allow pregnant women to conduct an ultrasound and receive an accurate fetal heart rate from their mobile phones."}],"uid":"36530","created_gmt":"2026-03-18 13:23:19","changed_gmt":"2026-03-23 13:16:06","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-03-18T00:00:00-04:00","iso_date":"2026-03-18T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679666":{"id":"679666","type":"image","title":"DopFone-PR-Photo-with-blur.jpg","body":null,"created":"1773840209","gmt_created":"2026-03-18 13:23:29","changed":"1773840209","gmt_changed":"2026-03-18 13:23:29","alt":"Woman holds mobile phone to the belly of a pregnant woman","file":{"fid":"263850","name":"DopFone-PR-Photo-with-blur.jpg","image_path":"\/sites\/default\/files\/2026\/03\/18\/DopFone-PR-Photo-with-blur.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/03\/18\/DopFone-PR-Photo-with-blur.jpg","mime":"image\/jpeg","size":113510,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/03\/18\/DopFone-PR-Photo-with-blur.jpg?itok=A5qhfUr7"}}},"media_ids":["679666"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and 
Security"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"},{"id":"181431","name":"maternal"},{"id":"7677","name":"ultrasound"},{"id":"34741","name":"mobile app"},{"id":"29561","name":"pregnancy"},{"id":"190383","name":"pregnant women"},{"id":"168908","name":"smartphone"},{"id":"188420","name":"babies"},{"id":"178046","name":"fetal monitoring"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71891","name":"Health and Medicine"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"688391":{"#nid":"688391","#data":{"type":"news","title":"Robot Pollinator Could Produce More, Better Crops for Indoor Farms","body":[{"value":"\u003Cp\u003EA new robot could solve one of the biggest challenges facing indoor farmers: manual pollination.\u003C\/p\u003E\u003Cp\u003EIndoor farms, also known as vertical farms, are popular among agricultural researchers and are expanding across the agricultural industry. 
Some benefits they have over outdoor farms include:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EYear-round production of food crops\u003C\/li\u003E\u003Cli\u003ELower water and land requirements\u003C\/li\u003E\u003Cli\u003ENo need for pesticides\u003C\/li\u003E\u003Cli\u003EReduced carbon emissions from shipping\u003C\/li\u003E\u003Cli\u003EReduced food waste\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EAdditionally,\u0026nbsp;\u003Ca href=\u0022https:\/\/www.agritecture.com\/blog\/2021\/7\/20\/5-ways-vertical-farming-is-improving-nutrition\u0022\u003E\u003Cstrong\u003Esome studies\u003C\/strong\u003E\u003C\/a\u003E indicate that indoor farms produce more nutritious food for urban communities.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHowever, these farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/research.gatech.edu\/people\/ai-ping-hu\u0022\u003E\u003Cstrong\u003EAi-Ping Hu\u003C\/strong\u003E\u003C\/a\u003E, a principal research engineer at the Georgia Tech Research Institute (GTRI), has spent years exploring methods to efficiently pollinate flowering plants and food crops in indoor farms.\u003C\/p\u003E\u003Cp\u003EHu,\u0026nbsp;\u003Ca href=\u0022https:\/\/research.gatech.edu\/people\/shreyas-kousik\u0022\u003E\u003Cstrong\u003EAssistant Professor Shreyas Kousik of the George W. Woodruff School of Mechanical Engineering\u003C\/strong\u003E\u003C\/a\u003E, and a rotating group of student interns have developed a robot prototype that may be up to the task.\u003C\/p\u003E\u003Cp\u003EThe robot can efficiently pollinate plants that have both male and female reproductive parts. 
These plants only require pollen to be transferred from one part to the other rather than externally from another flower.\u003C\/p\u003E\u003Cp\u003ENatural pollinators perform this task outdoors, but Hu said indoor farmers often use a paintbrush or electric toothbrush to ensure these flowers are pollinated.\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EKnowing the Pose\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EAn early challenge the research team addressed was teaching the robot to identify the \u201cpose\u201d of each flower. Pose refers to a flower\u2019s orientation, shape, and symmetry. Knowing these details ensures precise delivery of the pollen to maximize reproductive success.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s crucial to know exactly which way the flowers are facing,\u201d Hu said.\u003C\/p\u003E\u003Cp\u003E\u201cYou want to approach the flower from the front because that\u2019s where all the biological structures are. Knowing the pose tells you where the stem is. Our device grasps the stem and shakes it to dislodge the pollen.\u003C\/p\u003E\u003Cp\u003E\u201cEvery flower is going to have its own pose, and you need to know what that is within at least 10 degrees.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EComputer Vision Breakthrough\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003E\u003Cstrong\u003EHarsh Muriki\u003C\/strong\u003E is a robotics master\u2019s student at Georgia Tech\u2019s School of Interactive Computing who used computer vision to solve the pose problem while interning for Hu and GTRI.\u003C\/p\u003E\u003Cp\u003EMuriki attached a camera to a FarmBot to capture images of strawberry plants from dozens of angles in a small garden in front of Georgia Tech\u2019s Food Processing Technology Building. 
The\u0026nbsp;\u003Ca href=\u0022https:\/\/farm.bot\/?srsltid=AfmBOoqh1Z8vSs3WflZisgw5DsOUSo8shD4VtY0Y8_VmVpVyt0Iwalxo\u0022\u003E\u003Cstrong\u003EFarmBot\u003C\/strong\u003E\u003C\/a\u003E is an XYZ-axis robot that waters and sprays pesticides on outdoor gardens, though it is not capable of pollination.\u003C\/p\u003E\u003Cp\u003E\u201cWe reconstruct the images of the flower into a 3D model and use a technique that converts the 3D model into multiple 2D images with depth information,\u201d Muriki said. \u201cThis enables us to send them to object detectors.\u201d\u003C\/p\u003E\u003Cp\u003EMuriki said he used a real-time object detection system called YOLO (You Only Look Once) to classify objects. YOLO is known for identifying and classifying objects in a single pass.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EVed Sengupta\u003C\/strong\u003E, a computer engineering major who interned with Muriki, fine-tuned the algorithms that converted 3D images into 2D.\u003C\/p\u003E\u003Cp\u003E\u201cThis was a crucial part of making robot pollination possible,\u201d Sengupta said. \u201cThere is a big gap between 3D and 2D image processing.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s not a lot of data on the internet for 3D object detection, but there\u2019s a ton for 2D. We were able to get great results from the converted images, and I think any sector of technology can take advantage of that.\u201d\u003C\/p\u003E\u003Cp\u003ESengupta, Muriki, and Hu co-authored a paper about their work that was accepted to the 2025 International Conference on Robotics and Automation (ICRA) in Atlanta.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EMeasuring Success\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe pollination robot, built in Kousik\u2019s Safe Robotics Lab, is now in the prototype phase.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHu said the robot can do more than pollinate. 
It can also analyze each flower to determine how well it was pollinated and whether the chances for reproduction are high.\u003C\/p\u003E\u003Cp\u003E\u201cIt has an additional capability of microscopic inspection,\u201d Hu said. \u201cIt\u2019s the first device we know of that provides visual feedback on how well a flower was pollinated.\u201d\u003C\/p\u003E\u003Cp\u003EFor more information about the robot, visit the\u0026nbsp;\u003Ca href=\u0022https:\/\/saferoboticslab.me.gatech.edu\/research\/towards-robotic-pollination\/\u0022\u003E\u003Cstrong\u003ESafe Robotics Lab project page\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EManual pollination is one of the biggest challenges for indoor farmers. These farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.\u003C\/p\u003E\u003Cp\u003EA Georgia Tech research team led by Ai-Ping Hu and Shreyas Kousik is working to solve that. A robot they\u0027ve developed can efficiently pollinate plants that have both male and female reproductive parts. 
These plants only require pollen to be transferred from one part to the other rather than externally from another flower.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A research team spanning GTRI, the College of Engineering, and the College of Computing has developed a robot capable of pollinating flowers in indoor farms."}],"uid":"36530","created_gmt":"2026-02-19 18:58:12","changed_gmt":"2026-03-20 12:54:01","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-02-19T00:00:00-05:00","iso_date":"2026-02-19T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679370":{"id":"679370","type":"image","title":"Harsh-Muriki_86A0006.jpg","body":null,"created":"1771527500","gmt_created":"2026-02-19 18:58:20","changed":"1771527500","gmt_changed":"2026-02-19 18:58:20","alt":"Harsh Muriki","file":{"fid":"263520","name":"Harsh-Muriki_86A0006.jpg","image_path":"\/sites\/default\/files\/2026\/02\/19\/Harsh-Muriki_86A0006.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/02\/19\/Harsh-Muriki_86A0006.jpg","mime":"image\/jpeg","size":140654,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/02\/19\/Harsh-Muriki_86A0006.jpg?itok=rd0rv1Yt"}}},"media_ids":["679370"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"145","name":"Engineering"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"9153","name":"Research Horizons"},{"id":"187991","name":"go-robotics"},{"id":"192863","name":"go-ai"},{"id":"11506","name":"computer vision"},{"id":"180840","name":"computer vision 
systems"},{"id":"669","name":"agriculture"},{"id":"194392","name":"AI in Agriculture"},{"id":"170254","name":"urban gardening"},{"id":"94111","name":"farming"},{"id":"14913","name":"urban farming"},{"id":"23911","name":"bees"},{"id":"6660","name":"flowers"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"193653","name":"Georgia Tech Research Institute"},{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71911","name":"Earth and Environment"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:ndeen6@gatech.edu\u0022\u003ENathan Deen\u003C\/a\u003E\u003Cbr\u003ECollege of Computing\u003Cbr\u003EGeorgia Tech\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"688478":{"#nid":"688478","#data":{"type":"news","title":"Student Getting Research Boost Through Google Ph.D. Fellowship","body":[{"value":"\u003Cp\u003EA Georgia Tech Ph.D. candidate is getting a boost to his research into developing more efficient multi-tasking artificial intelligence (AI) models without fine-tuning.\u003C\/p\u003E\u003Cp\u003EGeorge Stoica is one of 38 Ph.D. students worldwide researching machine learning who were named a\u003Ca href=\u0022https:\/\/research.google\/programs-and-events\/phd-fellowship\/recipients\/\u0022\u003E\u003Cstrong\u003E 2025 Google Ph.D. Fellow\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EStoica is designing AI training methods that bypass fine-tuning, which is the process of adapting a large pre-trained model to perform new tasks. 
Fine-tuning is one of the most common ways engineers update large language models like ChatGPT, Gemini, and Claude to add new capabilities.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EIf an AI company wants to give a model a new capability, it could create a new model from scratch for that specific purpose. However, if the model already has relevant training and knowledge of the new task, fine-tuning is cheaper.\u003C\/p\u003E\u003Cp\u003EStoica argues that fine-tuning still uses large amounts of data, and that other methods can help models learn more effectively and efficiently.\u003C\/p\u003E\u003Cp\u003E\u201cFull fine-tuning yields strong performance, but it can be costly, and it risks catastrophic forgetting,\u201d Stoica said. \u201cMy research asks if we can extend a model\u2019s capabilities by imbuing it with the expertise of others, without fine-tuning.\u003C\/p\u003E\u003Cp\u003E\u201cReducing cost and improving efficiency is more important than ever. We have so many publicly available models that have been trained to solve a variety of tasks. It\u2019s redundant to train a new model from scratch. It\u2019s much more efficient to leverage the information that already exists to get a model up to speed.\u201d\u003C\/p\u003E\u003Cp\u003EStoica said the solution is a cost-effective method called model merging. This method combines two or more AI models into a single model, improving performance without fine-tuning.\u003C\/p\u003E\u003Cp\u003EOn a basic level, Stoica said an example would be combining a model that is efficient at classifying cats with one that works well at classifying dogs.\u003C\/p\u003E\u003Cp\u003E\u201cMerging is cheap because you just take the parameters, the weights of your existing models, and combine them,\u201d he said. \u201cYou could take the average of the weights to create a new model, but that sometimes doesn\u2019t work. 
My work has aimed to rearrange the weights so they can communicate easily with each other.\u201d\u003C\/p\u003E\u003Cp\u003EThrough his Google fellowship, Stoica seeks to apply model merging to create a cutting-edge vision encoder. A vision encoder converts image or video data into numerical representations that computers can understand. This enables tasks such as image or facial recognition and generative image captioning.\u003C\/p\u003E\u003Cp\u003E\u201cI want to be at the frontier of the field, and Google is clearly part of that,\u201d Stoica said. \u201cThe vision encoder is very large-scale, and Google has the infrastructure to accommodate it.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorge Stoica is one of 38 Ph.D. students worldwide researching machine learning who were named a\u003Ca href=\u0022https:\/\/research.google\/programs-and-events\/phd-fellowship\/recipients\/\u0022\u003E\u003Cstrong\u003E 2025 Google Ph.D. Fellow\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EStoica is designing AI training methods that bypass fine-tuning, which is the process of adapting a large pre-trained model to perform new tasks. Fine-tuning is one of the most common ways engineers update large language models like ChatGPT, Gemini, and Claude to add new capabilities.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"George Stoica is one of 38 Ph.D. students worldwide researching machine learning who were named a 2025 Google Ph.D. 
Fellow."}],"uid":"36530","created_gmt":"2026-02-23 17:43:54","changed_gmt":"2026-03-20 12:53:05","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-02-23T00:00:00-05:00","iso_date":"2026-02-23T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679394":{"id":"679394","type":"image","title":"IMG_2942-copy-2.jpg","body":null,"created":"1771868657","gmt_created":"2026-02-23 17:44:17","changed":"1771868657","gmt_changed":"2026-02-23 17:44:17","alt":"George Stoica","file":{"fid":"263553","name":"IMG_2942-copy-2.jpg","image_path":"\/sites\/default\/files\/2026\/02\/23\/IMG_2942-copy-2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/02\/23\/IMG_2942-copy-2.jpg","mime":"image\/jpeg","size":112361,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/02\/23\/IMG_2942-copy-2.jpg?itok=KCVheh-u"}}},"media_ids":["679394"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"3165","name":"google"},{"id":"9143","name":"Graduate Research Fellowship"},{"id":"192863","name":"go-ai"},{"id":"9153","name":"Research Horizons"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[{"id":"71871","name":"Campus and Community"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"688487":{"#nid":"688487","#data":{"type":"news","title":"New Study Could Show How TikTok\u2019s Algorithm Affects Youth Mental 
Health","body":[{"value":"\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EMeta CEO Mark Zuckerberg\u0026nbsp;\u003Ca href=\u0022https:\/\/www.latimes.com\/california\/story\/2026-02-18\/mark-zuckerberg-tesimony-la-social-media-trial?utm_source=chatgpt.com\u0022\u003E\u003Cstrong\u003Etook the witness stand\u003C\/strong\u003E\u003C\/a\u003E last week in Los Angeles County Superior Court to defend his company from accusations that social media harms children.\u003C\/p\u003E\u003Cp\u003EA lawsuit filed by a 20-year-old plaintiff alleges Instagram and other social media apps are designed to make young users addicted to their platforms.\u003C\/p\u003E\u003Cp\u003EMeanwhile, social media experts believe the algorithms that drive content on these platforms play a role in hooking users and keeping them scrolling for extensive periods of time.\u003C\/p\u003E\u003Cp\u003EA new study led by Georgia Tech might confirm this suspicion.\u003C\/p\u003E\u003Cp\u003EUsing recently acquired data from more than 10,000 adolescent users,\u0026nbsp;\u003Ca href=\u0022http:\/\/www.munmund.net\/\u0022\u003E\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E\u003C\/a\u003E will audit TikTok\u2019s recommendation algorithm and study its impact on young people\u2019s behavior and mental health.\u003C\/p\u003E\u003Cp\u003EDe Choudhury is leading a multi-institutional research team on a four-year, $1.7 million grant from the Huo Family Foundation.\u003C\/p\u003E\u003Cp\u003E\u201cWe hope to learn the different types of negative exposures that young people experience when using TikTok,\u201d De Choudhury said. 
\u201cThis can help us characterize what they\u2019re watching and build computational methods to understand the consumption behaviors of these participants and how they\u2019re affected by the algorithm.\u201d\u003C\/p\u003E\u003Cp\u003EDe Choudhury, a professor in Georgia Tech\u2019s School of Interactive Computing, is collaborating with Amy Orben, a professor at the University of Cambridge, and Homa Hosseinmardi, an assistant professor at UCLA, on the project.\u003C\/p\u003E\u003Cp\u003ESocial media platforms have become increasingly reluctant to share their data in recent years, posing a challenge for researchers like De Choudhury.\u003C\/p\u003E\u003Cp\u003E\u201cWe can\u2019t do the type of studies we did 10 years ago with X (formerly Twitter) because the API is much more restrictive,\u201d she said. \u201cThere are limited ways to programmatically access people\u2019s data now.\u003C\/p\u003E\u003Cp\u003E\u201cWe must go through a tedious, manual process to get around declining access to social media data. This data-gathering process is essential given the sensitive nature of mental health research. You want data that is shared with consent.\u201d\u003C\/p\u003E\u003Cp\u003EOrben collected TikTok data from more than 10,000 young people in the UK who consented to provide their personal data archives in accordance with the European Union\u2019s General Data Protection Regulation (GDPR).\u003C\/p\u003E\u003Cp\u003EThe collected data includes watch histories, which De Choudhury said distinguishes this research from other social media studies that focus on what users post.\u003C\/p\u003E\u003Cp\u003E\u201cWe don\u2019t understand passive social media consumption very well, so we hope to close that gap and learn what that looks like,\u201d she said. \u201cThat could complement or contrast what we know about people\u2019s active engagement on these platforms. Is what they\u2019re consuming directly related to what they\u2019re posting? 
How does passive consumption affect young people\u2019s mental health?\u201d\u003C\/p\u003E\u003Cp\u003EA clearer picture of how algorithm-based content affects young people could result in design interventions to minimize negative effects. De Choudhury said studying data from young people is critical because it\u2019s not too late to steer them away from unhealthy behavioral patterns.\u003C\/p\u003E\u003Cp\u003E\u201cSome of the earliest signs or symptoms of mental health conditions appear in adolescence,\u201d she said. \u201cIf appropriate care and support are provided, maybe it\u2019s possible to prevent these symptoms from becoming full-blown in the future.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EBeyond TikTok\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EWhat the research team learns about TikTok could also provide broader insight into other social media platforms.\u003C\/p\u003E\u003Cp\u003ETikTok has been influential in how social media platforms display video content. Competitors like Instagram and X modeled their video presentation after TikTok\u2019s, which can easily lead to doomscrolling.\u003C\/p\u003E\u003Cp\u003E\u201cOur hope is that our findings can be generalized, with the caveat the data we have is exclusively from TikTok,\u201d De Choudhury said. \u201cOther platforms have similar video-sharing and consumption features where the video automatically plays from one to the next. 
We hope what we learn from TikTok will be applicable to people\u2019s activities elsewhere, though it will require future work beyond this project to draw concrete conclusions.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ESimulating Feeds with AI\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EDe Choudhury said an additional part of the study will be using artificial intelligence (AI) to simulate video feeds.\u003C\/p\u003E\u003Cp\u003EIn 2024, Hosseinmardi led a study at the University of Pennsylvania on YouTube\u2019s recommendation algorithm and used bots that either followed or ignored the recommendations.\u003C\/p\u003E\u003Cp\u003EDe Choudhury said they will use a similar method for TikTok.\u003C\/p\u003E\u003Cp\u003E\u201cThe feeds will be realistic but generated by AI to see the potential pathways to consumption rabbit holes,\u201d she said. \u201cThis should give us some insight into how algorithms influence the negative and positive exposures people might be having on TikTok.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EFoundation Expands Reach\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EBased in the UK and established in 2009, the Huo Family Foundation supports community education initiatives in the UK, the U.S., and China.\u003C\/p\u003E\u003Cp\u003EThe organization announced in January its launch of the Huo Family Foundation Science Programme.\u0026nbsp;\u003Ca href=\u0022https:\/\/huofamilyfoundation.org\/news\/updates\/huo-family-foundation-awards-17-6m-for-groundbreaking-research\/\u0022\u003E\u003Cstrong\u003EThe new program is committing $17.6 million to fund 20 new multi-year research grants\u003C\/strong\u003E\u003C\/a\u003E that explore the impact of digital technology on the brain development, social behavior, and mental health of young people.\u003C\/p\u003E\u003Cp\u003E\u201cDigital technology is profoundly shaping childhood and young adulthood, yet there is limited causal evidence of its effects,\u201d\u0026nbsp;said Yan 
Huo, founder of the Huo Family Foundation, in a press release.\u0026nbsp;\u201cWe are proud to support exceptional researchers advancing vital scientific understanding.\u201d\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cdiv\u003E\u003Cdiv dir=\u0022ltr\u0022\u003E\u003Cp\u003ELed by Georgia Tech professor Munmun De Choudhury, a multi-institutional research team is launching a $1.7 million study to examine how TikTok\u2019s recommendation algorithm influences the mental health of adolescent users. The project focuses on passive consumption by analyzing the watch histories of over 10,000 young participants and using AI to simulate content \u0022rabbit holes.\u0022 By identifying patterns of negative exposure, the researchers aim to develop design interventions that can steer teenagers away from unhealthy behavioral patterns and support early mental health care.\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A Georgia Tech-led research team is conducting a multi-year study using data from more than 10,000 adolescents to investigate how TikTok\u2019s recommendation algorithm and passive content consumption impact youth mental health."}],"uid":"36530","created_gmt":"2026-02-24 14:29:28","changed_gmt":"2026-03-20 12:52:52","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-02-24T00:00:00-05:00","iso_date":"2026-02-24T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679406":{"id":"679406","type":"image","title":"208A9267-2.jpg","body":null,"created":"1771943377","gmt_created":"2026-02-24 14:29:37","changed":"1771943377","gmt_changed":"2026-02-24 14:29:37","alt":"Munmun De 
Choudhury","file":{"fid":"263567","name":"208A9267-2.jpg","image_path":"\/sites\/default\/files\/2026\/02\/24\/208A9267-2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/02\/24\/208A9267-2.jpg","mime":"image\/jpeg","size":104533,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/02\/24\/208A9267-2.jpg?itok=3fEZjVVt"}}},"media_ids":["679406"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"143","name":"Digital Media and Entertainment"},{"id":"135","name":"Research"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"},{"id":"167543","name":"social media"},{"id":"190947","name":"tiktok"},{"id":"10343","name":"mental health"},{"id":"10824","name":"Children And Adolescents"},{"id":"5660","name":"algorithms"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71901","name":"Society and Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"688648":{"#nid":"688648","#data":{"type":"news","title":"New \u2018Touchable Sound\u2019 Museum Display Makes Data More Accessible","body":[{"value":"\u003Cp\u003EBlind and low vision (BLV) people may soon have access to and more easily understand scientific data in museum exhibits through new \u201ctouchable sound\u201d displays.\u003C\/p\u003E\u003Cp\u003EAssociate Professor Jessica Roberts and Ph.D. 
student Emily Amspoker of Georgia Tech\u2019s School of Interactive Computing are working with the \u003Ca href=\u0022https:\/\/gacoast.uga.edu\/\u0022\u003E\u003Cstrong\u003EUniversity of Georgia\u2019s Marine Extension and Georgia Sea Grant in Savannah\u003C\/strong\u003E\u003C\/a\u003E. Together, they\u2019ve developed a prototype display that uses sonification and texture to convey sea floor habitat information from \u003Ca href=\u0022https:\/\/graysreef.noaa.gov\/\u0022\u003E\u003Cstrong\u003EGray\u2019s Reef National Marine Sanctuary\u003C\/strong\u003E\u003C\/a\u003E off the coast of Georgia.\u003C\/p\u003E\u003Cp\u003ESonification is the process of translating data points into sound.\u003C\/p\u003E\u003Cp\u003EThe display functions as a map that BLV users can follow to learn about each habitat. It is made from a wooden board with laser-cut patterns engraved into the surface. Each pattern represents one of the four types of habitats found in Gray\u2019s Reef. Each pattern has a distinct sound that corresponds to a legend on the board, which provides an audio description of each habitat.\u003C\/p\u003E\u003Cp\u003EThe four habitats are:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EFlat sand \u2014 smooth sandy seafloor with little topographic variation that provides habitat for burrowing organisms such as worms, clams, and sand dollars.\u003C\/li\u003E\u003Cli\u003ERippled sand \u2014 sandy bottom shaped into small wave-like ridges by currents and wave action; supports microhabitats of small invertebrates and attracts fish feeding on buried prey.\u003C\/li\u003E\u003Cli\u003ESparse live bottom \u2014 areas of exposed hard surfaces with scattered attached organisms like sponges, corals, and algae, offering structure and shelter for reef-associated fish and invertebrates.\u003C\/li\u003E\u003Cli\u003EDense live bottom \u2014 hard-bottom reef areas with abundant attached marine life, providing high biodiversity and offering food and breeding sites for 
numerous species.\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EBy allowing learners to explore these habitats, the team hopes to emphasize the importance of protecting diverse ocean habitats.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cOur job was to figure out how we can use sounds and touch to represent each of the four habitat types so our visitors can explore the ocean without being able to see it,\u201d she said.\u003C\/p\u003E\u003Cp\u003ERoberts said the project is critical to advance understanding of how science and informal learning can be more inclusive to those who have difficulty processing visual data displays.\u003C\/p\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003E\u201cThis was particularly exciting to figure out how we could broaden accessibility to data sets because just like so much other scientific data, it\u2019s out there and available, but when it\u2019s presented to the public, it\u2019s usually in visual form,\u201d she said. \u201cThere are many open questions about how to do this well within a museum with complex scientific data. We\u2019re moving the needle on that, but there\u2019s a long way to go.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ERight Combination\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EAmspoker and Roberts created three different versions of the prototype. One was sound-only, one was texture-only, and the other was a combination of sound and texture.\u003C\/p\u003E\u003Cp\u003E\u201cWe expected the multimodal version would work best,\u201d Amspoker said. \u201cWe found people used sound and texture in different ways when interacting with it. In cases where people relied on texture, it was still difficult to tell when they crossed the barrier from one texture to another. 
Sound was very useful in that case.\u201d\u003C\/p\u003E\u003Cp\u003EAmspoker said computer vision and an app she designed allow the technology to be deployed on any surface, whether a mobile device, a wooden board, or even a classroom floor. A camera set up above the display tracks the user\u2019s hand movements.\u003C\/p\u003E\u003Cp\u003E\u201cIt figures out where you are on the board, and then our code uses the location of your finger to decide what sound should play from the computer,\u201d she said. \u201cWhat\u2019s nice about our system is it only needs a computer and a webcam, and you can use whatever materials you have on hand for the map.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EBuilding on a Legacy\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ERoberts said she is building on the work of a previous NSF-funded collaboration with Dr. Amy Bower, a senior scientist at the Woods Hole Oceanographic Institution in Massachusetts who is blind.\u003C\/p\u003E\u003Cp\u003EBower lost her vision in graduate school, but because of her lifelong interest in oceanography, she set out to create ways to learn about ocean data through sound.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EIn 2021, she launched the \u003Ca href=\u0022https:\/\/accessibleoceans.whoi.edu\/\u0022\u003E\u003Cstrong\u003EAccessible Oceans\u003C\/strong\u003E\u003C\/a\u003E project through the National Science Foundation\u2019s Advancing Informal STEM Learning program. 
The interdisciplinary team, including Roberts and collaborators Leslie Smith of Your Ocean Consulting and Jon Bellona of the University of Oregon, created auditory displays of sonified data for museums.\u003C\/p\u003E\u003Cp\u003EIn 2023, the team published \u003Ca href=\u0022https:\/\/tos.org\/oceanography\/article\/expanding-access-to-ocean-science-through-inclusively-designed-data-sonifications\u0022\u003E\u003Cstrong\u003Ean article in \u003C\/strong\u003E\u003Cem\u003E\u003Cstrong\u003EOceanography,\u003C\/strong\u003E\u003C\/em\u003E\u003Cstrong\u003E the official magazine of the Oceanography Society\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003E\u201cInformal learning environments are increasingly recognizing the importance of employing multiple modalities to engage all learners and are leveraging sound to enhance visitor experience,\u201d the authors wrote.\u003C\/p\u003E\u003Cp\u003E\u201cWhile sonic additions of music, soundscapes, and field recordings add qualitative value, there is a need to explore the potential of sound to facilitate engagement with quantitative information. Data sonification is a promising avenue for increasing accessibility to data within the museum context.\u201d\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers have created a prototype \u201ctouchable sound\u201d museum exhibit that helps blind and low-vision visitors explore scientific data by combining tactile maps with sonification of seafloor habitats. The display translates information about different ocean environments into distinctive textures and sounds so users can follow a physical map of Gray\u2019s Reef National Marine Sanctuary and hear data-driven audio cues. 
The team hopes this multimodal approach will make complex visual data more inclusive and broaden access to informal science learning.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers have developed a prototype \u201ctouchable sound\u201d museum display that uses sonification and tactile maps to make complex scientific data about ocean habitats more accessible to blind and low-vision visitors."}],"uid":"36530","created_gmt":"2026-03-03 15:13:03","changed_gmt":"2026-03-20 12:52:09","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-03-03T00:00:00-05:00","iso_date":"2026-03-03T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679503":{"id":"679503","type":"image","title":"2026-Jessica-Roberts-Reef-Data-Sonification-2.jpg","body":null,"created":"1772550793","gmt_created":"2026-03-03 15:13:13","changed":"1772550793","gmt_changed":"2026-03-03 15:13:13","alt":"Jessica Roberts","file":{"fid":"263675","name":"2026-Jessica-Roberts-Reef-Data-Sonification-2.jpg","image_path":"\/sites\/default\/files\/2026\/03\/03\/2026-Jessica-Roberts-Reef-Data-Sonification-2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/03\/03\/2026-Jessica-Roberts-Reef-Data-Sonification-2.jpg","mime":"image\/jpeg","size":118705,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/03\/03\/2026-Jessica-Roberts-Reef-Data-Sonification-2.jpg?itok=UaqIj7yh"}}},"media_ids":["679503"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"360","name":"accessibility"},{"id":"194701","name":"go-resarchnews"},{"id":"9153","name":"Research 
Horizons"},{"id":"9092","name":"museums"},{"id":"181370","name":"oceanography"},{"id":"176552","name":"data sonification"},{"id":"1102","name":"blind"},{"id":"2751","name":"visually impaired"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"688916":{"#nid":"688916","#data":{"type":"news","title":" Undergrads Earn National Recognition for Computing Research","body":[{"value":"\u003Cp\u003ETwo Georgia Tech undergraduates are being recognized for their contributions to computing research.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ERyan\u0026nbsp;Punamiya\u003C\/strong\u003E\u0026nbsp;(CS 2025)\u0026nbsp;and \u003Cstrong\u003ESummer Abramson\u003C\/strong\u003E, a third-year\u0026nbsp;computational\u0026nbsp;media student, have been honored by the Computing Research Association (CRA) through its 2025\u20132026 \u003Ca href=\u0022https:\/\/cra.org\/about\/awards\/outstanding-undergraduate-researcher-award\/\u0022\u003E\u003Cstrong\u003EOutstanding Undergraduate Researcher Award (URA) program.\u0026nbsp;\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003EPunamiya\u0026nbsp;was named a runner-up for the prestigious award, while Abramson received an honorable mention among hundreds of applicants from universities across North America.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe\u0026nbsp;\u003Ca href=\u0022https:\/\/cra.org\/about\/awards\/outstanding-undergraduate-researcher-award\/\u0022\u003E\u003Cstrong\u003ECRA Outstanding Undergraduate Researcher Award program\u003C\/strong\u003E\u003C\/a\u003E\u0026nbsp;recognized eight awardees in 2026, along with eight runners-up, nine finalists, and over 200 honorable mentions from thousands of 
applications.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EAdvancing\u0026nbsp;Robotics Research\u0026nbsp;\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EPunamiya\u0026nbsp;knew early on that he\u0026nbsp;didn\u2019t\u0026nbsp;want to wait until starting his Ph.D. to do meaningful and impactful robotics research.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EPunamiya\u0026nbsp;joined the Robot Learning and Reasoning Lab (RL2) directed by Assistant Professor\u0026nbsp;Danfei\u0026nbsp;Xu. While there, he contributed to the lab\u2019s Meta-sponsored\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/new-algorithm-teaches-robots-through-human-perspective\u0022\u003E\u003Cstrong\u003EEgoMimic\u003C\/strong\u003E\u003C\/a\u003E\u0026nbsp;project, which trains robots to perform human tasks using recordings captured by Meta\u2019s Project Aria research glasses.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EPunamiya\u0026nbsp;is\u0026nbsp;also the first author of a paper accepted to the 2025 Conference on Neural Information Processing Systems (NeurIPS),\u0026nbsp;one of the world\u2019s most prestigious artificial intelligence (AI) and machine learning conferences.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cRyan is the strongest undergraduate I\u0027ve worked with,\u201d Xu said, \u201cincluding students who went on to Stanford, Berkeley, and leadership roles in major tech companies.\u0026nbsp;He\u2019s\u0026nbsp;already\u0026nbsp;operating\u0026nbsp;at the level of a strong\u0026nbsp;third-year Ph.D.\u0026nbsp;student.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EPunamiya\u0026nbsp;said it was a challenge to balance his undergraduate coursework with his research in Xu\u2019s lab.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cYou get out how much you put in,\u201d\u0026nbsp;he\u0026nbsp;said.\u0026nbsp;\u201cI built my class schedule to give myself as much time to do research as possible. 
It also boils down to having the right research mentors.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201c(Xu) never saw me as an\u0026nbsp;undergrad\u0026nbsp;who\u2019s\u0026nbsp;just there to do grunt work. I was\u0026nbsp;fortunate\u0026nbsp;he saw my curiosity and cultivated me as a researcher.\u0026nbsp;That\u2019s\u0026nbsp;really how\u0026nbsp;you get more\u0026nbsp;undergrads\u0026nbsp;motivated to research \u2014 giving them the chance to be independent and explore ideas of their own.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EPunamiya\u0026nbsp;said his work in Xu\u2019s lab has already helped him identify the research areas he wants to focus on as he considers his next steps. He will continue developing generalized training models for robots using human data so they can perform tasks instantly upon deployment.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0022The amount of data needed to train a robot is difficult to obtain even for top industry companies,\u0022 he said. \u0022We have embodied robot data available in billions of humans. With the advent of extended reality devices, we can get a scalable source of diverse interactions within environments.\u0022\u003C\/p\u003E\u003Cp\u003EPunamiya\u0026nbsp;graduated in December and recently started an internship at Nvidia. He mentioned he has been accepted into several Ph.D. programs, including Georgia Tech, and he is choosing where to continue his research.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s the first time my research has been\u0026nbsp;acknowledged\u0026nbsp;externally by the robotics community,\u201d he said. \u201cIt\u2019s\u0026nbsp;good to\u0026nbsp;know\u0026nbsp;the problem\u0026nbsp;I\u2019m\u0026nbsp;working on is important, and that motivates me. Robotics is an exciting field. 
We are doing things now that two years ago were difficult to do.\u201d\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EResearching Inclusion in Computing Education\u0026nbsp;\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EAbramson conducts research in the People-Agents Research for Computing Education (PARCE) Laboratory under the mentorship of\u0026nbsp;Pedro Guillermo Feij\u00f3o-Garc\u00eda, a faculty member\u0026nbsp;in the School of Computing Instruction. He and the Associate Dean for Undergraduate Education, Olufisayo Omojokun, nominated her for the award.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHer work focuses on the intersection of computing education and human-AI interaction, where she\u2019s been exploring ways to create more equitable technology.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThis is such a huge milestone, and I couldn\u0027t be prouder of Summer,\u201d Feij\u00f3o-Garc\u00eda said. \u201cMentoring her for almost two years has been an amazing experience.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAbramson has received the Georgia Tech President\u2019s Undergraduate Research Award (PURA) twice, which supports her research exploring how user-centered design curricula can help address attrition among women in computing.\u003C\/p\u003E\u003Cp\u003E\u201cI\u2019ve had the amazing opportunity to pursue research at the intersection of student identity, community belonging, and how we can build tools that support our diverse student population,\u201d Abramson said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cDr. Pedro and I have a goal to build community through a human-first approach, and I could not be more grateful for his support and guidance in my own journey. 
The CRA highlights the best of what the computing discipline has to offer, and I am incredibly honored for our work to be recognized.\u201d\u003C\/p\u003E\u003Cp\u003EAbramson will spend the summer researching how user-centered design curricula can help promote confidence, belonging, and retention for women in computing.\u003C\/p\u003E\u003Cp\u003ENominees for the PURA program were recognized for contributing to multiple research projects, authoring or coauthoring papers, presenting at conferences, developing widely used software artifacts, and supporting their communities as teaching assistants, tutors, and mentors.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003ESchool of Computing Instruction Communications Officer Emily Smith contributed to this story.\u003C\/em\u003E\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003EMain Photo: Ryan Punamiya works with a robot during the 2025 International Conference on Robotics and Automation in Atlanta. Photo by Terence Rushin\/College of Computing.\u003C\/em\u003E\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003E\u003Cstrong\u003ERyan\u0026nbsp;Punamiya\u003C\/strong\u003E\u0026nbsp;(CS 2025)\u0026nbsp;and \u003Cstrong\u003ESummer Abramson\u003C\/strong\u003E, a third-year\u0026nbsp;computational\u0026nbsp;media student, have been honored by the Computing Research Association (CRA) through its 2025\u20132026 \u003Ca href=\u0022https:\/\/cra.org\/about\/awards\/outstanding-undergraduate-researcher-award\/\u0022\u003E\u003Cstrong\u003EOutstanding Undergraduate Researcher Award (URA) program.\u0026nbsp;\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003EPunamiya\u0026nbsp;was named a runner-up for the prestigious award, while Abramson received an honorable mention among hundreds of applicants from universities across North America.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe\u0026nbsp;\u003Ca 
href=\u0022https:\/\/cra.org\/about\/awards\/outstanding-undergraduate-researcher-award\/\u0022\u003E\u003Cstrong\u003ECRA Outstanding Undergraduate Researcher Award program\u003C\/strong\u003E\u003C\/a\u003E\u0026nbsp;recognized eight awardees in 2026, along with eight runners-up, nine finalists, and over 200 honorable mentions from thousands of applications.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Ryan Punamiya (CS 2025) and Summer Abramson, a third-year computational media student, have been honored by the Computing Research Association (CRA) through its 2025\u20132026 Outstanding Undergraduate Researcher Award (URA) program. "}],"uid":"36530","created_gmt":"2026-03-13 14:57:26","changed_gmt":"2026-03-20 12:51:21","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-03-13T00:00:00-04:00","iso_date":"2026-03-13T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679613":{"id":"679613","type":"image","title":"ICRA-2025_P9A0421-Enhanced-NR.jpg","body":null,"created":"1773413856","gmt_created":"2026-03-13 14:57:36","changed":"1773413856","gmt_changed":"2026-03-13 14:57:36","alt":"Ryan Punamiya","file":{"fid":"263795","name":"ICRA-2025_P9A0421-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2026\/03\/13\/ICRA-2025_P9A0421-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/03\/13\/ICRA-2025_P9A0421-Enhanced-NR.jpg","mime":"image\/jpeg","size":133995,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/03\/13\/ICRA-2025_P9A0421-Enhanced-NR.jpg?itok=r8p0C5IW"}}},"media_ids":["679613"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and 
Security"},{"id":"135","name":"Research"},{"id":"193158","name":"Student Competition Winners (academic, innovation, and research)"},{"id":"193157","name":"Student Honors and Achievements"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"},{"id":"101271","name":"Computing Research Association"},{"id":"22861","name":"undergraduate research awards"}],"core_research_areas":[],"news_room_topics":[{"id":"71871","name":"Campus and Community"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"687358":{"#nid":"687358","#data":{"type":"news","title":"New LLMs Could Provide Strength-based Job Coaching for Autistic People","body":[{"value":"\u003Cp\u003EPeople with autism seeking employment may soon have access to a new AI-based job-coaching tool thanks to a six-figure grant from the National Science Foundation (NSF).\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/jennifer-kim\u0022\u003E\u003Cstrong\u003EJennifer Kim\u003C\/strong\u003E\u003C\/a\u003E and\u0026nbsp;\u003Ca href=\u0022https:\/\/eilab.gatech.edu\/mark-riedl.html\u0022\u003E\u003Cstrong\u003EMark Riedl\u003C\/strong\u003E\u003C\/a\u003E recently received a $500,000 NSF grant to develop large language models (LLMs) that provide strength-based job coaching for autistic job seekers.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe two Georgia Tech researchers work with\u0026nbsp;\u003Ca href=\u0022https:\/\/excel.gatech.edu\/excel-staff\/heather-dicks\u0022\u003E\u003Cstrong\u003EHeather Dicks\u003C\/strong\u003E\u003C\/a\u003E, a career development advisor in Georgia Tech\u2019s EXCEL program, and other nonprofit organizations to provide job-seeking resources to autistic people.\u003C\/p\u003E\u003Cp\u003EDicks said the average job 
search for people with autism can take three to six months in a good economy. It can take up to 18 months in a bad one. However, the new LLMs from Georgia Tech could help to reduce stress and fast-track these job seekers into employment.\u003C\/p\u003E\u003Cp\u003EKim is an assistant professor who specializes in human-computer interaction technology that benefits neurodivergent people. Riedl is a professor and an expert in the development of artificial intelligence (AI) and machine learning technologies.\u003C\/p\u003E\u003Cp\u003EThe team\u2019s goal is to identify job-search pain points and understand how job coaches create better employment prospects for their autistic clients.\u003C\/p\u003E\u003Cp\u003E\u201cLarge-language models have an opportunity to support this kind of work if we can have more data about each different individual strength,\u201d Kim said.\u003C\/p\u003E\u003Cp\u003E\u201cWe want to know what worked for them in specific settings at work, what didn\u2019t work, and what kind of accommodations can better help them. That includes how they should prepare for interviews, how they can better represent their skills, how they can address accommodations they need, and how to write a cover letter. It\u2019s a broad range.\u201d\u003C\/p\u003E\u003Cp\u003EDicks has advocated for neurodivergent people and helped them find employment for 20 years. She worked at the Center for the Visually Impaired in Atlanta before coming to Georgia Tech in 2017.\u003C\/p\u003E\u003Cp\u003EShe said most nonprofits that support neurodivergent people offer career development programs and many contract job coaches, but limited coach availability often leads to long waitlists. However, LLMs could fill this availability gap to address the immediate needs of job seekers who may not have access to a job coach.\u003C\/p\u003E\u003Cp\u003E\u201cThese organizations often run at a slow pace, and there\u2019s high turnover,\u201d Dicks said. 
\u201cAn AI tool could get the job seeker quicker support. Maybe they don\u2019t even need to wait on the government system.\u003C\/p\u003E\u003Cp\u003E\u201cIf they\u2019re on a waitlist, it can help the user put together a resume and practice general interview questions. When the job coach is ready to work with them, they\u2019re able to hit the ground running.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ENailing the Interview\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EDicks said the job interview is one of the biggest challenges for people with autism.\u003C\/p\u003E\u003Cp\u003E\u201cThey have trouble picking up on visual and nonverbal cues \u2014 the tone of the interview, figuring out the nuances that a question is hinting at,\u201d she said. \u201cThey\u2019re not giving the warm and fuzzy vibes that allow them to connect on a personal level.\u201d\u003C\/p\u003E\u003Cp\u003EThat\u2019s why Kim wants the models to reflect a strength-based coaching approach. Strength-based coaching is particularly effective for individuals with autism. Many possess traits that employers value. These include:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EClose attention to detail\u003C\/li\u003E\u003Cli\u003EStrong technical proficiency\u003C\/li\u003E\u003Cli\u003EUnique problem-solving perspectives\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003E\u201cThe issue is that they don\u2019t know how these strengths can be applied in the workplace,\u201d Kim said. \u201cOnce they understand this, they can communicate with employers about their strengths and the accommodations employers should provide to the job seeker so they can successfully apply their skills at work.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EHandling Rejection\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EStill, Kim understands that candidates will need to handle rejection to make it through the search process. 
She envisions LLMs that help them refocus their energy and regain their confidence after being turned down.\u003C\/p\u003E\u003Cp\u003E\u201cWhen you get a lot of rejection emails, it\u2019s easy to feel you\u2019re not good enough,\u201d she said. \u201cBeing constantly reminded about your strengths and their prior successes can get them through the stressful job-seeking process.\u201d\u003C\/p\u003E\u003Cp\u003EDicks said the models should also be able to provide feedback so that candidates don\u2019t repeat mistakes.\u003C\/p\u003E\u003Cp\u003E\u201cIt can tell them what would\u2019ve been a better answer or a better way to say it,\u201d Dicks said. \u201cIt can also encourage them with reminders that you get 100 noes before you get a yes.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EYou\u2019re Hired, Now What?\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EDicks said the role of a job coach doesn\u2019t end the moment a client is hired. Government-contracted job coaches may work with their clients for up to 90 days after they start a new job to support their transition.\u003C\/p\u003E\u003Cp\u003EHowever, she said, sometimes that isn\u2019t enough. Many companies have probationary periods exceeding three months. 
Autistic individuals may struggle with on-the-job training or communicating what accommodations they need from their new employer.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThese are just a few gaps an AI tool can fill for these individuals after they\u2019re hired.\u003C\/p\u003E\u003Cp\u003E\u201cI could see these models evolving to being supportive at those critical junctures of the probationary period being over or the one-year job review or the annual evaluation that everyone dreads,\u201d she said.\u003C\/p\u003E\u003Cp\u003EDicks has an average caseload of 15 students, whom she assists in landing jobs and internships through the EXCEL program.\u003C\/p\u003E\u003Cp\u003EEXCEL provides a mentorship program for students with intellectual and developmental disabilities from the time they set foot on campus through graduation and beyond.\u003C\/p\u003E\u003Cp\u003EFor more information and to apply, visit EXCEL\u2019s\u0026nbsp;\u003Ca href=\u0022https:\/\/excel.gatech.edu\/home\u0022\u003E\u003Cstrong\u003Ewebsite\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers are using an NSF grant to create new large-language models that help autistic job seekers understand their strengths and how to leverage them during the application process.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers are using an NSF grant to create new large-language models that help autistic job seekers understand their strengths and how to leverage them during the application process."}],"uid":"36530","created_gmt":"2026-01-15 19:04:04","changed_gmt":"2026-01-22 13:41:09","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, 
GA","dateline":{"date":"2026-01-15T00:00:00-05:00","iso_date":"2026-01-15T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679012":{"id":"679012","type":"image","title":"Jennifer-Kim_86A4154-copy.jpg","body":null,"created":"1768503854","gmt_created":"2026-01-15 19:04:14","changed":"1768503854","gmt_changed":"2026-01-15 19:04:14","alt":"Jennifer Kim","file":{"fid":"263123","name":"Jennifer-Kim_86A4154-copy.jpg","image_path":"\/sites\/default\/files\/2026\/01\/15\/Jennifer-Kim_86A4154-copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/01\/15\/Jennifer-Kim_86A4154-copy.jpg","mime":"image\/jpeg","size":71820,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/01\/15\/Jennifer-Kim_86A4154-copy.jpg?itok=hbn_0e9T"}}},"media_ids":["679012"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"42901","name":"Community"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"6053","name":"Autism"},{"id":"191680","name":"neurodiverse"},{"id":"780","name":"employment"},{"id":"174112","name":"excel program"},{"id":"192863","name":"go-ai"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"193556","name":"large language models"},{"id":"7011","name":"NSF grant"},{"id":"6957","name":"Job Search"},{"id":"13786","name":"job search strategies"},{"id":"194701","name":"go-resarchnews"},{"id":"9153","name":"Research Horizons"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71901","name":"Society and 
Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"686615":{"#nid":"686615","#data":{"type":"news","title":"Researchers Look to Maker Safer AI Through Google Awards","body":[{"value":"\u003Cp\u003EPeople seeking mental health support are increasingly turning to large language models (LLMs) for advice.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHowever, most popular AI-powered chatbots are not trained to recognize when someone is in crisis. LLMs also cannot determine when to refer someone to a human specialist.\u003C\/p\u003E\u003Cp\u003ENew Georgia Tech research projects that address these issues may soon provide people seeking mental health support with safer experiences.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EGoogle has awarded research grants to three faculty members from the School of Interactive Computing to study artificial intelligence (AI), trust, safety, and security. 
The grants were among dozens awarded by the company to researchers across the country.\u003C\/p\u003E\u003Cp\u003EProfessor \u003Ca href=\u0022http:\/\/www.munmund.net\/\u0022\u003E\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E\u003C\/a\u003E, Associate Professor \u003Ca href=\u0022https:\/\/sites.google.com\/view\/riarriaga\/home\u0022\u003E\u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E\u003C\/a\u003E, and Associate Professor \u003Ca href=\u0022https:\/\/aritter.github.io\/\u0022\u003E\u003Cstrong\u003EAlan Ritter\u003C\/strong\u003E\u003C\/a\u003E are among the recipients of the \u003Ca href=\u0022https:\/\/research.google\/programs-and-events\/google-academic-research-awards\/google-academic-research-award-program-recipients\/\u0022\u003E\u003Cstrong\u003E2025 Google Academic Research Awards\u003C\/strong\u003E\u003C\/a\u003E.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ETheir projects will explore questions like:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EWhat harms could occur if people consult LLMs for mental health advice?\u003C\/li\u003E\u003Cli\u003EWhich groups are most at risk of receiving harmful guidance?\u003C\/li\u003E\u003Cli\u003EWhen should an LLM stop responding and refer someone to a human professional?\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EDe Choudhury and Arriaga will examine how LLMs might harm people seeking mental health care.\u003C\/p\u003E\u003Cp\u003EDe Choudhury\u2019s work focuses on spotting when chatbot conversations go wrong and lead users toward self-harm. She is also studying design changes that could prevent these situations.\u003C\/p\u003E\u003Cp\u003EHer project,\u0026nbsp;\u003Cem\u003EExiting Harmful Reliance: Identifying Crises \u0026amp; Care Escalation Needs\u003C\/em\u003E, is in partnership with Angel Hsing-Chi Hwang from the University of Southern California. 
Together, they will review real and synthetic chat transcripts with clinicians to find language patterns that signal risk.\u003C\/p\u003E\u003Cp\u003E\u201cA chatbot will always give a response and keep talking to you for however long you want,\u201d De Choudhury said. \u201cThat may not be a good thing for someone in crisis. We need to know when the right response is to stop and suggest talking to a human.\u201d\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EUnderstanding Risks for Low-Income Users\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EArriaga\u2019s project,\u0026nbsp;\u003Cem\u003EDull, Dirty, Dangerous: Investigating Trust of Digital Resources Among Low-SES Mental Health Care Seekers\u003C\/em\u003E, looks at how LLMs affect people with low socioeconomic status (SES).\u003C\/p\u003E\u003Cp\u003EDull, dirty, and dangerous is a phrase used to describe work that is well-suited for robot automation because such tasks are repetitive, physically taxing, or hazardous for humans. Arriaga said she adapted these terms for her research to create a taxonomy of the harms AI can cause to people seeking mental health care.\u003C\/p\u003E\u003Cp\u003EArriaga also wants to identify the trust factors that attract low-SES users to chatbots for advice, and how these may differ for adults and adolescents across contexts.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWe know one of the reasons some users go to LLMs is because they aren\u2019t insured and can\u2019t afford a therapist,\u201d she said. \u201cLLMs are available 24-7. Maybe it doesn\u2019t start as a trust issue. Maybe it starts with availability.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cSome of these human-AI conversations that result in harmful mental health advice didn\u2019t begin on the topic of mental health. 
In one case, the person started going to the machine for help with homework.\u003C\/p\u003E\u003Cp\u003E\u201cThen this relationship evolved into personal matters. Should we constrain the system to limit itself to helping someone with their homework and not wander off that subject into mental health matters?\u201d\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EManaging Privacy Risks for Social Media\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ERitter will use the Google award to advance research on social media privacy tools, including interactive AI agents that help people make more informed decisions about what they share online.\u003C\/p\u003E\u003Cp\u003EHis project, \u003Cem\u003EAI Tools to Help Users Make Informed Decisions About Online Information Sharing\u003C\/em\u003E, focuses on reducing privacy risks in both text and images by identifying when posts reveal more than users intend.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019ve been developing methods to assess risks in text, and now we\u2019re extending that work to images,\u201d Ritter said. \u201cPeople post photos without realizing how easily they can be geolocated by advanced AI systems. A casual selfie near home might contain subtle cues about where you live, like a street sign, that reveal private details.\u201d\u003C\/p\u003E\u003Cp\u003EThe project aims to create AI agents that review content within user posts, flag elements that pose risk, and suggest safer alternatives. Ritter said he wants people to maintain control over their privacy without limiting freedom of expression.\u003C\/p\u003E\u003Cp\u003ERitter will deploy advanced reasoning models capable of probabilistic privacy estimation. 
These systems can infer how identifiable a piece of text might be or how likely an image is to reveal a user\u2019s location.\u003C\/p\u003E\u003Cp\u003EFor images, Ritter and his collaborators will use models that identify geolocatable features, allowing users to edit or hide them before posting.\u003C\/p\u003E\u003Cp\u003EFor more on Ritter\u2019s research,\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/new-large-language-model-can-protect-social-media-users-privacy\u0022\u003E\u003Cstrong\u003Eread how an LLM he co-developed protects the privacy of users on social media.\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThree Georgia Tech faculty members from the School of Interactive Computing received Google Academic Research Awards to study how to make AI safer, focusing on minimizing harm to users seeking \u003Cstrong\u003Emental health support\u003C\/strong\u003E from large language models (LLMs) and improving \u003Cstrong\u003Esocial media privacy\u003C\/strong\u003E tools.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Three Georgia Tech faculty members received Google Academic Research Awards to study how to make AI safer."}],"uid":"36530","created_gmt":"2025-11-24 20:28:32","changed_gmt":"2026-01-09 13:38:21","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-11-24T00:00:00-05:00","iso_date":"2025-11-24T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678716":{"id":"678716","type":"image","title":"437249_Google-Research-Award-Graphic.jpg","body":null,"created":"1764016128","gmt_created":"2025-11-24 20:28:48","changed":"1764016128","gmt_changed":"2025-11-24 20:28:48","alt":"Google Research 
Awards","file":{"fid":"262784","name":"437249_Google-Research-Award-Graphic.jpg","image_path":"\/sites\/default\/files\/2025\/11\/24\/437249_Google-Research-Award-Graphic.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/11\/24\/437249_Google-Research-Award-Graphic.jpg","mime":"image\/jpeg","size":120957,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/11\/24\/437249_Google-Research-Award-Graphic.jpg?itok=QmSwvwkp"}}},"media_ids":["678716"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"194701","name":"go-resarchnews"},{"id":"9153","name":"Research Horizons"},{"id":"192863","name":"go-ai"},{"id":"193860","name":"Artifical Intelligence"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"192524","name":"ChatGPT"},{"id":"184554","name":"Google Research Award"},{"id":"167007","name":"health \u0026 well-being"},{"id":"10343","name":"mental health"},{"id":"169137","name":"chatbot"},{"id":"167543","name":"social media"},{"id":"114791","name":"Data Privacy"}],"core_research_areas":[],"news_room_topics":[{"id":"71901","name":"Society and Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"686884":{"#nid":"686884","#data":{"type":"news","title":"Students Collaborating with Nonprofit to Reduce Bird Collisions with Buildings","body":[{"value":"\u003Cp\u003EIn 2015, before the cleaning crews hit the sidewalks of downtown Atlanta and before scavenger animals arose to snag an easy meal, Adam Betuel would venture into the darkness 
of the early mornings to look for birds.\u003C\/p\u003E\u003Cp\u003ESome were still alive, but most of the birds were dead. They were all too easy to find.\u003C\/p\u003E\u003Cp\u003E\u201cI knew birds hit buildings, but I didn\u2019t know much more about the issue at that time, and I was surprised how easily I just found birds,\u201d Betuel said.\u003C\/p\u003E\u003Cp\u003EBirds flying into windows aren\u2019t isolated events. Environmentalists estimate between 365 million and one billion birds die each year from colliding with structures in the U.S. \u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThat statistic is hard for most people to comprehend,\u201d Betuel said. \u201cWhen you think about the millions of homes we have and these high-rise buildings, and if each one is killing a few a year, that number can get big pretty quick.\u201d\u003C\/p\u003E\u003Cp\u003EBetuel is the executive director of\u0026nbsp;\u003Ca href=\u0022https:\/\/www.birdsgeorgia.org\/mission-and-programs.html\u0022\u003E\u003Cstrong\u003EBirds Georgia\u003C\/strong\u003E\u003C\/a\u003E, a nonprofit affiliate of the Audubon network that leads bird conservation efforts in Georgia. For 10 years, volunteers from the organization have combed Atlanta\u2019s streets, collecting bird specimens.\u003C\/p\u003E\u003Cp\u003EBirds Georgia launched Project Safe Flight in 2015 to reduce bird building-collision mortality through data collection. Through legislation, the group aims to make building construction bird-friendly and reduce light pollution.\u003C\/p\u003E\u003Cp\u003EEnvironmentalists who study the issue have ranked Atlanta, which sits squarely on a migration route, as the fourth-most dangerous city for birds during fall migration. It is the ninth-most dangerous city during spring migration.\u003C\/p\u003E\u003Cp\u003EThe number of bird deaths from collisions in Atlanta and across the state remains unknown. 
However, new data tools developed by student researchers in the College of Computing at Georgia Tech are helping Birds Georgia get a clearer picture of the issue.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019ve been working with different folks at Georgia Tech for years now, but it\u2019s really picked up lately,\u201d Betuel said. \u201cThere\u2019s a lot of momentum and interest on campus to try to make the city safer for birds.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EPushing for Policy\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/abooneportfolio.com\/\u0022\u003E\u003Cstrong\u003EAshley Boone\u003C\/strong\u003E\u003C\/a\u003E, a Ph.D. student in human-centered computing in Tech\u2019s School of Interactive Computing, has led the student effort to help Birds Georgia organize its data.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EBoone said organizing data and knowing how to use it is critical to spark conversations about adopting legislation.\u003C\/p\u003E\u003Cp\u003E\u201cWe often see a gap between data collection and data advocacy,\u201d she said. \u201cBirds Georgia has done an amazing job of tracking collisions in Atlanta over the last 10 years. My goal is to understand the role technology can play in making data useful for policy change.\u201d\u003C\/p\u003E\u003Cp\u003EUser-interface tools designed by computer science undergraduate students James Kemerait and Ian Wood have\u0026nbsp;ramped\u0026nbsp;up that process. One tool converts data input into visualizations optimized for social media, while another consolidates the data collected by volunteers and external sources.\u003C\/p\u003E\u003Cp\u003EBoone said the desired legislation would mirror policies implemented by New York City. 
Those policies require the use of bird-safe materials \u2014 like window film with patterned designs that break up reflections \u2014 in new buildings and buildings undergoing significant renovations.\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EWhat Can Residents Do?\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EResidents, whose homes account for about 40% of bird collision deaths in the U.S., can also make an impact.\u003C\/p\u003E\u003Cp\u003E\u201cHouseholds are an underexamined cause of bird collisions,\u201d Boone said. \u201cWe focus on the big buildings because it\u2019s easier to convince one manager of a large building to use bird-safe materials, and it\u2019s easier for a policy to address a commercial building. But the sheer volume of residential buildings in the U.S. has a tremendous impact on the number of collisions.\u201d\u003C\/p\u003E\u003Cp\u003ESteps that homeowners can take include:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EBuying bird-safe film or making do-it-yourself versions of it to put on windows.\u003C\/li\u003E\u003Cli\u003EPlacing attractive objects like birdhouses and birdfeeders either very close to or very far away from windows.\u003C\/li\u003E\u003Cli\u003ETurning off lights after 9 p.m. on the busiest migration nights of the year.\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EBetuel said millions of birds can fly over Atlanta on a single night during migration, and they are attracted to the city lights.\u003C\/p\u003E\u003Cp\u003E\u201cThey\u2019ll come into urban centers and collide with an illuminated building, or maybe they overnight somewhere that isn\u2019t safe,\u201d he said. 
\u201cThe next day, they\u2019re surrounded by glass, and birds don\u2019t understand reflection.\u201d\u003C\/p\u003E\u003Cp\u003EResidents can visit the Birds Georgia website to sign up for the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.birdsgeorgia.org\/lights-out-georgia.html\u0022\u003E\u003Cstrong\u003ELights Out Pledge\u003C\/strong\u003E\u003C\/a\u003E. Those who sign up will receive a text on the 10 busiest migratory nights of the year, and they will be asked to turn their lights off early.\u003C\/p\u003E\u003Cp\u003EThe tools provided by Georgia Tech gave Birds Georgia insight into the number of bird species affected by collisions \u2014 more than 140, according to Betuel.\u003C\/p\u003E\u003Cp\u003EBetuel said that when the organization reaches an estimate of bird collisions, he hopes the number will raise alarms and turn people\u2019s attention to the ecological impact.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cAll these birds being lost results in fewer birds to eat pest insects, fewer birds to pollinate flowers, fewer birds to disperse seeds \u2014 all the ecological functions that we need, that they\u2019re doing in the background that most people aren\u2019t keen to,\u201d he said. \u201cIf this decline in bird life continues to happen, at some point, there will be issues with our ecosystems functioning as they always have.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EAtlanta is one of the country\u0027s deadliest cities for migratory birds. 
Human-centered computing students in Georgia Tech\u2019s School of Interactive Computing are helping Birds Georgia organize its data to better understand how to reduce the likelihood of birds flying into tall buildings.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Interactive computing students are developing new data tools to reduce bird\/building strikes in Atlanta, which is among the country\u0027s deadliest cities for migratory birds."}],"uid":"32045","created_gmt":"2025-12-12 22:04:38","changed_gmt":"2026-01-09 13:35:54","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-12-12T00:00:00-05:00","iso_date":"2025-12-12T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678838":{"id":"678838","type":"image","title":"Georgia Tech human-centered computing Ph.D. student Ashley Boone is building data tools to reduce the likelihood of birds flying into buildings.","body":null,"created":"1765577088","gmt_created":"2025-12-12 22:04:48","changed":"1765577088","gmt_changed":"2025-12-12 22:04:48","alt":"Georgia Tech human-centered computing Ph.D. 
student Ashley Boone is building data tools to reduce the likelihood of birds flying into buildings.","file":{"fid":"262927","name":"Ashley-Boone_86A1373-copy.jpg","image_path":"\/sites\/default\/files\/2025\/12\/12\/Ashley-Boone_86A1373-copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/12\/12\/Ashley-Boone_86A1373-copy.jpg","mime":"image\/jpeg","size":66310,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/12\/12\/Ashley-Boone_86A1373-copy.jpg?itok=iPD3xf3i"}}},"media_ids":["678838"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"142","name":"City Planning, Transportation, and Urban Growth"},{"id":"42901","name":"Community"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"10199","name":"Daily Digest"},{"id":"181991","name":"Georgia Tech News Center"}],"core_research_areas":[],"news_room_topics":[{"id":"71911","name":"Earth and Environment"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer I\u003C\/p\u003E\u003Cp\u003EGeorgia Tech School of Interactive Computing\u003C\/p\u003E\u003Cp\u003Endeen6@gatech.edu\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"686517":{"#nid":"686517","#data":{"type":"news","title":"Ph.D. Student Making Digital Maps That Blind People Can Hear","body":[{"value":"\u003Cp\u003E\u201cMap region. Graphic clickable. 
Blank.\u201d\u003C\/p\u003E\u003Cp\u003EThat\u2019s usually the only information \u003Ca href=\u0022https:\/\/brandonkeithbiggs.com\/\u0022\u003E\u003Cstrong\u003EBrandon Biggs\u003C\/strong\u003E\u003C\/a\u003E receives from digital maps.\u003C\/p\u003E\u003Cp\u003EBiggs is a human-centered computing Ph.D. student in Georgia Tech\u2019s School of Interactive Computing. He is almost totally blind due to Leber\u2019s Congenital Amaurosis (LCA), a rare degenerative eye disorder affecting about one in 40,000 people.\u003C\/p\u003E\u003Cp\u003EBased on his experience, Biggs argues that most digital maps aren\u2019t accessible to people who are blind. Even worse, he said, the needs of the blind are usually overlooked.\u003C\/p\u003E\u003Cp\u003E\u201cWhen I started research on maps, I had never viewed a weather, campus, or building map, so I didn\u2019t realize the amount of information maps contain,\u201d Biggs said. \u201cHow do you represent shapes, orientation, and layout through audio and translate that into a geographic map?\u201d\u003C\/p\u003E\u003Cp\u003ETo answer these questions, Biggs founded \u003Ca href=\u0022https:\/\/xrnavigation.io\/\u0022\u003E\u003Cstrong\u003EXRNavigation\u003C\/strong\u003E\u003C\/a\u003E, a company focused on developing accessible digital tools. Its flagship product, Audiom, is a cross-sensory map that people can see and hear through text.\u003C\/p\u003E\u003Cp\u003E\u201cSighted people view about 300 maps per year, while blind people view fewer than one,\u201d he said. 
\u201cBlind people don\u2019t view maps; it\u2019s not part of their lives.\u003C\/p\u003E\u003Cp\u003E\u201cI want to ensure that for blind users, digital maps are no longer just \u2018blank.\u2019\u0026nbsp; They receive the information they need to know to navigate in this world and become more autonomous.\u201d\u003C\/p\u003E\u003Cp\u003EOrganizations that need to include accessible maps in their digital spaces can integrate Audiom into their website or app.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EGeorgia Tech recently became one such organization and used Audiom to introduce the first fully accessible digital campus map.\u003C\/p\u003E\u003Cp\u003EProfessor \u003Cstrong\u003EBruce Walker\u003C\/strong\u003E advises Biggs in Walker\u2019s \u003Ca href=\u0022http:\/\/sonify.psych.gatech.edu\/~walkerb\/\u0022\u003E\u003Cstrong\u003ESonification Lab\u003C\/strong\u003E\u003C\/a\u003E, which designs auditory displays for technologies.\u003C\/p\u003E\u003Cp\u003E\u201cBrandon has the perfect and unique blend of technical skills, research savvy, innovativeness, lived experience, and never-stop attitude to tackle this problem while impacting and improving many lives,\u201d Walker said.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EDefining Accessibility\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EBiggs said most maps limit accessibility features to turn-by-turn directions, tables, or other kinds of alternative text that disregard spatial information. The ability to communicate spatial information distinguishes Audiom.\u003C\/p\u003E\u003Cp\u003E\u201cAccording to Web Content Accessibility Guidelines (WCAG), all non-text content \u2014 like maps \u2014 must include a text alternative with an equivalent purpose,\u201d Biggs said. 
\u201cBut what does \u2018equivalent purpose\u2019 mean for geographic maps?\u003C\/p\u003E\u003Cp\u003E\u201cWe argue that every single map, regardless of what it\u2019s showing, communicates general spatialized information and relationships.\u201d\u003C\/p\u003E\u003Cp\u003EAudiom also prioritizes the information that\u2019s most important to blind users, including sidewalks and buildings.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s a lot of information blind people just don\u2019t get on maps but desperately need,\u201d he said. \u201cThey couldn\u2019t care less about the roads. They might need the road name, but they really need the sidewalks.\u003C\/p\u003E\u003Cp\u003E\u201cIf a blind person made a map, they might not even add the roads. And then they would add in the location of doorways, a critical detail that sighted people completely leave out.\u201d\u003C\/p\u003E\u003Cp\u003EBiggs\u2019s work is already gaining national recognition. XRNavigation was recently one of three companies selected by the Global Accessibility Awareness Day (GAAD) Foundation for a 2025 Gaady Award, which honors work being done to make digital technologies more accessible.\u003C\/p\u003E\u003Cp\u003EPast and present winners of \u003Ca href=\u0022https:\/\/gaad.foundation\/what-we-do\/gaadys\u0022\u003E\u003Cstrong\u003EGaady Awards \u003C\/strong\u003E\u003C\/a\u003Erange from tech startups to major brands like T-Mobile.\u003C\/p\u003E\u003Cp\u003EBiggs will accept the award during a banquet on Thursday in San Francisco.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EBrandon Biggs, a Georgia Tech Ph.D. student who is nearly blind, developed \u003Cstrong\u003EAudiom\u003C\/strong\u003E, a cross-sensory digital map that lets blind users navigate spatial information through audio. 
Biggs\u0027s tool, which Georgia Tech now uses for its campus map, emphasizes spatial relationships like sidewalks and buildings and gives organizations a way to integrate accessible, auditory maps into their own platforms.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A Georgia Tech Ph.D. student who is nearly blind has developed Audiom, a cross-sensory digital map that translates spatial and geographic information into audio so that blind users can \u201chear\u201d maps."}],"uid":"36530","created_gmt":"2025-11-18 19:26:48","changed_gmt":"2025-11-18 19:30:42","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-11-18T00:00:00-05:00","iso_date":"2025-11-18T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678659":{"id":"678659","type":"image","title":"Brandon-Biggs_86A9112-copy_5.jpg","body":null,"created":"1763494016","gmt_created":"2025-11-18 19:26:56","changed":"1763494016","gmt_changed":"2025-11-18 19:26:56","alt":"Brandon Biggs","file":{"fid":"262718","name":"Brandon-Biggs_86A9112-copy_5.jpg","image_path":"\/sites\/default\/files\/2025\/11\/18\/Brandon-Biggs_86A9112-copy_5.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/11\/18\/Brandon-Biggs_86A9112-copy_5.jpg","mime":"image\/jpeg","size":138423,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/11\/18\/Brandon-Biggs_86A9112-copy_5.jpg?itok=lC8FCRD0"}}},"media_ids":["678659"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"42901","name":"Community"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"129","name":"Institute and Campus"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research 
Horizons"},{"id":"360","name":"accessibility"},{"id":"172442","name":"Disabilites"},{"id":"47091","name":"maps"},{"id":"194036","name":"blindness"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"686467":{"#nid":"686467","#data":{"type":"news","title":"Researchers Find Opportunities for 311 Chatbots to Foster Community Engagement","body":[{"value":"\u003Cp\u003E311 chatbots make it easier for people to report issues to their local government without long wait times on the phone. However, a new study finds that the technology might inhibit civic engagement.\u003C\/p\u003E\u003Cp\u003E311 systems allow residents to report potholes, broken fire hydrants, and other municipal issues. In recent years, the use of artificial intelligence (AI) to provide 311 services to community residents has boomed across city and state governments. This includes an artificial virtual assistant (AVA) developed by third-party vendors for \u003Ca href=\u0022https:\/\/www.atlantaga.gov\/government\/departments\/customer-service-atl311\/atl311-chatbot\u0022\u003E\u003Cstrong\u003Ethe City of Atlanta\u003C\/strong\u003E\u003C\/a\u003E in 2023.\u003C\/p\u003E\u003Cp\u003EThrough survey data, researchers from Tech\u2019s School of Interactive Computing found that many residents are generally positive about 311 chatbots. 
In addition to eliminating long wait times over the phone, they also offer residents quick answers to permit applications, waste collection, and other frequently asked questions.\u003C\/p\u003E\u003Cp\u003EHowever, the study, which was conducted in Atlanta, indicates that 311 chatbots could be causing residents to feel isolated from public officials and less aware of what\u2019s happening in their community.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EJieyu Zhou\u003C\/strong\u003E, a Ph.D. student in the School of IC, said it doesn\u2019t have to be that way.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EUniting Communities\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EZhou and her advisor, Assistant Professor \u003Ca href=\u0022https:\/\/chrismaclellan.com\/\u0022\u003E\u003Cstrong\u003EChristopher MacLellan\u003C\/strong\u003E\u003C\/a\u003E, published a paper at the 2025 ACM Designing Interactive Systems (DIS) Conference that focuses on improving public service chatbot design and amplifying their civic impact. They collaborated with Professor \u003Ca href=\u0022https:\/\/www.carldisalvo.com\/\u0022\u003E\u003Cstrong\u003ECarl DiSalvo\u003C\/strong\u003E\u003C\/a\u003E, Associate Professor \u003Ca href=\u0022http:\/\/lynndombrowski.com\/\u0022\u003E\u003Cstrong\u003ELynn Dombrowski\u003C\/strong\u003E\u003C\/a\u003E, and graduate students \u003Cstrong\u003ERui Shen\u003C\/strong\u003E and \u003Ca href=\u0022https:\/\/yueyu1030.github.io\/\u0022\u003E\u003Cstrong\u003EYue You\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EZhou said 311 chatbots have the potential to be agents that drive community organization and improve quality of life.\u003C\/p\u003E\u003Cp\u003E\u201cCurrent chatbots risk isolating users in their own experience,\u201d Zhou said. 
\u201cIn the 311 system, people tend to report their own individual issues but lose a sense of what is happening in their broader community.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cPeople are very positive about these tools, but I think there\u2019s an opportunity as we envision what civic chatbots could be. It\u2019s important for us to emphasize that social element \u2014 engaging people\u0026nbsp;within the community and connecting them with government representatives, community organizers, and other community members.\u201d\u003C\/p\u003E\u003Cp\u003EZhou and MacLellan said 311 chatbots can leave users wondering if others in their communities share their concerns.\u003C\/p\u003E\u003Cp\u003E\u201cIf people are at a town hall meeting, they can get a sense of whether the problems they are experiencing are shared by others,\u201d Zhou said. \u201cWe can\u2019t do that with a chatbot. It\u2019s like an isolated room, and we\u2019re trying to open the doors and the windows.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EAdding a Human Touch\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EIn their paper, the researchers note that one of the biggest criticisms of 311 chatbots is they can\u2019t replace interpersonal interaction.\u003C\/p\u003E\u003Cp\u003EUnlike chatbots, people working in local government offices are likely to:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EHave direct knowledge of issues\u003C\/li\u003E\u003Cli\u003EProvide appropriate referrals\u003C\/li\u003E\u003Cli\u003EEmpathize with the resident\u2019s concerns\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EMacLellan said residents are likely to grow frustrated with a chatbot when reporting issues that require this level of contextual knowledge.\u003C\/p\u003E\u003Cp\u003EOne person in the researchers\u2019 survey noted that the chatbot they used didn\u2019t understand that their report was about a sidewalk issue, not a street issue.\u003C\/p\u003E\u003Cp\u003E\u201cExplaining such a 
situation to a human representative is straightforward,\u201d MacLellan said. \u201cHowever, when the issue being raised does not fall within any of the categories the chatbot is built to address, it often misinterprets the query and offers information that isn\u2019t helpful.\u201d\u003C\/p\u003E\u003Cp\u003EThe researchers offer some design suggestions that can help chatbots foster community engagement and improve community well-being:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EEscalation. Regarding the sidewalk report, the chatbot did not offer a way to escalate the query to a human who could resolve it. Zhou said that this is a feature that chatbots should have but often lack.\u003C\/li\u003E\u003Cli\u003ETransparency. Chatbots could provide details about recent and frequently reported community issues. They should inform users early in the call process about known problems to help avoid an overload of user complaints.\u003C\/li\u003E\u003Cli\u003EEducation. Chatbots can keep users updated about what\u2019s happening in their communities.\u003C\/li\u003E\u003Cli\u003ECollective action. Chatbots can help communities organize and gather ideas to address challenges and solve problems.\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003E\u201cGovernment agencies may focus mainly on fixing individual issues,\u201d Zhou said, \u201cBut recognizing community-level patterns can inspire collective creativity. For example, one participant suggested that if many people report a broken swing at a playground, it could spark an initiative to design a new playground together\u2014going far beyond just fixing it.\u201d\u003C\/p\u003E\u003Cp\u003EThese are just a few examples of things, the researchers argue, that 311 services were originally designed to achieve.\u003C\/p\u003E\u003Cp\u003E\u201cCommunities were already collaborating on identifying and reporting issues,\u201d Zhou said. 
\u201cThese chatbots should reflect the original intentions and collaboration practices of the communities they serve.\u003C\/p\u003E\u003Cp\u003E\u201cOur research suggests we can increase the positive impact of civic chatbots by including social aspects within the design of the system, connecting people, and building a community view.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EResearchers at the Georgia Institute of Technology found that while 311-style chatbots simplify the process of reporting municipal issues and reduce wait times, users can feel isolated from their community and less connected to broader civic awareness. They recommend redesigning these systems to include transparency about collective issues, provide pathways for human escalation, and support community-level action.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"AI-powered 311 chatbots may unintentionally reduce residents\u0027 sense of connection within their community."}],"uid":"36530","created_gmt":"2025-11-14 20:30:41","changed_gmt":"2025-11-14 20:35:50","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-11-14T00:00:00-05:00","iso_date":"2025-11-14T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678639":{"id":"678639","type":"image","title":"Jieyu-Zhou_86A8161-Enhanced-NR.jpg","body":null,"created":"1763152260","gmt_created":"2025-11-14 20:31:00","changed":"1763152260","gmt_changed":"2025-11-14 20:31:00","alt":"Jieyu 
Zhou","file":{"fid":"262697","name":"Jieyu-Zhou_86A8161-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/11\/14\/Jieyu-Zhou_86A8161-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/11\/14\/Jieyu-Zhou_86A8161-Enhanced-NR.jpg","mime":"image\/jpeg","size":134034,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/11\/14\/Jieyu-Zhou_86A8161-Enhanced-NR.jpg?itok=909Uit6L"}}},"media_ids":["678639"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"142","name":"City Planning, Transportation, and Urban Growth"},{"id":"42901","name":"Community"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"188776","name":"go-research"},{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"},{"id":"169137","name":"chatbot"},{"id":"189306","name":"public service technology"},{"id":"1134","name":"City of Atlanta"},{"id":"188933","name":"Atlanta community."},{"id":"10614","name":"community organizing"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"686466":{"#nid":"686466","#data":{"type":"news","title":"Professor Earns Test-of-Time Award at AI and Computer Gaming Conference","body":[{"value":"\u003Cp\u003EOne of the top conferences for AI and computer games is recognizing a 
School of Interactive Computing professor with its first-ever test-of-time award.\u003C\/p\u003E\u003Cp\u003EAt its event this week in Alberta, Canada, the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) is honoring Professor Mark Riedl. The award also honors University of Utah Professor and Division of Games Chair Michael Young, Riedl\u2019s Ph.D. advisor.\u003C\/p\u003E\u003Cp\u003ERiedl studied under Young at North Carolina State University.\u003C\/p\u003E\u003Cp\u003ETheir 2005 paper, \u003Cem\u003EFrom Linear Story Generation to Branching Story Graphs\u003C\/em\u003E, highlighted the challenges of using AI to create interactive gaming narratives in which user actions influence the story\u2019s progression.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EIn 2005, computer game systems that supported linear, non-branching games were widely used. Riedl introduced an innovative mathematical formula for interactive stories ranging from choose-your-own-adventure novels to modern computer games.\u003C\/p\u003E\u003Cp\u003E\u201cWe didn\u2019t use the term \u2018generative AI\u2019 back then, but I was working on AI for the generation of creative artifacts,\u201d Riedl said. \u201cThis was before we had practical deep learning or large language models.\u003C\/p\u003E\u003Cp\u003E\u201cOne of the reasons this paper is still relevant 20 years later is that it didn\u2019t just present a technology, it attempted to provide a framework for solving a grand challenge in AI.\u201d\u003C\/p\u003E\u003Cp\u003EThat challenge is still ongoing, Riedl said. 
Game designers continue to struggle with balancing story coherence against the amount of narrative control afforded to users.\u003C\/p\u003E\u003Cp\u003E\u201cWhen users exercise a high degree of control within the environment, it is likely that their actions will change the state of the world in ways that may interfere with the causal dependencies between actions as intended within a storyline,\u201d Riedl and Young wrote in the paper.\u003C\/p\u003E\u003Cp\u003E\u201cNarrative mediation makes linear narratives interactive. The question is: Is the expressive power of narrative mediation at least as powerful as the story graph representation?\u201d\u003C\/p\u003E\u003Cp\u003EAIIDE is being held this week at the University of Alberta in Edmonton, Alberta. Riedl will receive the award on Wednesday.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EProfessor Mark Riedl was honored with the first-ever test-of-time award by the AIIDE conference. The award recognizes his influential 2005 paper \u003Cem\u003EFrom Linear Story Generation to Branching Story Graphs\u003C\/em\u003E, which addressed the challenge of using AI to create interactive, non-linear narratives in computer games. 
The paper introduced a mathematical framework that remains relevant today.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Professor Mark Riedl received the first-ever test-of-time award from the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE)."}],"uid":"36530","created_gmt":"2025-11-14 20:21:03","changed_gmt":"2025-11-14 20:24:32","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-11-12T00:00:00-05:00","iso_date":"2025-11-12T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678638":{"id":"678638","type":"image","title":"Summit-on-Responsible-Computing--AI--Society_86A8505.jpg","body":null,"created":"1763151672","gmt_created":"2025-11-14 20:21:12","changed":"1763151672","gmt_changed":"2025-11-14 20:21:12","alt":"Mark Riedl","file":{"fid":"262696","name":"Summit-on-Responsible-Computing--AI--Society_86A8505.jpg","image_path":"\/sites\/default\/files\/2025\/11\/14\/Summit-on-Responsible-Computing--AI--Society_86A8505.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/11\/14\/Summit-on-Responsible-Computing--AI--Society_86A8505.jpg","mime":"image\/jpeg","size":82088,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/11\/14\/Summit-on-Responsible-Computing--AI--Society_86A8505.jpg?itok=m3SKeUcr"}}},"media_ids":["678638"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"170453","name":"Test of Time Award"},{"id":"2356","name":"gaming"},{"id":"2450","name":"computer 
games"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"686422":{"#nid":"686422","#data":{"type":"news","title":"Ph.D. Student\u2019s Framework Used to Bolster Nvidia\u2019s Cosmos Predict-2 Model","body":[{"value":"\u003Cp\u003EA new deep learning architectural framework could boost the development and deployment efficiency of autonomous vehicles and humanoid robots. The framework will lower training costs and reduce the amount of real-world data needed for training.\u003C\/p\u003E\u003Cp\u003EWorld foundation models (WFMs) enable physical AI systems to learn and operate within\u0026nbsp;synthetic worlds created by generative artificial intelligence (genAI). For example, these models use predictive capabilities to generate up to 30 seconds of video that accurately reflects the real world.\u003C\/p\u003E\u003Cp\u003EThe new framework, developed by a Georgia Tech researcher, enhances the processing speed of the neural networks that simulate these real-world environments from text, images, or video inputs.\u003C\/p\u003E\u003Cp\u003EThe neural networks that make up the architectures of large language models like ChatGPT and visual models like Sora process contextual information using the \u201cattention mechanism.\u201d\u003C\/p\u003E\u003Cp\u003EAttention refers to a model\u2019s ability to focus on the most relevant parts of input.\u003C\/p\u003E\u003Cp\u003EThe Neighborhood Attention Extension (NATTEN) allows models that require GPUs or high-performance computing systems to process information and generate outputs more efficiently.\u003C\/p\u003E\u003Cp\u003EProcessing speeds can increase by up to 2.6 times, said \u003Ca 
href=\u0022https:\/\/alihassanijr.com\/\u0022\u003E\u003Cstrong\u003EAli Hassani\u003C\/strong\u003E\u003C\/a\u003E, a Ph.D. student in the School of Interactive Computing and the creator of NATTEN. Hassani is advised by Associate Professor \u003Ca href=\u0022https:\/\/www.humphreyshi.com\/\u0022\u003E\u003Cstrong\u003EHumphrey Shi\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EHassani is also a research scientist at Nvidia, where he introduced NATTEN to \u003Ca href=\u0022https:\/\/www.nvidia.com\/en-us\/ai\/cosmos\/\u0022\u003E\u003Cstrong\u003ECosmos\u003C\/strong\u003E\u003C\/a\u003E \u2014 a family of WFMs the company uses to train robots, autonomous vehicles, and other physical AI applications.\u003C\/p\u003E\u003Cp\u003E\u201cYou can map just about anything from a prompt or an image or any combination of frames from an existing video to predict future videos,\u201d Hassani said. \u201cInstead of generating words with an LLM, you\u2019re generating a world.\u003C\/p\u003E\u003Cp\u003E\u201cUnlike LLMs that generate a single token at a time, these models are compute-heavy. They generate many images \u2014 often hundreds of frames at a time \u2014 so the models put a lot of work on the GPU. NATTEN lets us decrease some of that work and proportionately accelerate the model.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech Ph.D. student Ali Hassani developed the Neighborhood Attention Extension (NATTEN), a deep learning architectural framework that is being integrated into Nvidia\u0027s Cosmos Predict-2 world foundation model. 
NATTEN enhances the processing speed of neural networks that simulate real-world environments for physical AI systems, which are used to train autonomous vehicles and humanoid robots.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new deep learning architectural framework, Neighborhood Attention Extension (NATTEN), is being used by Nvidia to increase the processing speed of its Cosmos Predict-2 Model for training autonomous vehicles and humanoid robots."}],"uid":"36530","created_gmt":"2025-11-13 21:13:58","changed_gmt":"2025-11-13 21:14:58","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-11-03T00:00:00-05:00","iso_date":"2025-11-03T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678621":{"id":"678621","type":"image","title":"2X6A3487.jpg","body":null,"created":"1763068473","gmt_created":"2025-11-13 21:14:33","changed":"1763068473","gmt_changed":"2025-11-13 21:14:33","alt":"Humphrey Shi and Ali Hassani","file":{"fid":"262676","name":"2X6A3487.jpg","image_path":"\/sites\/default\/files\/2025\/11\/13\/2X6A3487.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/11\/13\/2X6A3487.jpg","mime":"image\/jpeg","size":93105,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/11\/13\/2X6A3487.jpg?itok=axfoqv8i"}}},"media_ids":["678621"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"194609","name":"Industry"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"193860","name":"Artifical Intelligence"},{"id":"194701","name":"go-resarchnews"},{"id":"9153","name":"Research 
Horizons"},{"id":"14549","name":"nvidia"},{"id":"191138","name":"artificial neural networks"},{"id":"97281","name":"autonomous vehicles"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"685920":{"#nid":"685920","#data":{"type":"news","title":"Microsoft Removing Support for Windows 10 Could Increase E-Waste, Cybersecurity Threats","body":[{"value":"\u003Cp\u003EWhen Microsoft announced it was\u003Ca href=\u0022https:\/\/support.microsoft.com\/en-us\/windows\/windows-10-support-has-ended-on-october-14-2025-2ca8b313-1946-43d3-b55c-2b95b107f281\u0022\u003E\u003Cstrong\u003E ending support for Windows 10 last week\u003C\/strong\u003E\u003C\/a\u003E, about 40 percent of all Windows users faced limited options.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EWhile some of those users can upgrade to Windows 11, hundreds of millions of devices don\u2019t meet the technical requirements.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThose users might be wondering what else they can do besides throwing away their current device and buying a new one or risking running outdated software on it.\u003C\/p\u003E\u003Cp\u003EThe tech conglomerate faced backlash from environmental and cybersecurity experts after informing Windows users that it would cease providing updates for Windows 10.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThese experts have warned that rendering hundreds of millions of devices practically useless will worsen the ever-growing problem with electronic waste (e-waste) and leave users who can\u0027t upgrade vulnerable to cybersecurity threats.\u003C\/p\u003E\u003Cp\u003EResearchers from Georgia Tech\u2019s School of Interactive Computing (SIC) and School of Cybersecurity and Privacy (SCP) echo those 
concerns.\u003C\/p\u003E\u003Cp\u003EForcing users to replace their devices means that\u0026nbsp;\u003Ca href=\u0022https:\/\/www.itpro.com\/software\/windows\/windows-10-end-of-life-could-prompt-torrent-of-e-waste-as-240-million-devices-set-for-scrapheap\u0022\u003E\u003Cstrong\u003Eup to 240 million old devices, according to one analysis\u003C\/strong\u003E\u003C\/a\u003E, will inevitably end up in landfills.\u003C\/p\u003E\u003Cp\u003E\u201cThe problem of e-waste raises the question of why and how these technologies become obsolete,\u201d said \u003Ca href=\u0022https:\/\/lincindy.com\/\u0022\u003E\u003Cstrong\u003ECindy Lin\u003C\/strong\u003E\u003C\/a\u003E, a Stephen Fleming Early Career Assistant Professor in SIC.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ELin studies data structures and environmental governance in Southeast Asia and the U.S.\u003C\/p\u003E\u003Cp\u003E\u201cScholarship in human-computer interaction (HCI) on repair reveals that many of these technologies suffer from planned obsolescence,\u201d she said. \u201cThis means that companies have designed products with a short lifespan, increasing consumption and waste simultaneously.\u201d\u003C\/p\u003E\u003Cp\u003EWhen e-waste is dumped in landfills, the organic materials within devices decompose, producing methane, a potent greenhouse gas. And with every discarded device comes the need to produce new ones. The raw materials of these devices are mined, refined, and processed, consuming enormous amounts of energy through the burning of fossil fuels.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EThe Problem With Hackers\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThough Microsoft said it will continue to provide Windows 10 security updates for one year, users are still being pressured to upgrade. 
By this time next year, if users still haven\u2019t upgraded to Windows 11, they can expect to become easy targets for cyber criminals.\u003C\/p\u003E\u003Cp\u003EFor example, users could receive phishing emails about security updates from hackers pretending to be Microsoft.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThe cybersecurity implications are very serious because new vulnerabilities of Windows 10 will go unpatched for a large part of the user base of this system,\u201d said \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/mustaque-ahamad\u0022\u003E\u003Cstrong\u003EMustaque Ahamad\u003C\/strong\u003E\u003C\/a\u003E, Regents\u2019 Entrepreneur Professor and interim chair of SCP.\u003C\/p\u003E\u003Cp\u003E\u201cThese users will become targets of hackers and cyber criminals who will be able to exploit these vulnerabilities. This will make these machines more prone to attacks such as ransomware and data exfiltration.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EWhat Can Users Do?\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EBuying a new device typically costs around $300 at the low end, while some gaming computers can exceed $2,500.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/josiahhester.com\/\u0022\u003E\u003Cstrong\u003EJosiah Hester\u003C\/strong\u003E\u003C\/a\u003E, an associate professor in the School of IC who researches computing and sustainability, said users who want to avoid discarding their devices can install Linux Mint, a free universal operating system.\u003C\/p\u003E\u003Cp\u003E\u201cI would hope that instead of discarding, people might see this as an opportunity to go into a more open ecosystem like Linux Mint, which was designed for Windows users,\u201d Hester said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cSo much perfectly good hardware is obsolesced by force, when users are more than willing to give it a second life, either through ending support on the software 
side, subscription services that require certain versions of an OS, or even building the hardware or low-level functions that reduce the autonomy of device owners.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ELinux Mint is open source and offers its own suite of software products, including a word processor. It also has a built-in security system. It requires 2GB of RAM, 20GB of disk space, and 1024x768 resolution to operate.\u003C\/p\u003E\u003Cp\u003EOn a systemic level, Lin and Hester said people can support organizations that advocate for right to repair and legislation that protects consumers from planned obsolescence.\u003C\/p\u003E\u003Cp\u003E\u201cHCI studies of informal economies of improvisation and repair have demonstrated that technologies have a longer lifecycle if we have access to expertise on how to repair them without facing penalties such as copyright violations,\u201d Lin said.\u003C\/p\u003E\u003Cp\u003E\u201cThe ongoing right-to-repair movement in the US shows promise in making technology repairable and, in turn, more sustainable.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EMicrosoft\u0027s decision to end support for Windows 10 will leave hundreds of millions of devices unable to meet the requirements for upgrading to Windows 11. 
Experts in Georgia Tech\u0027s College of Computing warn this policy will heavily contribute to the e-waste crisis and expose users to cybersecurity threats from unpatched vulnerabilities.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Microsoft\u0027s decision to end support for Windows 10 could lead to a massive increase in e-waste and expose users who can\u0027t upgrade to greater cybersecurity threats"}],"uid":"36530","created_gmt":"2025-10-22 16:16:36","changed_gmt":"2025-10-22 18:24:13","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-10-22T00:00:00-04:00","iso_date":"2025-10-22T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678421":{"id":"678421","type":"image","title":"ChatGPT-Image-Oct-21--2025--02_44_30-PM.png","body":null,"created":"1761149813","gmt_created":"2025-10-22 16:16:53","changed":"1761149813","gmt_changed":"2025-10-22 16:16:53","alt":"Windows device with a landfill in background","file":{"fid":"262444","name":"ChatGPT-Image-Oct-21--2025--02_44_30-PM.png","image_path":"\/sites\/default\/files\/2025\/10\/22\/ChatGPT-Image-Oct-21--2025--02_44_30-PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/10\/22\/ChatGPT-Image-Oct-21--2025--02_44_30-PM.png","mime":"image\/png","size":830520,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/10\/22\/ChatGPT-Image-Oct-21--2025--02_44_30-PM.png?itok=etchtugo"}}},"media_ids":["678421"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"335","name":"Microsoft"},{"id":"173448","name":"windows10"},{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research 
Horizons"},{"id":"114261","name":"landfill"},{"id":"10647","name":"e-waste"},{"id":"1404","name":"Cybersecurity"},{"id":"181815","name":"Hackers"},{"id":"8111","name":"phishing"},{"id":"831","name":"climate change"}],"core_research_areas":[{"id":"145171","name":"Cybersecurity"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:ndeen6@gatech.edu\u0022\u003ENathan Deen\u003C\/a\u003E\u003Cbr\u003ECollege of Computing\u003Cbr\u003EGeorgia Tech\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"685441":{"#nid":"685441","#data":{"type":"news","title":"School of IC Honors Decorated Professor with Namesake Award","body":[{"value":"\u003Cp\u003EOne word comes up more often than others when describing John Stasko \u2014 kindness.\u003C\/p\u003E\u003Cp\u003EStasko achieved a great deal during his 36 years as a professor at Georgia Tech and made significant contributions to data visualization research and innovations. He is a \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/regents-professor-named-acm-fellow\u0022\u003E\u003Cstrong\u003EFellow of the ACM\u003C\/strong\u003E\u003C\/a\u003E and IEEE and received the \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/awards-roundup-regents-professor-earns-ieee-vgtc-lifetime-achievement-award\u0022\u003E\u003Cstrong\u003EIEEE Visualization and Graphics Technical Community Lifetime Achievement Award\u003C\/strong\u003E\u003C\/a\u003E.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EIn all those years, none of his students or colleagues could recall a moment when he didn\u2019t demonstrate kindness.\u003C\/p\u003E\u003Cp\u003E\u201cHe supported me in fleshing out my ideas into a Ph.D. 
dissertation,\u201d said \u003Cstrong\u003EDean Jerding\u003C\/strong\u003E (CS Ph.D. 1997), one of Stasko\u2019s former students. \u201cHe was always calm and communicated any criticism in a very positive way. He never said I had a dumb idea. He was always encouraging, and he redirected you with his input.\u201d\u003C\/p\u003E\u003Cp\u003EThe School of Interactive Computing bid farewell to Stasko on Thursday, following his official retirement in July.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EDuring the event, \u003Cstrong\u003EShaowen Bardzell\u003C\/strong\u003E, School of IC chair and professor, announced the establishment of the John Stasko Award for Teaching Excellence in Stasko\u2019s honor. Bardzell said the award will be given each year to as many as \u201ctwo faculty members in the School of Interactive Computing whose teaching and mentoring channel John\u2019s passion and care for our students.\u201d\u003C\/p\u003E\u003Cp\u003E\u201cYou can be effective while being nice, and you can be heard while being quiet and thoughtful,\u201d said \u003Cstrong\u003EKeith Edwards\u003C\/strong\u003E, a professor in the School of IC who was one of Stasko\u2019s first students. \u201cHe\u2019s the same even-keeled, thoughtful person as he was when I first knew him. He\u2019s very generous. If it hadn\u2019t been for John, I think there\u2019s a chance I would\u2019ve fallen through the cracks when I was looking for an advisor at Georgia Tech. I\u2019m very fortunate he took me on.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ENew College, New Blood\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EStasko came to Georgia Tech in 1989 fresh off completing his Ph.D. in computer science at Brown University. That was a year before the establishment of the College of Computing at Georgia Tech. 
The computer science program was administered by the School of Information and Computer Science, which was housed in the College of Sciences.\u003C\/p\u003E\u003Cp\u003E\u201cIt was exciting because we were igniting computer science at Georgia Tech, and there were a lot of young faculty like me who were brand new, right out of college,\u201d Stasko said. \u201cThere was this spirit of working together and wanting to make something great here.\u201d\u003C\/p\u003E\u003Cp\u003EStasko said when the College of Computing was established in 1990, Georgia Tech ranked outside the top 20 of U.S. News and World Report\u2019s computer science program rankings.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EMany new faculty members like Stasko were interested in data visualization, computer graphics, and human-computer interaction. Georgia Tech quickly bolstered its computer science reputation by positioning itself at the forefront of those emerging fields with the creation of the Graphics, Visualization, and Usability (GVU) Center.\u003C\/p\u003E\u003Cp\u003E\u201cA lot of the top five to 10 schools like Stanford, MIT, and Berkeley were very strong in the traditional subareas of computer science,\u201d Stasko said. \u201cI think it helped us to develop a strength in HCI, graphics and visualization. We were one of the earliest to embrace those, so it made it easier for us to shine. U.S. News and World Report had a new sub-ranking called Graphics and HCI, and we were ranked No. 1 very early on. 
That really helped us.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EGrowing as a Mentor\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EStasko credits \u003Cstrong\u003EJim Foley\u003C\/strong\u003E, the first director of GVU, as his model for how to conduct oneself as a teacher. Foley now has a scholarship named in his honor for outstanding graduate students.\u003C\/p\u003E\u003Cp\u003E\u201cJim was the most wonderful mentor I could\u2019ve had,\u201d Stasko said. \u201cHe was a famous professor, and everyone in computer science around the country knew him, but he was always so humble, and he would meet all the new junior faculty and want to help us get going. He allowed us to shine.\u201d\u003C\/p\u003E\u003Cp\u003EStasko became best known for his research, particularly for his invention of Jigsaw in 2007. Jigsaw is a visualization algorithm that can create a visual index of a large document collection.\u003C\/p\u003E\u003Cp\u003E\u201cIt could help an analyst see the story that\u2019s spread across 1,500 different documents about a police case, for example,\u201d he said. \u201cOr maybe they were reviews of a product that you wanted to learn about, or which car or which TV you should buy without having to read 1,500 reviews. We used early machine learning methods to analyze the text and created a suite of different visualizations communicating that analysis.\u201d\u003C\/p\u003E\u003Cp\u003EIn addition to his research, Stasko taught an intro to JavaScript course for 20 years to thousands of Tech students. Though it wasn\u2019t required of him to teach it, he said he enjoyed interacting with incoming first-year students because it \u201chelped keep me feeling young.\u201d\u003C\/p\u003E\u003Cp\u003EIn 2007, Stasko joined the faculty of the newly created School of Interactive Computing. 
He served as the interim chair of the school from 2021 to 2022, and he was also named Regents\u2019 Professor by the University System of Georgia in 2021.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ELeaving a Legacy\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EToday, the College of Computing has the \u003Ca href=\u0022https:\/\/news.gatech.edu\/news\/2025\/09\/23\/georgia-tech-secures-multiple-no-1-rankings?utm_source=newsletter\u0026amp;utm_medium=email\u0026amp;utm_content=Multiple%20Programs%20Named%20No.%201%20in%20US%20News%20Rankings\u0026amp;utm_campaign=Daily%20Digest%20-%20Sept.%2023%2C%202025\u0022\u003E\u003Cstrong\u003ENo. 5 undergraduate and No. 6 graduate computer science program\u003C\/strong\u003E\u003C\/a\u003E in the U.S. and is the largest college on Georgia Tech\u2019s campus.\u003C\/p\u003E\u003Cp\u003E\u201cI\u2019m not sure any other CS program in the country has had that kind of jump like we have had over the past 35 years,\u201d Stasko said. \u201cThe higher you go, the harder it is to jump even one spot.\u003C\/p\u003E\u003Cp\u003E\u201cI think we knew that (the College) was going to grow and that was part of the plan. I\u2019m not sure I would\u2019ve envisioned we\u2019d ever be 150 to 200 faculty in the college, but we could all see computer science was going to be a crucial part of society going forward.\u201d\u003C\/p\u003E\u003Cp\u003EStasko will continue to be a part of the School of IC as Professor Emeritus. His final student, Alexander Bendeck, finishes his Ph.D. in 2026.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EBendeck will be the 25th student Stasko has advised and graduated over his career. He said he never had the funding to run a large lab, but that allowed him to invest in the students he took under his wing.\u003C\/p\u003E\u003Cp\u003E\u201cI often found some unconventional Ph.D. students,\u201d Stasko said. \u201cSome of my early students started in very different areas of computer science. 
I\u2019ve looked for diamonds in the rough.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cI see some of them now with their families and they make me feel old because they have kids who are in college now. But they\u2019ve done well. I think half of my students have gone into academia, and the other half into industry. I\u2019m very proud of all that they\u2019ve achieved, both personally and professionally.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EProfessor \u003Cstrong\u003EJohn Stasko\u003C\/strong\u003E retired after a distinguished 36-year career at Georgia Tech, during which he was a key figure in the rise of the College of Computing and made significant contributions to data visualization. Stasko was widely celebrated by students and colleagues for his kindness, humility, and thoughtful mentorship. To honor his contributions and spirit, the School of Interactive Computing established the \u003Cstrong\u003EJohn Stasko Award for Teaching Excellence\u003C\/strong\u003E, an annual award for faculty members who embody his passion and dedication to students.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech\u0027s School of Interactive Computing established the John Stasko Award for Teaching Excellence to honor the decorated professor for his 36-year career marked by significant contributions to data visualization and a legacy of kindness."}],"uid":"36530","created_gmt":"2025-10-01 17:40:15","changed_gmt":"2025-10-09 01:31:07","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-09-24T00:00:00-04:00","iso_date":"2025-09-24T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678236":{"id":"678236","type":"image","title":"IMG_4583.jpg","body":null,"created":"1759340427","gmt_created":"2025-10-01 
17:40:27","changed":"1759340427","gmt_changed":"2025-10-01 17:40:27","alt":"John Stasko","file":{"fid":"262234","name":"IMG_4583.jpg","image_path":"\/sites\/default\/files\/2025\/10\/01\/IMG_4583.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/10\/01\/IMG_4583.jpg","mime":"image\/jpeg","size":102941,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/10\/01\/IMG_4583.jpg?itok=_d9HzgWm"}}},"media_ids":["678236"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"133","name":"Special Events and Guest Speakers"}],"keywords":[{"id":"38921","name":"data visualization"},{"id":"194701","name":"go-resarchnews"},{"id":"40191","name":"faculty retirement"}],"core_research_areas":[],"news_room_topics":[{"id":"71871","name":"Campus and Community"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"685444":{"#nid":"685444","#data":{"type":"news","title":"Once-in-a-Decade Conference Spotlights Interactive Computing Researchers","body":[{"value":"\u003Cp\u003EThree School of Interactive Computing researchers were chosen for paper presentations at one of the most selective and unique computing conferences in the world.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/aarhus2025.org\/\u0022\u003E\u003Cstrong\u003EThe Aarhus Conference\u003C\/strong\u003E\u003C\/a\u003E, hosted by Aarhus University in Denmark, has been held every decade since 1975, addressing the most urgent and vital issues in computing worldwide.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe latest conference, titled Computing (X) Crisis, took place in August and featured presentations, critiques, and workshops that explored computing\u2019s influence 
on the human condition in a world filled with crises.\u003C\/p\u003E\u003Cp\u003EAssistant Professor\u0026nbsp;\u003Ca href=\u0022https:\/\/lincindy.com\/\u0022\u003E\u003Cstrong\u003ECindy Lin\u003C\/strong\u003E\u003C\/a\u003E, Associate Professor\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/lynn-dombrowski\u0022\u003E\u003Cstrong\u003ELynn Dombrowski\u003C\/strong\u003E\u003C\/a\u003E, and School of Interactive Computing Professor and Chair\u0026nbsp;\u003Ca href=\u0022https:\/\/shaowenbardzell.com\/\u0022\u003E\u003Cstrong\u003EShaowen Bardzell\u003C\/strong\u003E\u003C\/a\u003E authored the paper \u003Cem\u003EWhose, Which, and What Crisis? A Critical Analysis of Crisis in Computing Supply Chains.\u0026nbsp;\u003C\/em\u003EIt was one of only 15 papers selected by conference organizers.\u003C\/p\u003E\u003Cp\u003EIn the paper, in which Lin is credited as the lead author, the researchers advance a theoretical framework for understanding crises that impact the computing supply chain.\u003C\/p\u003E\u003Cp\u003EBardzell, who served as program chair of the 2015 Aarhus Conference, approached Dombrowski and Lin about collaborating on a paper submission. Bardzell said the conference gets more than 100 submissions and has a minuscule acceptance rate.\u003C\/p\u003E\u003Cp\u003E\u201cI knew I was going to go no matter what because I enjoyed it so much 10 years ago,\u201d Bardzell said. \u201cI was fortunate to come together with Lynn and Cindy. We spent six months reading, thinking, and debating together every week, and it was a pleasure to write it together.\u201d\u003C\/p\u003E\u003Cp\u003EThe authors identified common themes in areas they were already researching and examined how these themes affected the computing supply chain.\u003C\/p\u003E\u003Cp\u003E\u201cWe wanted to think about what this word means in relation to computing,\u201d Dombrowski said. 
\u201cWho gets to take advantage of a crisis, or who can construct a crisis in relation to computing? What\u2019s not being talked about when we use that word?\u201d\u003C\/p\u003E\u003Cp\u003ELin is studying the rise of data centers and their impact on the environment and consumers. Dombrowski is an expert on the labor market and unjust labor practices. Bardzell has conducted extensive research on how chip manufacturing affects farming and agriculture in her homeland of Taiwan.\u003C\/p\u003E\u003Cp\u003E\u201cWe don\u2019t often think about computing research as intergenerational colleagues working together,\u201d Lin said. \u201cI feel like the three of us represent very interesting generations of computing research that\u2019s tied to critically thinking about the social and political aspects of computing. Each of us has different ways of thinking about those things.\u201d\u003C\/p\u003E\u003Cp\u003EIn the paper, the three authors discuss the concept of \u201cagainst crisis thinking,\u201d which emphasizes that crises affecting the computing supply chain aren\u2019t self-evident phenomena. Human-computer interaction scholars, they say, should pay more attention to how the word \u201ccrisis\u201d is introduced into public discourse and how it can be exploited by powerful actors and impact marginalized communities.\u003C\/p\u003E\u003Cp\u003E\u201cSome players get to declare what is a crisis and whom it affects,\u201d Lin said. 
\u201cThey create solutions to resolve the crisis, but they might not address what a chronic experience of a crisis may be.\u201d\u003C\/p\u003E\u003Cp\u003EAlthough Bardzell said she considers it an honor to present at a conference that is so selective and is held only once a decade, she was encouraged to be among researchers dedicated to solving pressing societal and planetary issues.\u003C\/p\u003E\u003Cp\u003E\u201cAcademia can appear as a cutthroat environment where you\u2019re trying to establish your brand and be known for XYZ,\u201d Bardzell said. \u201cAt Aarhus, there was a strong sense of community and working alongside each other, and we\u2019re better because of the people who work alongside us.\u201d\u003C\/p\u003E\u003Cp\u003ELin agreed and said that participating in Aarhus is different from the annual conferences where the researchers normally submit papers.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s something special about reflecting every 10 years,\u201d Lin said. \u201cIt shows how much has changed but also how much things have remained the same.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThree researchers from Georgia Tech\u0027s School of Interactive Computing (IC)\u2014Assistant Professor \u003Cstrong\u003ECindy Lin\u003C\/strong\u003E, Associate Professor \u003Cstrong\u003ELynn Dombrowski\u003C\/strong\u003E, and Professor and Chair \u003Cstrong\u003EShaowen Bardzell\u003C\/strong\u003E\u2014were selected to present their paper at the highly selective, once-in-a-decade \u003Cstrong\u003EAarhus Conference\u003C\/strong\u003E in Denmark. Their paper, \u003Cem\u003EWhose, Which, and What Crisis? A Critical Analysis of Crisis in Computing Supply Chains\u003C\/em\u003E, was one of only fifteen chosen and focuses on a theoretical framework for understanding crises in computing supply chains. 
The co-authors, who represent different generations of computing research, urge human-computer interaction scholars to examine how the word \u0022crisis\u0022 is introduced and potentially exploited by powerful actors in public discourse.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Three researchers from Georgia Tech\u0027s School of Interactive Computing (IC)\u2014Cindy Lin, Lynn Dombrowski, and Shaowen Bardzell\u2014were selected to present their paper at the highly selective Aarhus Conference in Denmark."}],"uid":"36530","created_gmt":"2025-10-01 17:49:13","changed_gmt":"2025-10-09 01:30:45","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-10-01T00:00:00-04:00","iso_date":"2025-10-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678239":{"id":"678239","type":"image","title":"Summit-on-Responsible-Computing--AI--and-Society_86A0003-Enhanced-NR.jpg","body":null,"created":"1759340964","gmt_created":"2025-10-01 17:49:24","changed":"1759340964","gmt_changed":"2025-10-01 17:49:24","alt":"Cindy Lin","file":{"fid":"262237","name":"Summit-on-Responsible-Computing--AI--and-Society_86A0003-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/10\/01\/Summit-on-Responsible-Computing--AI--and-Society_86A0003-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/10\/01\/Summit-on-Responsible-Computing--AI--and-Society_86A0003-Enhanced-NR.jpg","mime":"image\/jpeg","size":101748,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/10\/01\/Summit-on-Responsible-Computing--AI--and-Society_86A0003-Enhanced-NR.jpg?itok=9aEBvRCD"}}},"media_ids":["678239"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer 
Science\/Information Technology and Security"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"7896","name":"crisis"},{"id":"831","name":"climate change"},{"id":"88241","name":"labor market"},{"id":"669","name":"agriculture"},{"id":"94111","name":"farming"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"685002":{"#nid":"685002","#data":{"type":"news","title":"Two IC Faculty Receive NSF CAREER for Robotics and AR\/VR Initiatives","body":[{"value":"\u003Cp\u003EPractice may not make perfect for robots, but new machine learning models from Georgia Tech are allowing them to improve their skillsets to more effectively assist humans in the real world.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/faculty.cc.gatech.edu\/~danfei\/\u0022\u003E\u003Cstrong\u003EDanfei Xu\u003C\/strong\u003E\u003C\/a\u003E, an assistant professor in \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003E\u003Cstrong\u003EGeorgia Tech\u2019s School of Interactive Computing\u003C\/strong\u003E\u003C\/a\u003E, is introducing new models that provide robots with \u201con-the-job\u201d training.\u003C\/p\u003E\u003Cp\u003EThe National Science Foundation (NSF) awarded Xu its CAREER award given to early career faculty. The award will enable Xu to expand his research and refine his models, which could accelerate the process of robot deployment and alleviate manufacturers from the burden of achieving perfection.\u003C\/p\u003E\u003Cp\u003E\u201cThe main problem we\u2019re trying to tackle is how to allow robots to learn on the job,\u201d Xu said. 
\u201cHow should it self-improve based on the performance or the new requirements or new user preferences in each home or working environment? You cannot expect a robot manufacturer to program all of that.\u003C\/p\u003E\u003Cp\u003E\u201cThe challenging thing about robotics is that the robot must get feedback from the physical environment. It must try to solve a problem to understand the limits of its abilities so it can decide how to improve its own performance.\u201d\u003C\/p\u003E\u003Cp\u003EAs with humans, Xu views practice as the most effective way for a robot to improve a skill. His models train the robot to identify the point at which it failed in its task performance.\u003C\/p\u003E\u003Cp\u003E\u201cIt identifies that skill and sets up an environment where it can practice,\u201d he said. \u201cIf it needs to improve opening a drawer, it will navigate itself to the drawer and practice opening it.\u201d\u003C\/p\u003E\u003Cp\u003EThe models allow the robot to split tasks into smaller parts and evaluate its own skill level using reward functions. Cooking dinner, for example, can be divided into steps like turning on the stove and opening the fridge, which are necessary to achieve the overall goal.\u003C\/p\u003E\u003Cp\u003E\u201cPlanning is a complex problem because you must predict what\u2019s going to happen in the physical world,\u201d Xu said. \u201cWe use machine learning techniques that our group has developed over the past two years, using generative models to generate positive futures. They\u2019re very good at modeling long-horizon phenomena.\u003C\/p\u003E\u003Cp\u003E\u201cThe robot knows when it\u2019s failed because there\u2019s a value that tells it how well it performed the task and whether it received its reward. 
While we don\u2019t know how to tell the robot why it failed, we have ways for it to improve its skills based on that measurement.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOne of the biggest barriers that keeps many robots from being made available for public use is the pressure on manufacturers to make the robot as close to perfect as possible at deployment. Xu said it\u2019s more practical to accept that robots will have learning gaps that need to be filled and to implement more efficient real-world learning models.\u003C\/p\u003E\u003Cp\u003E\u201cWe work under the pressure of getting everything correct before deployment,\u201d he said. \u201cWe need to meet the basic safety requirements, but in terms of competence, it is difficult to get that perfect at deployment. This takes some of the pressure off because it will be able to self-adapt.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EVirtual Workspace for Data Workers\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/ivi.cc.gatech.edu\/people.html\u0022\u003E\u003Cstrong\u003EYalong Yang\u003C\/strong\u003E\u003C\/a\u003E, another assistant professor in the School of IC, also received the NSF CAREER Award for a research proposal that will design augmented and virtual reality (AR\/VR) workspaces for data workers.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIn 10 years, I envision everyone will use AR\/VR in their office, and it will replace their laptop or their monitor,\u201d Yang said.\u003C\/p\u003E\u003Cp\u003EYang said he is also working with Google on the project and using Google Gemini to bring conventional applications to immersive space, with data tools being the most complicated systems to re-design for immersive environments.\u003C\/p\u003E\u003Cp\u003EThe immersive workspace and interface will also enable teams of data workers to collaborate and share their data in real-time.\u003C\/p\u003E\u003Cp\u003E\u201cI want to support the end-to-end process,\u201d Yang 
said. \u201cWe have visualization tools for data, but it\u2019s not enough. Data science is a pipeline \u2014 from collecting data to processing, visualizing, modeling and then communicating. If you only support one, people will need to switch to other platforms for the other steps.\u201d\u003C\/p\u003E\u003Cp\u003EYang also noted that prior research has shown that VR can enhance cognitive abilities such as memory and attention, and support multitasking. The results of his project could help maximize worker efficiency without straining workers.\u003C\/p\u003E\u003Cp\u003E\u201cWe all have a cognitive limit in our working memory. Using AR\/VR can increase those limits and process more information. We can expand people\u2019s spatial ability to help them build a better mental model of the data presented to them.\u201d\u003C\/p\u003E\u003Cp\u003EYang was also recently named a \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/tiktok-photoshop-generative-ai-could-bring-millions-apps-3d-reality\u0022\u003E\u003Cstrong\u003E2025 Google Research Scholar\u003C\/strong\u003E\u003C\/a\u003E as he seeks to build a new artificial intelligence (AI) tool that converts mobile apps into 3D immersive environments.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ETwo assistant professors in Georgia Tech\u2019s School of Interactive Computing \u2014 Danfei Xu and Yalong Yang \u2014 have each won NSF CAREER Awards for their respective research in robotics and AR\/VR initiatives. Xu\u2019s work will develop machine learning models that let robots learn \u201con the job,\u201d adapting from feedback and failure in real-world environments rather than being perfectly preprogrammed. 
Yang\u2019s project aims to build immersive AR\/VR workspaces to support data workers across the full data pipeline, including a collaboration with Google to bring conventional apps into immersive environments.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Two Georgia Tech professors, Danfei Xu and Yalong Yang, have received the prestigious NSF CAREER award for their research in robotics, which focuses on teaching robots to self-improve, and in augmented and virtual reality (AR\/VR), which aims to create imm"}],"uid":"36530","created_gmt":"2025-09-17 18:24:23","changed_gmt":"2025-09-17 18:28:51","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-09-17T00:00:00-04:00","iso_date":"2025-09-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678055":{"id":"678055","type":"image","title":"ICRA-2025_86A9079-Enhanced-NR.jpg","body":null,"created":"1758133475","gmt_created":"2025-09-17 18:24:35","changed":"1758133475","gmt_changed":"2025-09-17 18:24:35","alt":"Danfei Xu","file":{"fid":"262033","name":"ICRA-2025_86A9079-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/09\/17\/ICRA-2025_86A9079-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/09\/17\/ICRA-2025_86A9079-Enhanced-NR.jpg","mime":"image\/jpeg","size":132463,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/09\/17\/ICRA-2025_86A9079-Enhanced-NR.jpg?itok=Dt9A0bu8"}}},"media_ids":["678055"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"191934","name":"National Science Foundation (NSF)"},{"id":"7842","name":"NSF CAREER 
Award"},{"id":"188776","name":"go-research"},{"id":"9153","name":"Research Horizons"},{"id":"145251","name":"virtual reality"},{"id":"1597","name":"Augmented Reality"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"684209":{"#nid":"684209","#data":{"type":"news","title":"Atlanta Youth to Design \u2018Future of Paper\u2019 Exhibit at Papermaking Museum","body":[{"value":"\u003Cp\u003EA new educational initiative is set to teach Atlanta high school students how to create electronics, wearable devices, and other technologies that are built on paper and craft materials.\u003C\/p\u003E\u003Cp\u003EWorkshops hosted by the \u003Ca href=\u0022https:\/\/paper.gatech.edu\/visit-0\u0022\u003E\u003Cstrong\u003ERobert C. 
Williams Museum of Papermaking\u003C\/strong\u003E\u003C\/a\u003E and led by Georgia Tech Assistant Professor \u003Ca href=\u0022https:\/\/id.gatech.edu\/people\/hyunjoo-oh\u0022\u003E\u003Cstrong\u003EHyunJoo Oh\u003C\/strong\u003E\u003C\/a\u003E will introduce about 60 students from Atlanta Public Schools to paper-based electronics through hands-on workshops.\u003C\/p\u003E\u003Cp\u003EThe Williams Museum will open an exhibit titled \u201cThe Future of Paper\u201d that displays designs created in the workshop alongside visionary examples of paper-based technologies from Georgia Tech researchers.\u003C\/p\u003E\u003Cp\u003EThe exhibit, funded by the National Science Foundation, is slated to open to the public in 2027.\u003C\/p\u003E\u003Cp\u003EOh is a researcher with joint appointments in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003E\u003Cstrong\u003ESchool of Interactive Computing\u003C\/strong\u003E\u003C\/a\u003E and the \u003Ca href=\u0022https:\/\/id.gatech.edu\/\u0022\u003E\u003Cstrong\u003ESchool of Industrial Design\u003C\/strong\u003E\u003C\/a\u003E. She leads the \u003Ca href=\u0022https:\/\/www.codecraft.group\/\u0022\u003E\u003Cstrong\u003EComputational Design and Craft (CoDe Craft) Group\u003C\/strong\u003E\u003C\/a\u003E at Georgia Tech, where her team integrates everyday craft materials with computing to support creative exploration.\u003C\/p\u003E\u003Cp\u003EOh believes paper could be widely used to support prototyping printed circuit boards (PCBs) as a sustainable alternative to silicon. While silicon is the most prominent material used by technology companies to build computer chips, it isn\u2019t biodegradable, and it can harm the environment and contribute to e-waste.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EPaper, however, provides an eco-friendly platform for printing conductive traces and mounting small electronic components. 
With the expansion of printed electronic tools and techniques, paper and similar materials have become more popular among technologists who develop sensing technologies and wearable devices.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s widely available and accessible,\u201d Oh said. \u201cI can\u2019t think of anything more affordable and approachable that young makers and the broader maker community can use for circuits than paper.\u003C\/p\u003E\u003Cp\u003E\u201cPrinted electronics traditionally required expensive equipment, but with recent innovation in materials science, conductive materials such as conductive pens and paint available in local arts and crafts stores can be used to build circuits on paper. We can also print circuits using a regular office inkjet printer with silver ink.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EShared Vision\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EShortly after arriving at Georgia Tech in 2019, Oh knew she had to develop a project that would let her partner with the Williams Museum.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cI was captivated by the museum\u2019s space and its celebration of paper,\u201d she said. \u201cI wanted a collaboration that would integrate technology in a way that complemented and respected the museum\u2019s existing beauty.\u201d\u003C\/p\u003E\u003Cp\u003EMuseum director Virginia Howell said the project was a perfect match for the museum, which has documented the history of papermaking since it was founded in 1939 by the Massachusetts Institute of Technology. Georgia Tech became the new home of the museum in 2003.\u003C\/p\u003E\u003Cp\u003EWith more than 100,000 objects in its collection \u2014 some dating back as far as 2,000 years ago \u2014 the museum is unique, Howell said. 
Most papermaking museums are located at a historic mill, but the Williams Museum covers the entire history of papermaking.\u003C\/p\u003E\u003Cp\u003EHowell said that before she met Oh, she had been looking for an exhibit that would display the possible future of papermaking.\u003C\/p\u003E\u003Cp\u003E\u201cWe do the past of paper fantastically well, and we do the present of paper well through our changing exhibitions,\u201d Howell said. \u201cThe future of paper is something we haven\u2019t spent a lot of time interpreting.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ECrafting the Future\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EOh and Howell agree that young people will shape that future. Oh said paper is commonly linked to art in the education sphere. As the material\u2019s use in technology increases, however, it can funnel the interests of students toward engineering and computing.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EIncorporating paper and craft materials can invite more students to explore engineering and computing concepts. After all, a circuit board created on paper isn\u2019t so different from one built on a silicon PCB, Oh said.\u003C\/p\u003E\u003Cp\u003E\u201cThis approach can excite the kind of students who usually feel disconnected from electronics and computing,\u201d she said. \u201cIt gives those who only see themselves as creative or artistic a way to enjoy technology and resonate with it.\u003C\/p\u003E\u003Cp\u003E\u201cUsually when I work with young students, especially girls, if I start with something technical, their interest wanes. But when I present those same ideas through art using familiar materials like paper, they become more engaged and confident. That\u2019s when they start to flourish.\u201d\u003C\/p\u003E\u003Cp\u003EOh and Howell will hold three rounds of 10-week workshops for the students \u2014 spring 2026, fall 2026, and spring 2027. 
The best designs from those workshops will be displayed in the exhibit.\u003C\/p\u003E\u003Cp\u003E\u201cThey\u2019ll feel more comfortable with computing and engineering as an introductory experience,\u201d Howell said. \u201cWhen they successfully build on it and realize they did this on a sheet of paper, it\u2019s exciting to think what they\u2019ll do when they get more sophisticated tools and access.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA new educational initiative, funded by the National Science Foundation, will teach Atlanta high school students how to create paper-based electronic devices. The workshops, led by Georgia Tech Assistant Professor HyunJoo Oh, will be hosted at the Robert C. Williams Museum of Papermaking. The workshops will culminate in a public exhibition of their work in 2027.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new Georgia Tech education initiative will teach Atlanta high school students to design paper-based electronics, with their creations to be featured in an exhibit at the Robert C. 
Williams Museum of Papermaking."}],"uid":"36530","created_gmt":"2025-08-27 15:43:18","changed_gmt":"2025-08-28 16:18:26","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-08-27T00:00:00-04:00","iso_date":"2025-08-27T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677819":{"id":"677819","type":"image","title":"Hyunjoo-Oh_86A9064-Enhanced-NR.jpg","body":null,"created":"1756309437","gmt_created":"2025-08-27 15:43:57","changed":"1756309437","gmt_changed":"2025-08-27 15:43:57","alt":"HyunJoo Oh","file":{"fid":"261760","name":"Hyunjoo-Oh_86A9064-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/08\/27\/Hyunjoo-Oh_86A9064-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/08\/27\/Hyunjoo-Oh_86A9064-Enhanced-NR.jpg","mime":"image\/jpeg","size":130876,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/08\/27\/Hyunjoo-Oh_86A9064-Enhanced-NR.jpg?itok=noERIW_h"}}},"media_ids":["677819"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"42941","name":"Art Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"179356","name":"Industrial Design"}],"keywords":[{"id":"194701","name":"go-resarchnews"},{"id":"9153","name":"Research Horizons"},{"id":"138041","name":"Robert C Williams paper making museum"},{"id":"38451","name":"georgia tech school of industrial design"},{"id":"181210","name":"ic-ubicomp-and-wearable"},{"id":"64711","name":"eco-friendly"},{"id":"167355","name":"silicon"},{"id":"7571","name":"PCB"},{"id":"93791","name":"Renewable Bioproducts Institute"},{"id":"191934","name":"National Science Foundation (NSF)"}],"core_research_areas":[{"id":"39451","name":"Electronics and 
Nanotechnology"},{"id":"39471","name":"Materials"},{"id":"39501","name":"People and Technology"},{"id":"39491","name":"Renewable Bioproducts"},{"id":"194566","name":"Sustainable Systems"}],"news_room_topics":[{"id":"71871","name":"Campus and Community"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"684029":{"#nid":"684029","#data":{"type":"news","title":"Youth Look to Transform Communities Through Civic Technologies","body":[{"value":"\u003Cp\u003EYoung people in Atlanta and Boston will be able to lead efforts to improve their communities through new civic technologies supported by Georgia Tech, Northeastern University, and Massachusetts Institute of Technology researchers.\u003C\/p\u003E\u003Cp\u003EWith the help of a $1.25 million grant from the National Science Foundation, the three institutions seek to increase youth input into policymaking and encourage youth-led community organizing.\u003C\/p\u003E\u003Cp\u003EYouth-designed civic technologies are an effective way to engage youth with their communities, said Andrea Parker, an associate professor in Georgia Tech\u2019s School of Interactive Computing.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EExamples of civic technologies are public data initiatives, citizen science projects, public issue reporting platforms, and digital voting platforms. Parker said the perspectives of young people are often neglected in the design of such technologies.\u003C\/p\u003E\u003Cp\u003E\u201cWe don\u2019t know much about what community issues are important to youth because we haven\u2019t asked them,\u201d she said. 
\u201cWhat is their vision for community well-being, and what do they want to address through civic technology?\u201d\u003C\/p\u003E\u003Cp\u003EParker is the lead principal investigator (PI) on the project that will engage youth from low socio-economic communities in Atlanta and Boston. She said the youth will decide what technologies will be created, but they could include a mobile app or a publicly accessible platform.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019re interested in studying how technologies can help youth become more civically engaged in their communities and build social connection, trust, and belonging amongst neighbors,\u201d she said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cYouth in lower-income neighborhoods face increased threats to their mental health. Socially cohesive communities can counteract those barriers and are essential for youth well-being.\u201d\u003C\/p\u003E\u003Cp\u003EParker added that impoverished communities often have less social cohesion compared to wealthier areas. Higher-income neighborhoods often have more access to resources that support social cohesion and civic engagement.\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EBacked by Data\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EBrooke Foucault Welles, co-PI, professor, and interim dean at Northeastern\u2019s College of Media, Arts and Design, said she\u2019s interested in seeing which issues the youths from both Atlanta and Boston will address through their design process. Studying and working with youth across these geographic settings will help the team identify how civic technology can best support youth in varied neighborhood contexts.\u003C\/p\u003E\u003Cp\u003EThe project will also advance data literacy among young people as they collect and study data to support the new technologies. 
Welles said data-centered advocacy increases young people\u2019s chances of being heard by elder community members.\u003C\/p\u003E\u003Cp\u003E\u201cEmpowering young people to use data when they\u2019re making their arguments about what matters to them and to their communities is the point of this project,\u201d she said. \u201cIt makes their arguments more compelling if they can present data to the adult members of their communities about what\u2019s going on.\u201d\u003C\/p\u003E\u003Cp\u003EThe project\u2019s reach could expand beyond Atlanta and Boston.\u003C\/p\u003E\u003Cp\u003EOnce the technologies are designed, the researchers will package them and make them publicly available as a toolkit.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EIf successful, the project could drive a movement toward more collective organizing to ensure the youth perspective gets factored into community decision-making.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThey\u2019re a vital part of our communities, and they\u2019re the ones for whom our decisions have the biggest impact,\u201d Welles said. \u201cThese are the times when they\u2019re forming their own civic identities, so engaging them in civic life has long ripple effects. We create more active and thoughtful citizens when we engage young people with civic life.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech, Northeastern University, and MIT are partnering on a $1.25 million National Science Foundation project to help young people in underserved communities design civic technologies that address local challenges. The initiative will work with youth in Atlanta and Boston to create tools such as mobile apps and data platforms that promote civic engagement and community improvement. 
The project centers youth voices in the design process to empower them to take an active role in shaping their communities.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Through a $1.25 million NSF grant, Georgia Tech, Northeastern University, and MIT are empowering youth from underserved Atlanta and Boston communities to lead community transformation and bolster civic engagement."}],"uid":"36530","created_gmt":"2025-08-21 12:12:57","changed_gmt":"2025-08-21 12:18:53","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-08-20T00:00:00-04:00","iso_date":"2025-08-20T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677759":{"id":"677759","type":"image","title":"Andrea-Parker_86A1007.jpg","body":null,"created":"1755778471","gmt_created":"2025-08-21 12:14:31","changed":"1755778471","gmt_changed":"2025-08-21 12:14:31","alt":"Andrea Parker","file":{"fid":"261694","name":"Andrea-Parker_86A1007.jpg","image_path":"\/sites\/default\/files\/2025\/08\/21\/Andrea-Parker_86A1007.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/08\/21\/Andrea-Parker_86A1007.jpg","mime":"image\/jpeg","size":90186,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/08\/21\/Andrea-Parker_86A1007.jpg?itok=SAk_7gbr"}}},"media_ids":["677759"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"42901","name":"Community"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"40351","name":"civic engagement"},{"id":"175125","name":"civic tech"},{"id":"75261","name":"Youth"},{"id":"188933","name":"Atlanta community."},{"id":"194701","name":"go-resarchnews"},{"id":"9153","name":"Research 
Horizons"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"683581":{"#nid":"683581","#data":{"type":"news","title":"From TikTok to Photoshop: Generative AI Could Bring Millions of Apps Into 3D Reality","body":[{"value":"\u003Cp\u003EThe idea of people experiencing their favorite mobile apps as immersive 3D environments took a step closer to reality with a new Google-funded research initiative at Georgia Tech.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EA new approach proposed by Tech researcher Yalong Yang uses generative artificial intelligence (GenAI) technologies to convert almost any mobile or web-based app into a 3D environment.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThat includes application software programs from Microsoft and Adobe as well as any social media (TikTok), entertainment (Spotify), banking (PayPal), or food service app (Uber Eats) and everything in between.\u003C\/p\u003E\u003Cp\u003EYang aims to make the 3D environments compatible with augmented and virtual reality (AR\/VR) headsets and smart glasses. He believes his research could be a breakthrough in spatial computing and change how humans interact with their favorite apps and computer systems in general.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019ll be able to turn around and see things we want, and we can grab them and put them together,\u201d said Yang, an assistant professor in the School of Interactive Computing. \u201cWe\u2019ll no longer use a mouse to scroll or the keyboard to type, but we can do more things like physical navigation.\u201d\u003C\/p\u003E\u003Cp\u003EYang\u2019s proposal recently earned him recognition as a 2025 Google Research Scholar. 
Along with converting popular social apps, his platform will be able to instantly render Photoshop, MS Office, and other workplace applications in 3D for AR\/VR devices.\u003C\/p\u003E\u003Cp\u003E\u201cWe have so many applications installed in our machines to complete all the various types of work we do,\u201d he said. \u201cWe use Photoshop for photo editing, Premiere Pro for video editing, Word for writing documents. We want to create an AR\/VR ecosystem that has all these things available in one interface with all apps working cohesively to support multitasking.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EFilling the Gap With AI\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EJust as Google\u2019s Veo and OpenAI\u2019s Sora use generative AI to create video clips, Yang believes it can be used to create interactive, immersive environments for any Android or Apple app.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cA critical gap in AR\/VR is that we do not have all those existing applications, and redesigning all those apps will take forever,\u201d he said. \u201cIt\u2019s urgent that we have a complete ecosystem in VR to enable us to do the work we need to do. 
Instead of recreating everything from scratch, we need a way to convert these applications into immersive formats.\u201d\u003C\/p\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EThe Google Play Store boasts 3.5 million apps for Android devices, while the Apple App Store includes 1.8 million apps for iOS users.\u003C\/p\u003E\u003Cp\u003EMeanwhile, there are fewer than 10,000 apps available on the latest Meta Quest 3 headset, leaving a gap of millions of apps that will need 3D conversion.\u003C\/p\u003E\u003Cp\u003E\u201cWe envision a one-click app, and the (Android Package Kit) file output will be a Meta APK that you can install on your Meta Quest 3,\u201d he said.\u003C\/p\u003E\u003Cp\u003EYang said major tech companies like Apple have the resources to redesign their apps into 3D formats. However, small- to mid-sized companies that have created apps either do not have that ability or would take years to do so.\u003C\/p\u003E\u003Cp\u003EThat\u2019s where generative AI can help. Yang plans to use it to convert source code from web-based and mobile apps into WebXR.\u003C\/p\u003E\u003Cp\u003EWebXR is a set of application programming interfaces (APIs) that enables developers to create AR\/VR experiences within web browsers.\u003C\/p\u003E\u003Cp\u003E\u201cWe start with web-based content,\u201d he said. \u201cA lot of things are already based on the web, so we want to convert that user interface into WebXR.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EBuilding New Worlds\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe process for converting mobile apps would be similar.\u003C\/p\u003E\u003Cp\u003E\u201cAndroid uses an XML description file to define its user-interface (UI) elements. It\u2019s very much like HTML on a web page. We believe we can use that as our input and map the elements to their desired location in a 3D environment. 
AI is great at translating one language to another \u2014 JavaScript to C-sharp, for example \u2014 so that can help us in this process.\u201d\u003C\/p\u003E\u003Cp\u003EIf generative AI can create environments, the next step would be to create a seamless user experience.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIn a normal desktop or mobile application, we can only see one thing at a time, and it\u2019s the same for a lot of VR headsets with one application occupying everything. To live in a multi-task environment, we can\u2019t just focus on one thing because we need to keep switching our tasks, so how do we break all the elements down and let them float around and create a spatial view of them surrounding the user?\u201d\u003C\/p\u003E\u003Cp\u003EAlong with Assistant Professor Cindy Xiong, Yang is one of two researchers in the School of IC to be named a 2025 Google Research Scholar.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EFour researchers from the College of Computing have received the award. The other two are Ryan Shandler from the School of Cybersecurity and Privacy and Victor Fung from the School of Computational Science and Engineering.\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA new Google-funded research project at Georgia Tech, led by Assistant Professor Yalong Yang, is using generative AI to convert existing mobile and web apps into 3D environments. This initiative aims to bridge the \u0022critical gap\u0022 in AR\/VR ecosystems by allowing millions of apps to be adapted for headsets without a lengthy redesign process.
The goal is to create a seamless, multitasking environment where all apps can work cohesively in a single interface, transitioning from traditional mouse and keyboard interactions to physical navigation.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new Google-funded research project at Georgia Tech is using generative AI to convert millions of existing mobile and web apps into 3D experiences for augmented and virtual reality."}],"uid":"36530","created_gmt":"2025-08-06 14:17:28","changed_gmt":"2025-08-06 14:23:34","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-08-06T00:00:00-04:00","iso_date":"2025-08-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677592":{"id":"677592","type":"image","title":"AdobeStock_628967696_Editorial_Use_Only.jpeg","body":null,"created":"1754489856","gmt_created":"2025-08-06 14:17:36","changed":"1754489856","gmt_changed":"2025-08-06 14:17:36","alt":"apps","file":{"fid":"261505","name":"AdobeStock_628967696_Editorial_Use_Only.jpeg","image_path":"\/sites\/default\/files\/2025\/08\/06\/AdobeStock_628967696_Editorial_Use_Only.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/08\/06\/AdobeStock_628967696_Editorial_Use_Only.jpeg","mime":"image\/jpeg","size":113784,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/08\/06\/AdobeStock_628967696_Editorial_Use_Only.jpeg?itok=11V_kbBq"}}},"media_ids":["677592"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"135","name":"Research"}],"keywords":[{"id":"194701","name":"go-resarchnews"},{"id":"192863","name":"go-ai"},{"id":"9153","name":"Research Horizons"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"192390","name":"generative 
AI"},{"id":"1597","name":"Augmented Reality"},{"id":"145251","name":"virtual reality"},{"id":"34741","name":"mobile app"},{"id":"167543","name":"social media"},{"id":"190091","name":"Google AI"},{"id":"184554","name":"Google Research Award"},{"id":"172013","name":"Faculty Awards and Honors"},{"id":"77571","name":"3D"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"683240":{"#nid":"683240","#data":{"type":"news","title":"New Dataset Makes Health Chatbots Like Google\u0027s MedGemma More Mindful of African Contexts","body":[{"value":"\u003Cp\u003EA groundbreaking new medical dataset is poised to revolutionize healthcare in Africa by improving chatbots\u2019 understanding of the continent\u2019s most pressing medical issues and increasing their awareness of accessible treatment options.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/afrimedqa.com\/\u0022\u003E\u003Cstrong\u003EAfriMed-QA\u003C\/strong\u003E\u003C\/a\u003E, developed by researchers from Georgia Tech and Google, could reduce the burden on African healthcare systems.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe researchers said people in need of medical care file into overcrowded clinics and hospitals and face excruciatingly long waits with no guarantee of admission or quality treatment. There aren\u2019t enough trained healthcare professionals available to meet the demand.\u003C\/p\u003E\u003Cp\u003ESome healthcare question-answer chatbots have been introduced to treat those in need. 
However, the researchers said there\u2019s no transparent or standardized way to test or verify their effectiveness and safety.\u003C\/p\u003E\u003Cp\u003EThe dataset will enable technologists and researchers to develop more robust and accessible healthcare chatbots tailored to the unique experiences and challenges of Africa.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOne such new tool is Google\u2019s\u0026nbsp;\u003Ca href=\u0022https:\/\/medgemma.org\/\u0022\u003E\u003Cstrong\u003EMedGemma\u003C\/strong\u003E\u003C\/a\u003E, a large-language model (LLM) designed to process medical text and images. AfriMed-QA was used for training and evaluation purposes.\u003C\/p\u003E\u003Cp\u003EAfriMed-QA stands as the most extensive dataset that evaluates LLM capabilities across various facets of African healthcare. It contains 15,000 question-answer pairs culled from over 60 medical schools across 16 countries and covering numerous medical specialties, disease conditions, and geographical challenges.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ETobi Olatunji and Charles Nimo co-developed AfriMed-QA and co-authored a paper about the dataset that will be presented at the\u0026nbsp;\u003Ca href=\u0022https:\/\/2025.aclweb.org\/\u0022\u003E\u003Cstrong\u003EAssociation for Computational Linguistics (ACL)\u003C\/strong\u003E\u003C\/a\u003E conference next week in Vienna.\u003C\/p\u003E\u003Cp\u003EOlatunji is a graduate of Georgia Tech\u2019s\u0026nbsp;\u003Ca href=\u0022https:\/\/omscs.gatech.edu\/\u0022\u003E\u003Cstrong\u003EOnline Master of Science in Computer Science (OMSCS) program\u003C\/strong\u003E\u003C\/a\u003E and holds a Doctor of Medicine from the College of Medicine at the University of Ibadan in Nigeria. Nimo is a Ph.D. 
student in Tech\u2019s School of Interactive Computing, where he is advised by School of IC professors \u003Ca href=\u0022https:\/\/mikeb.inta.gatech.edu\/\u0022\u003E\u003Cstrong\u003EMichael Best\u003C\/strong\u003E\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/www.irfanessa.gatech.edu\/\u0022\u003E\u003Cstrong\u003EIrfan Essa\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EFocus on Africa\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ENimo, Olatunji, and their collaborators created AfriMed-QA as a response to MedQA, a large-scale question-answer dataset that tests the medical proficiency of all major LLMs. That includes Google\u2019s Gemini, OpenAI\u2019s ChatGPT, and Anthropic\u2019s Claude, among others.\u003C\/p\u003E\u003Cp\u003EHowever, because MedQA is built solely from the U.S. Medical Licensing Exam, Nimo said it is not adequate to serve patients in underdeveloped African countries or the Global South at large.\u003C\/p\u003E\u003Cp\u003E\u201cAfriMed-QA has the contextualized and localized understanding of African medical institutions that you don\u2019t get from MedQA,\u201d Nimo said. \u201cThere are specific diseases and local challenges in our dataset that you wouldn\u0027t find in any U.S.-based dataset.\u201d\u003C\/p\u003E\u003Cp\u003EOlatunji said one problem African users may encounter using LLMs trained on MedQA is that they may advise unfeasible treatments or unaffordable prescription drugs.\u003C\/p\u003E\u003Cp\u003E\u201cYou consider the types of drugs, diagnostics, procedures, or therapies that exist in the U.S. that are quite advanced. These treatments are much more accessible, for example, in the U.S. and Europe,\u201d Olatunji said. \u201cBut in Africa, they\u2019re too expensive and many times unavailable. They may cost over $100,000, and many people have no health insurance.
Why recommend such treatments to someone who can\u2019t obtain them?\u201d\u003C\/p\u003E\u003Cp\u003EAnother problem may be that the LLM doesn\u2019t take a medical condition seriously if it isn\u2019t predominant in the U.S.\u003C\/p\u003E\u003Cp\u003E\u201cWe tested many of these models, for example, on how they would manage sickle-cell disease signs and symptoms, and they focused on other \u2018more likely\u2019 causes and did not rank or consider sickle cell high enough as a possible cause,\u201d he said. \u201cThey, for example, don\u2019t consider sickle cell as important as anemia and cancer because sickle cell is less prevalent in the U.S.\u201d\u003C\/p\u003E\u003Cp\u003EIn addition to sickle-cell disease, Olatunji said some of the healthcare issues facing Africa that can be improved through AfriMed-QA include:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EHIV treatment and prevention\u003C\/li\u003E\u003Cli\u003EPoor maternal healthcare\u003C\/li\u003E\u003Cli\u003EWidespread malaria cases\u003C\/li\u003E\u003Cli\u003EPhysician shortage\u003C\/li\u003E\u003Cli\u003EClinician productivity and operational efficiency\u003C\/li\u003E\u003C\/ul\u003E\u003Ch4\u003E\u003Cstrong\u003EGoogle Partnership\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EMercy Asiedu, senior author of the AfriMed-QA paper and research scientist at Google Research, has dedicated her career to improving healthcare in Africa. Her work began when she was a Ph.D. student at Duke University, where she invented the Callascope, a groundbreaking non-invasive tool for gynecological examinations.\u003C\/p\u003E\u003Cp\u003EWith her current focus on democratizing healthcare through artificial intelligence (AI), Asiedu, who is from Ghana, helped create a research consortium to develop the dataset.
The consortium consists of Georgia Tech, Google, Intron, Bio-RAMP Research Labs, the University of Cape Coast, the Federation of African Medical Students Association, and Sisonkebiotik.\u003C\/p\u003E\u003Cp\u003ESisonkebiotik is an organization of researchers that drives healthcare initiatives to advance data science, machine learning, and AI in Africa.\u003C\/p\u003E\u003Cp\u003EOlatunji leads the Bio-RAMP Research Lab, a community of healthcare and AI researchers, and he is the founder and CEO of Intron, which develops natural-language processing technologies for African communities.\u003C\/p\u003E\u003Cp\u003EIn May, Google released MedGemma, which uses both the MedQA and AfriMed-QA datasets to form a more globally accessible healthcare chatbot. MedGemma has several versions, including 4-billion and 27-billion parameter models, which support multimodal inputs that combine images and text.\u003C\/p\u003E\u003Cp\u003E\u201cWe are proud the latest medical-focused LLM from Google, MedGemma, leverages AfriMed-QA and improves performance in African contexts,\u201d Asiedu said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWe started by asking how we could reduce the burden on Africa\u2019s healthcare systems.
If we can get these large-language models to be as good as experts and make them more localized with geo-contextualization, then there\u2019s the potential to task-shift to that.\u201d\u003C\/p\u003E\u003Cp\u003EThe project is supported by the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.gatesfoundation.org\/\u0022\u003E\u003Cstrong\u003EGates Foundation\u003C\/strong\u003E\u003C\/a\u003E and\u0026nbsp;\u003Ca href=\u0022https:\/\/www.path.org\/\u0022\u003E\u003Cstrong\u003EPATH\u003C\/strong\u003E\u003C\/a\u003E, a nonprofit that improves healthcare in developing countries.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EResearchers introduced a new dataset aimed at improving health chatbots like Google\u0027s MedGemma by better accounting for cultural, linguistic, and contextual factors specific to African settings.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new dataset, AfriMed-QA, was created by researchers at Georgia Tech and Google to improve health chatbots like Google\u0027s MedGemma, making them more aware of African healthcare realities."}],"uid":"36530","created_gmt":"2025-07-23 15:32:10","changed_gmt":"2025-07-23 16:34:15","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-07-23T00:00:00-04:00","iso_date":"2025-07-23T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677474":{"id":"677474","type":"image","title":"AdobeStock_181202044.jpeg","body":null,"created":"1753284749","gmt_created":"2025-07-23 15:32:29","changed":"1753284749","gmt_changed":"2025-07-23 
15:32:29","alt":"AfriMed-QA","file":{"fid":"261376","name":"AdobeStock_181202044.jpeg","image_path":"\/sites\/default\/files\/2025\/07\/23\/AdobeStock_181202044.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/07\/23\/AdobeStock_181202044.jpeg","mime":"image\/jpeg","size":95803,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/07\/23\/AdobeStock_181202044.jpeg?itok=s52m9aW9"}}},"media_ids":["677474"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"},{"id":"192863","name":"go-ai"},{"id":"193860","name":"Artifical Intelligence"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"194391","name":"AI in Healthcare"},{"id":"184331","name":"access to healthcare"},{"id":"1724","name":"african"},{"id":"169137","name":"chatbot"},{"id":"193556","name":"large language models"},{"id":"190091","name":"Google AI"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"682404":{"#nid":"682404","#data":{"type":"news","title":"Researchers Say Stress \u201cSweet Spot\u201d Can Improve Remote Operators\u0027 Performance","body":[{"value":"\u003Cp\u003EMilitary drone pilots, disaster search and rescue teams, and astronauts stationed on the International Space Station are often required to remotely control robots while maintaining their concentration for hours at a time.\u003C\/p\u003E\u003Cp\u003EGeorgia Tech roboticists are attempting to identify the most stressful periods 
that human teleoperators experience while performing tasks remotely. A novel study provides new insights into determining when a teleoperator needs to operate at a high level of focus and which parts of the task can be delegated to robot automation.\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing Associate Professor \u003Cstrong\u003EMatthew\u003C\/strong\u003E \u003Cstrong\u003EGombolay\u003C\/strong\u003E calls it the \u201csweet spot\u201d of human ingenuity and robotic precision. Gombolay and students from his \u003Ca href=\u0022https:\/\/core-robotics.gatech.edu\/\u0022\u003E\u003Cstrong\u003ECORE Robotics Lab\u003C\/strong\u003E\u003C\/a\u003E conducted the study, which measures stress and workload on human teleoperators.\u003C\/p\u003E\u003Cp\u003EGombolay said the findings can inform military officials on how to strategically implement task automation and maximize human teleoperator performance.\u003C\/p\u003E\u003Cp\u003EHumans continue to hand over more tasks to robots to perform, but Gombolay said that some functions will still require human input and oversight for the foreseeable future.\u003C\/p\u003E\u003Cp\u003ESpecific applications, such as space exploration, commercial and military aviation, disaster relief, and search and rescue, pose substantial safety concerns.
Astronauts stationed on the International Space Station, for example, manually control robots that bring in supplies, move cargo, and make structural repairs.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s brutal from a psychological perspective,\u201d Gombolay said.\u003C\/p\u003E\u003Cp\u003EThe question often asked about automating a task in these fields is, at what point can a robot be trusted more than a human?\u003C\/p\u003E\u003Cp\u003EA recent paper by Gombolay and his current and former students \u2014 \u003Cstrong\u003ESam\u003C\/strong\u003E \u003Cstrong\u003EYi\u003C\/strong\u003E \u003Cstrong\u003ETing\u003C\/strong\u003E, \u003Cstrong\u003EErin\u003C\/strong\u003E \u003Cstrong\u003EHedlund\u003C\/strong\u003E-\u003Cstrong\u003EBotti\u003C\/strong\u003E, and \u003Cstrong\u003EManisha\u003C\/strong\u003E \u003Cstrong\u003ENatarajan\u003C\/strong\u003E \u2014 sheds new light on the debate. The paper was published in the IEEE Robotics and Automation Letters and will be presented at the International Conference on Robotics and Automation in Atlanta.\u003C\/p\u003E\u003Cp\u003EThe NASA-funded study can identify which aspects of tedious, time-consuming tasks can be automated and which require human supervision. If roboticists can pinpoint the elements of a task that cause the least stress, they can automate these components and enable humans to oversee the more challenging aspects.\u003C\/p\u003E\u003Cp\u003E\u201cIf we\u2019re talking about repetitive tasks, robots do better with that, so if you can automate it, you should,\u201d said Ting, a former grad student and lead author of the paper. \u201cI don\u2019t think humans enjoy doing repetitive tasks. 
We can move toward a better future with automation.\u201d\u003C\/p\u003E\u003Cp\u003EMilitary officials, for example, could measure the stress of remote drone pilots and know which times during a pilot\u2019s shift require the highest level of attention.\u003C\/p\u003E\u003Cp\u003E\u201cWe can get a sense of how stressed you are and create models of how divided your attention is and the performance rate of the tasks you\u2019re doing,\u201d Gombolay said.\u003C\/p\u003E\u003Cp\u003E\u201cIt can be a low-stress or high-stress situation depending on the stakes and what\u2019s going on with you personally. Are you well-caffeinated? Well-rested? Is there stress from home you\u2019re bringing with you to the workplace? The goal is to predict how good your task performance will be. If it indicates it might be poor, we may need to outsource work to other people or create a safe space for the operator to destress.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EThe Stress Test\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EFor their study, the researchers cut a small river-shaped path into a medium-density fiberboard. The exercise required the 24 participants to use a remote robotic arm to navigate through the path from one end to the other without touching the edges.\u003C\/p\u003E\u003Cp\u003EThe experiment grew more challenging as new stress conditions and workload requirements were introduced. The changing conditions required the test participants to multitask to complete the assignment.\u003C\/p\u003E\u003Cp\u003EGombolay said the study supports the Yerkes-Dodson Law, which states that moderate levels of stress increase human performance.\u003C\/p\u003E\u003Cp\u003EThe experiment showed that operators felt overwhelmed and performed poorly when multitasking was introduced. 
Too much stress led to poor performance, but a moderate amount of stress induced more engagement and enhanced teleoperator focus.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ETing said finding that ideal stress zone can lead to a higher performance rating.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cYou would think the more stressed you are, the more your performance decreases,\u201d Ting said. \u201cMost people didn\u2019t react that way. As stress increased, performance increased, but when you increased workload and gave them more to do, that\u2019s when you started seeing deteriorating performance.\u201d\u003C\/p\u003E\u003Cp\u003EGombolay said no stress can be just as detrimental as too much stress. Performing a task without stress tends to cause teleoperators to become disinterested, especially if it is repetitive and time-consuming.\u003C\/p\u003E\u003Cp\u003E\u201cNo stress led to complacency,\u201d Gombolay said. \u201cThey weren\u2019t as engaged in completing the task.\u003C\/p\u003E\u003Cp\u003E\u201cIf your excitement is too low, you get so bored you can\u2019t muster the cognitive energy to reason about robot operation problems.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EThe Human Factor\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ERoboticists have made significant leaps in recent years to remove teleoperators from the equation. Still, Gombolay said it\u2019s too early to tell whether robots can be trusted with any task that a human can perform.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019re a long way from full autonomy,\u201d he said. \u201cThere\u2019s a lot that robots still can\u2019t do without a human operator. Search and rescue operations, if a building collapses, we don\u2019t have much training data for robots to go through rubble by themselves to rescue people. 
There are ethical needs for humans to be able to supervise or take direct control of robots.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EResearchers at Georgia Tech are exploring the relationship between stress levels and the performance of remote robot operators. They found a moderate level of stress can enhance performance and keep operators engaged and focused.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers say there\u0027s a \u0022sweet spot\u0022 of stress that can enhance performance of remote robot operators such as drone pilots and astronauts."}],"uid":"36530","created_gmt":"2025-05-15 13:08:48","changed_gmt":"2025-07-15 15:05:39","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-05-13T00:00:00-04:00","iso_date":"2025-05-13T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"147","name":"Military Technology"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"},{"id":"8862","name":"Student Research"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"683022":{"#nid":"683022","#data":{"type":"news","title":"Two Former Marines Secure Funding for Research \u0027That Improves Lives\u0027","body":[{"value":"\u003Cp\u003EGilberto Moreno and Eric Santacruz once supported military units on the frontlines
of combat. Now they assist Georgia Tech faculty who work at the forefront of research.\u003C\/p\u003E\u003Cp\u003EAs Marines, Moreno and Santacruz cultivated expertise in precision and mission-critical support for on-the-ground forces. That experience helps them streamline the administrative process of the School of Interactive Computing as they secure research grants that improve people\u2019s lives.\u003C\/p\u003E\u003Cp\u003EThe two work as faculty support coordinators in the School of IC. They first met in the Marines in 2019 while assigned to the Personnel Retrieval and Processing Company of the 4th Marine Logistics Group in Smyrna, Ga.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EMoreno is still in the Navy reserves and holds the rank of petty officer. Santacruz held the rank of sergeant and was the administration chief when Moreno joined the company. He was discharged in 2022.\u003C\/p\u003E\u003Cp\u003EThe Personnel Retrieval and Processing Company is responsible for the recovery, processing, and preparation of the bodies of fallen service members. The unit, which has detachments domestically and overseas, handles the mortuary affairs, documentation, transportation, and the processing of remains and personal effects.\u003C\/p\u003E\u003Cp\u003EMoreno and Santacruz were responsible for coordinating travel and deployments, as well as processing legal and medical documents.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EBefore Smyrna, they gained administrative experience working in foreign nations and conflict zones.\u003C\/p\u003E\u003Cp\u003EMoreno joined the Marines out of high school in 2010. After a stint at Marine Corps Air Station in Jacksonville, N.C., he was assigned to the administrative staff of the U.S. Embassy in Abu Dhabi, United Arab Emirates. 
He then transferred to Camp Pendleton in California before being assigned to Combat Logistics Regiment 27, 2nd Marine Logistics Group.\u003C\/p\u003E\u003Cp\u003ESantacruz enlisted in 2014 and was also assigned to Combat Logistics Regiment 27. In 2016, he deployed on a six-month tour in Djibouti, where he supported combat operations and civilian evacuation efforts in nearby conflict zones.\u003C\/p\u003E\u003Cp\u003EIn 2021, Moreno decided to join the reserves and pursue a professional career in administration. He immediately received a call back after submitting his application to Georgia Tech.\u003C\/p\u003E\u003Cp\u003ESince they still lived in Atlanta, Moreno and Santacruz kept in touch with each other. When Moreno heard Santacruz had left the Marines, he called him and encouraged him to apply to Georgia Tech.\u003C\/p\u003E\u003Cp\u003E\u201cWe still keep up with other friends who were stationed with us,\u201d Moreno said. \u201cThe brotherhood doesn\u2019t end in the service.\u201d\u003C\/p\u003E\u003Cp\u003EAs faculty support coordinators, they process all the necessary paperwork for grant applications to government organizations that fund research, such as the National Science Foundation (NSF). They also coordinate travel for faculty and students to various conferences and events.\u003C\/p\u003E\u003Cp\u003EMoreno and Santacruz said they enjoyed working behind the scenes in the Marines knowing everything they did was critical to the success of the units they supported.\u003C\/p\u003E\u003Cp\u003EThey brought that mission-first mindset with them to Georgia Tech.\u003C\/p\u003E\u003Cp\u003E\u201cThe most rewarding thing is being able to see the fruits of our work,\u201d Santacruz said. \u201cWhen Dean (Vivek) Sarkar sends emails congratulating students and faculty, we see those names, and we\u2019re the ones who got that spend authorization for them.
You see the stuff you\u2019re working for and you know it\u2019s changing something either at Tech or even globally.\u201d\u003C\/p\u003E\u003Cp\u003EMoreno said Georgia Tech encourages work-life balance, and the School of Interactive Computing staff supports him when he\u2019s required to fulfill his duties in the reserves. He left the School for seven months on active-duty orders over 2023 and 2024 at the Navy Reserve Center in Marietta.\u003C\/p\u003E\u003Cp\u003EHe said he never had to worry about his job at Tech while he was gone.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cI love that Georgia Tech gives me the ability to balance both,\u201d he said.\u003C\/p\u003E\u003Cp\u003EHe also said he enjoys taking on challenges that arise during the day.\u003C\/p\u003E\u003Cp\u003E\u201cWe always joke that every day is different here in Interactive Computing,\u201d Moreno said. \u201cThere\u2019s always a different challenge, a different scenario.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s more flexibility here. There are a lot of ways to get something done. You can have conversations about different ideas.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGilberto Moreno and Eric Santacruz apply their expertise in streamlining complex processes for military units into securing research grants for School of Interactive Computing faculty. They both enjoy working behind the scenes and value the work-life balance that Georgia Tech offers.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Former U.S. 
Marines Gilberto\u202fMoreno and Eric\u202fSantacruz secure funding for School of Interactive Computing research that enhances people\u2019s lives."}],"uid":"36530","created_gmt":"2025-07-07 14:09:38","changed_gmt":"2025-07-07 14:26:51","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-07-07T00:00:00-04:00","iso_date":"2025-07-07T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677343":{"id":"677343","type":"image","title":"Eric-Santacruz---Gilberto-Moreno_86A9226-Enhanced-NR.jpg","body":null,"created":"1751897449","gmt_created":"2025-07-07 14:10:49","changed":"1751897449","gmt_changed":"2025-07-07 14:10:49","alt":"IC Staff","file":{"fid":"261235","name":"Eric-Santacruz---Gilberto-Moreno_86A9226-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/07\/07\/Eric-Santacruz---Gilberto-Moreno_86A9226-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/07\/07\/Eric-Santacruz---Gilberto-Moreno_86A9226-Enhanced-NR.jpg","mime":"image\/jpeg","size":143991,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/07\/07\/Eric-Santacruz---Gilberto-Moreno_86A9226-Enhanced-NR.jpg?itok=wR2TB66x"}}},"media_ids":["677343"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"135","name":"Research"},{"id":"194612","name":"Workforce Development"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"},{"id":"191071","name":"Employee 
Experience"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"682890":{"#nid":"682890","#data":{"type":"news","title":"Tech Researchers Tabbed to Build AI Systems for Medical Robots in South Korea","body":[{"value":"\u003Cp\u003EOverwhelmed doctors and nurses struggling to provide adequate patient care in South Korea are getting support from Georgia Tech and Korean-based researchers through an AI-powered robotic medical assistant.\u003C\/p\u003E\u003Cp\u003ETop South Korean research institutes have enlisted Georgia Tech researchers \u003Cstrong\u003ESehoon\u003C\/strong\u003E \u003Cstrong\u003EHa\u003C\/strong\u003E and \u003Cstrong\u003EJennifer G.\u003C\/strong\u003E \u003Cstrong\u003EKim\u003C\/strong\u003E to develop artificial intelligence (AI) to help the humanoid assistant navigate hospitals and interact with doctors, nurses, and patients.\u003C\/p\u003E\u003Cp\u003EHa and Kim will partner with Neuromeka, a South Korean robotics company, on a five-year, 10 billion won (about $7.2 million US) grant from the South Korean government. Georgia Tech will receive about $1.8 million of the grant.\u003C\/p\u003E\u003Cp\u003EHa and Kim, assistant professors in the School of Interactive Computing, will lead Tech\u2019s efforts and also work with researchers from the Korea Advanced Institute of Science and Technology and the Electronics and Telecommunications Research Institute.\u003C\/p\u003E\u003Cp\u003ENeuromeka has built industrial robots since its founding in 2013 and recently decided to expand into humanoid service robots.\u003C\/p\u003E\u003Cp\u003ELee, the group leader of the humanoid medical assistant project, said he fielded partnership requests from many academic researchers. 
Ha and Kim stood out as an ideal match because of their robotics, AI, and human-computer interaction expertise.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EFor Ha, the project is an opportunity to test navigation and control algorithms he\u2019s developed through research that earned him the National Science Foundation CAREER Award. Ha combines computer simulation and real-world training data to make robots more deployable in high-stress, chaotic environments.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cDr. Ha has everything we want to put into our system, including his navigation policies,\u201d Lee said. \u201cHe works with robots and AI, and there weren\u2019t many candidates in that space. We needed a collaborator who can create the software and has experience running it on robots.\u201d\u003C\/p\u003E\u003Cp\u003EHa said he is already considering how his algorithms could scale beyond hospitals and become a universal means of robot navigation in unstructured real-world environments.\u003C\/p\u003E\u003Cp\u003E\u201cFor now, we\u2019re focusing on a customized navigation model for Korean environments, but there are ways to transfer the data set to different environments, such as the U.S. or European healthcare systems,\u201d Ha said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThe final product can be deployed to other systems and industries. It can help industrial workers at factories, retail stores, any place where workers can get overwhelmed by a high volume of tasks.\u201d\u003C\/p\u003E\u003Cp\u003EKim will focus on making the robot\u2019s design and interaction features more human. She\u2019ll develop a large-language model (LLM) AI system to communicate with patients, nurses, and doctors. She\u2019ll also develop an app that will allow users to input their commands and queries.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThis project is not just about controlling robots, which is why Dr. 
Kim\u2019s expertise in human-computer interaction design through natural language was essential,\u201d Lee said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EKim is interviewing stakeholders from three South Korean hospitals to identify service and care pain points. The issues she\u2019s identified so far relate to doctor-patient communication, a lack of emotional support for patients, and an excessive number of small tasks that consume nurses\u2019 time.\u003C\/p\u003E\u003Cp\u003E\u201cOur goal is to develop this robot in a very human-centered way,\u201d she said. \u201cOne way is to give patients a way to communicate about the quality of their care and how the robot can support their emotional well-being.\u003C\/p\u003E\u003Cp\u003E\u201cWe found that patients often hesitate to ask busy nurses for small things like getting a cup of water. We believe this is an area a robot can support.\u201d\u003C\/p\u003E\u003Cp\u003EThe robot\u2019s hardware will be built in Korea, while Ha and Kim will develop the software in the U.S.\u003C\/p\u003E\u003Cp\u003EJong-hoon Park, CEO of Neuromeka, said in a press release the goal is to have a commercialized product as soon as possible.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThrough this project, we will solve problems that existing collaborative robots could not,\u201d Park said. \u201cWe expect the medical AI humanoid robot technology being developed will contribute to reducing the daily work burden of medical and healthcare workers in the field.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers Sehoon Ha and Jennifer Kim are working with South Korean institutions to create an AI-powered medical assistant robot. 
This five-year project, funded by a $7.2 million grant from the South Korean government, aims to alleviate the workload of healthcare professionals in South Korea by enabling the robot to navigate hospitals and interact with staff and patients.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers are collaborating with South Korean research institutes on a five-year grant to develop an AI-powered humanoid medical assistant to help doctors and nurses in South Korea."}],"uid":"36530","created_gmt":"2025-06-25 19:49:57","changed_gmt":"2025-06-25 19:55:15","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-06-25T00:00:00-04:00","iso_date":"2025-06-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677282":{"id":"677282","type":"image","title":"IMG_4499-copy.jpg","body":"\u003Cp\u003E\u003Cem\u003ESchool of Interactive Computing Assistant Professor Sehoon Ha, Neuromeka researchers Joonho Lee and Yunho Kim, School of IC Assistant Professor Jennifer Kim, and Electronics and Telecommunications Research Institute researcher Dongyeop Kang, are collaborating to develop a medical assistant robot to support doctors and nurses in Korea. 
Photo by Nathan Deen\/College of Computing.\u003C\/em\u003E\u003C\/p\u003E","created":"1750881009","gmt_created":"2025-06-25 19:50:09","changed":"1750881009","gmt_changed":"2025-06-25 19:50:09","alt":"Researchers","file":{"fid":"261166","name":"IMG_4499-copy.jpg","image_path":"\/sites\/default\/files\/2025\/06\/25\/IMG_4499-copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/06\/25\/IMG_4499-copy.jpg","mime":"image\/jpeg","size":126414,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/06\/25\/IMG_4499-copy.jpg?itok=v92OOgVu"}}},"media_ids":["677282"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"9153","name":"Research Horizons"},{"id":"187915","name":"go-researchnews"},{"id":"78681","name":"medical robotics"},{"id":"194391","name":"AI in Healthcare"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"682761":{"#nid":"682761","#data":{"type":"news","title":"Georgia Tech Team Takes Second Place at ICRA Robot Teleoperation Contest","body":[{"value":"\u003Cp\u003EAn algorithmic breakthrough from School of Interactive Computing researchers that\u0026nbsp;\u003Ca 
href=\u0022https:\/\/www.cc.gatech.edu\/news\/new-algorithm-teaches-robots-through-human-perspective\u0022\u003E\u003Cstrong\u003Eearned a Meta partnership\u003C\/strong\u003E\u003C\/a\u003E drew more attention at the IEEE International Conference on Robotics and Automation (ICRA).\u003C\/p\u003E\u003Cp\u003EMeta announced in February its partnership with the labs of professors\u0026nbsp;\u003Ca href=\u0022https:\/\/faculty.cc.gatech.edu\/~danfei\/\u0022\u003E\u003Cstrong\u003EDanfei Xu\u003C\/strong\u003E\u003C\/a\u003E and\u0026nbsp;\u003Ca href=\u0022https:\/\/faculty.cc.gatech.edu\/~judy\/\u0022\u003E\u003Cstrong\u003EJudy Hoffman\u003C\/strong\u003E\u003C\/a\u003E on a novel computer vision-based algorithm called EgoMimic. It enables robots to learn new skills by imitating human tasks from first-person video footage captured by Meta\u2019s Aria smart glasses.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EXu\u2019s\u0026nbsp;\u003Ca href=\u0022https:\/\/rl2.cc.gatech.edu\/\u0022\u003E\u003Cstrong\u003ERobot Learning and Reasoning Lab (RL2)\u003C\/strong\u003E\u003C\/a\u003E displayed EgoMimic in action at ICRA May 19-23 at the World Congress Center in Atlanta.\u003C\/p\u003E\u003Cp\u003ELawrence Zhu, Pranav Kuppili, and Patcharapong \u201cElmo\u201d Aphiwetsa \u2014 students from Xu\u2019s lab \u2014 used EgoMimic to compete in a robot teleoperation contest at ICRA. The team finished second in the event titled What Bimanual Teleoperation and Learning from Demonstration Can Do Today, earning a $10,000 cash prize.\u003C\/p\u003E\u003Cp\u003ETeams were challenged to perform tasks by remotely controlling a robot gripper. 
The robot had to fold a tablecloth, open a vacuum-sealed container, place an object into the container, and then reseal it in succession without any errors.\u003C\/p\u003E\u003Cp\u003ETeams completed the tasks as many times as possible in 30 minutes, earning points for each successful attempt.\u003C\/p\u003E\u003Cp\u003EThe competition also offered different challenge levels that increased the points awarded. Teams could directly operate the robot with a full workstation view and receive one point for each task completion. Or, as the RL2 team chose, teams could opt for the second challenge level.\u003C\/p\u003E\u003Cp\u003EThe second level required an operator to control the task with no view of the workstation except for what was provided through a video feed. The RL2 team completed the task seven times and received double points for the challenge level.\u003C\/p\u003E\u003Cp\u003EThe third challenge level required teams to operate remotely from another location. At this level, teams could earn four times the number of points for each successful task completed. The fourth level challenged teams to deploy an algorithm for task performance and awarded eight points for each completion.\u003C\/p\u003E\u003Cp\u003EUsing two of Meta\u2019s Quest wireless controllers, Zhu controlled the robot under the direction of Aphiwetsa, while Kuppili monitored the coding from his laptop.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s physically difficult to teleoperate for half an hour,\u201d Zhu said. \u201cMy hands were shaking from holding the controllers in the air for that long.\u201d\u003C\/p\u003E\u003Cp\u003EBeing in constant communication with Aphiwetsa helped him stay focused throughout the contest.\u003C\/p\u003E\u003Cp\u003E\u201cI helped him strategize the teleoperation and noticed he could skip some of the steps in the folding,\u201d Aphiwetsa said. 
\u201cThere were many ways to do it, so I just told him what he could fix and how to do it faster.\u201d\u003C\/p\u003E\u003Cp\u003EZhu said he and his team had intended to tackle the fourth challenge level with the EgoMimic algorithm. However, due to unexpected time constraints, they decided to switch to the second level the day before the competition.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cI think we realized the day before the competition training the robot on our model would take a huge amount of time,\u201d Zhu said. \u201cWe decided to go for the teleoperation and started practicing.\u201d\u003C\/p\u003E\u003Cp\u003EHe said the team wants to tackle the highest challenge level and use a training model for next year\u2019s ICRA competition in Vienna, Austria.\u003C\/p\u003E\u003Cp\u003EICRA is the world\u2019s largest robotics conference, and\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/georgia-tech-leads-robotics-world-converges-atlanta-icra-2025\u0022\u003E\u003Cstrong\u003EAtlanta hosted the event\u003C\/strong\u003E\u003C\/a\u003E for the third time in its history, drawing a record-breaking attendance of over 7,000.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EStudents from Georgia Tech\u0027s Robot Learning and Reasoning Lab earned second place and a $10,000 cash prize in a robot teleoperation contest at the 2025 International Conference on Robotics and Automation in Atlanta. The RL2 lab announced a partnership with Meta in February on a novel computer vision-based algorithm called EgoMimic. 
It enables robots to learn new skills by imitating human tasks from first-person video footage captured by Meta\u2019s Aria smart glasses.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A Georgia Tech team earned second place in the ICRA Robot Teleoperation Contest for their EgoMimic algorithm, which allows robots to learn skills by mimicking human tasks from first-person video."}],"uid":"36530","created_gmt":"2025-06-11 15:24:42","changed_gmt":"2025-06-12 11:52:56","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-06-11T00:00:00-04:00","iso_date":"2025-06-11T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677223":{"id":"677223","type":"image","title":"IMG_4291-2-copy.jpg","body":null,"created":"1749729142","gmt_created":"2025-06-12 11:52:22","changed":"1749729142","gmt_changed":"2025-06-12 11:52:22","alt":"ICRA","file":{"fid":"261102","name":"IMG_4291-2-copy.jpg","image_path":"\/sites\/default\/files\/2025\/06\/12\/IMG_4291-2-copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/06\/12\/IMG_4291-2-copy.jpg","mime":"image\/jpeg","size":151809,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/06\/12\/IMG_4291-2-copy.jpg?itok=Ag2Xn9Oj"}}},"media_ids":["677223"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"},{"id":"193158","name":"Student Competition Winners (academic, innovation, and research)"}],"keywords":[{"id":"181920","name":"cc-research; ic-ai-ml; ic-robotics"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"192863","name":"go-ai"},{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research 
Horizons"},{"id":"167585","name":"student competition"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"682569":{"#nid":"682569","#data":{"type":"news","title":"Ph.D. Student Fills Violence Data Gaps Through Technology","body":[{"value":"\u003Cp\u003EAfter\u0026nbsp;\u003Ca href=\u0022https:\/\/www.jcforiest.com\/\u0022\u003E\u003Cstrong\u003EJasmine Foriest\u003C\/strong\u003E\u003C\/a\u003E was robbed at gunpoint in her hometown of Columbus, Ga., she took note of how much information about the crime fell through the cracks of the ensuing police investigation.\u003C\/p\u003E\u003Cp\u003EShe said the police officer who interviewed her was dismissive and neglected to write down details that Foriest found significant. The deficient police report was picked up by local media, which led to news stories that inaccurately described the crime and left out important information.\u003C\/p\u003E\u003Cp\u003EForiest said she learned from the incident that incomplete information doesn\u2019t mitigate violence. The perspectives and stories of people who experience violence are essential to reliable data.\u003C\/p\u003E\u003Cp\u003EThe incident guided Foriest as she committed to research that gathers complete and accurate data on multiple types of violence, including violent injury and homicide, intimate partner violence, gender-based violence, and suicide.\u003C\/p\u003E\u003Cp\u003EForiest earned a bachelor\u2019s in health science from Columbus State University. 
She also holds two master\u2019s degrees: one in public health from the University of Southern California, and another in technology leadership and management from Agnes Scott College.\u003C\/p\u003E\u003Cp\u003EIn 2021, Foriest started her Ph.D. in human-centered computing at Georgia Tech to understand how technology influences violence.\u003C\/p\u003E\u003Cp\u003E\u201cI look at all types of violence as an outcome of how technology affects communication,\u201d she said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOne thing she discovered was that even though technology can amplify victims\u2019 voices, it is often used to silence them.\u003C\/p\u003E\u003Cp\u003E\u201cThe same social dynamics that keep people from disclosing their violent experiences to formal reporting sources offline also happen online,\u201d she said.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EBringing the Cardiff Model to the U.S.\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EBefore arriving at Tech, Foriest worked for eight years as an injury prevention coordinator at Grady Memorial Hospital in Atlanta. 
She implemented a trauma recovery center and Atlanta\u2019s first hospital-based violence intervention program.\u003C\/p\u003E\u003Cp\u003EWhile in that position, she worked with the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cardiff.ac.uk\/documents\/2665796-the-cardiff-model-for-violence-prevention\u0022\u003E\u003Cstrong\u003ECardiff Model for Violence Prevention,\u003C\/strong\u003E\u003C\/a\u003E a public health approach to violence prevention developed by researchers at Cardiff University in Wales.\u003C\/p\u003E\u003Cp\u003EThe Cardiff model\u2019s philosophy is that violence prevention is best achieved when the healthcare and law enforcement sectors combine geographical data to determine where violence occurs in a community.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThe Cardiff model taught Wales there was a lot about violence they didn\u2019t know from police data alone,\u201d Foriest said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOne example is that researchers learned an alarming number of hospital patients were brought in from local taverns. This finding prompted policymakers to implement new regulations, such as changing licensing requirements and serving alcohol in toughened glasses or non-glass vessels so they can\u2019t be used as weapons.\u003C\/p\u003E\u003Cp\u003EIn 2011, the city of Cardiff reported a 42% reduction in hospital admissions for violent injuries. It wasn\u2019t long before researchers in the U.S. began importing the Cardiff model. In 2018, it became an official policy of the Centers for Disease Control and Prevention (CDC).\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe U.S. Department of Justice found in 2022 that 58% of violent crimes were not reported to law enforcement. 
Sixteen cities that make up the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.uscardiffnetwork.com\/\u0022\u003E\u003Cstrong\u003ECardiff Model for Violence Prevention National Network\u003C\/strong\u003E\u003C\/a\u003E are now gathering and mapping patient-reported violent injury data from hospitals to fill that data gap.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAtlanta is one of the cities in that network, and Foriest has been an on-the-ground researcher collecting that data. Her work with the Cardiff model seamlessly integrated into her Ph.D. research as she sought ways to turn technology into a safe avenue of violence disclosure.\u003C\/p\u003E\u003Cp\u003EWorking with Alex Godwin, a former Ph.D. student at Georgia Tech who is now an assistant professor at American University, she helped develop a user interface and mapping algorithm. The tool allows hospital patients who are violence victims to identify the location of the violent incident they experienced.\u003C\/p\u003E\u003Cp\u003EForiest said, \u201cAround the Covid-19 pandemic, we had challenges getting patients screened, and we thought we should explore different options.\u003C\/p\u003E\u003Cp\u003E\u201cOur interface allows patients to tap down to the degree they\u2019re comfortable on the geographic location where they were injured.\u003C\/p\u003E\u003Cp\u003E\u201cIt improved our ability to map data tremendously and decreased some of the risks patients face when disclosing violence.\u201d\u003C\/p\u003E\u003Cp\u003EForiest and Godwin\u0027s paper on the development of the interface tool earned an honorable mention for best paper at the 2025 Conference on Human Factors in Computing Systems (CHI) in Yokohama, Japan.\u003C\/p\u003E\u003Cp\u003EForiest also co-authored an award-winning paper at the 2024 Conference on Computer-Supported Cooperative Work (CSCW). 
That paper examined how social media often silences violence victims.\u003C\/p\u003E\u003Cp\u003EForiest is also a fellow for Data Science and Innovation at the CDC, where she continues her work on the Cardiff model. In that role, she also examines how news media coverage of suicides can reinforce stigmas about the causes of suicide.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EThriving at Tech\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EForiest is entering her fifth year as a Ph.D. student, but before she came to Tech, she had no computing experience. She applied to numerous Ph.D. programs but was eventually persuaded that technology could complement her public health expertise and her goal of preventing violence.\u003C\/p\u003E\u003Cp\u003E\u201cTech was the only place where I could gain a new skill set while doing the things that I wanted to do in research,\u201d she said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThat felt like the best fit for me, where I would get the most out of my training. I was encouraged by faculty and my peers to recognize that my perspective is valuable, and I can speak from that place and bridge my knowledge with HCI concepts.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EInspired by her own experience with a flawed police investigation, Jasmine Foriest is adapting the Cardiff Model\u2014a public health approach developed in Wales\u2014to the U.S. 
Her work emphasizes the importance of capturing diverse perspectives, particularly from marginalized communities, to create more accurate and actionable data on various forms of violence, including intimate partner violence and suicide.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Jasmine Foriest is using technology to gather complete and accurate data on violence, addressing gaps in traditional reporting methods and developing tools to help victims disclose information safely."}],"uid":"36530","created_gmt":"2025-05-28 17:36:42","changed_gmt":"2025-05-28 17:41:19","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-05-28T00:00:00-04:00","iso_date":"2025-05-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677149":{"id":"677149","type":"image","title":"Summit-on-Responsible-Computing--AI--and-Society_86A9671-Enhanced-NR.jpg","body":null,"created":"1748453824","gmt_created":"2025-05-28 17:37:04","changed":"1748453824","gmt_changed":"2025-05-28 17:37:04","alt":"Jasmine Foriest","file":{"fid":"261017","name":"Summit-on-Responsible-Computing--AI--and-Society_86A9671-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/05\/28\/Summit-on-Responsible-Computing--AI--and-Society_86A9671-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/05\/28\/Summit-on-Responsible-Computing--AI--and-Society_86A9671-Enhanced-NR.jpg","mime":"image\/jpeg","size":85875,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/05\/28\/Summit-on-Responsible-Computing--AI--and-Society_86A9671-Enhanced-NR.jpg?itok=bNCFsdmy"}}},"media_ids":["677149"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology 
and Security"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"9153","name":"Research Horizons"},{"id":"187915","name":"go-researchnews"},{"id":"173212","name":"Human-Computer Interaction"},{"id":"1814","name":"violence"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"682263":{"#nid":"682263","#data":{"type":"news","title":"AR\/VR Researchers Bring Immersive Experience to News Stories","body":[{"value":"\u003Cp\u003EIt hasn\u2019t been long since consumers put down the newspaper and picked up their phones to get their news.\u003C\/p\u003E\u003Cp\u003EIt may not be long before augmented reality\/virtual reality (AR\/VR) headsets cause them to keep their phones in their pockets when they want to read The New York Times or The Washington Post.\u003C\/p\u003E\u003Cp\u003EData visualization and AR\/VR researchers at Georgia Tech are exploring how users can interact with news stories through AR\/VR headsets and are determining which stories are best suited for virtual presentation.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETao Lu\u003C\/strong\u003E, a Ph.D. 
student at the School of Interactive Computing, Assistant Professor \u003Cstrong\u003EYalong\u003C\/strong\u003E \u003Cstrong\u003EYang\u003C\/strong\u003E, and Associate Professor \u003Cstrong\u003EAlex\u003C\/strong\u003E \u003Cstrong\u003EEndert\u003C\/strong\u003E led a recent study that they say is among the first to explore user preference in virtually designed news stories.\u003C\/p\u003E\u003Cp\u003EThe researchers will present a paper they authored based on the study at the 2025 Conference on Human Factors in Computing Systems this week in Yokohama, Japan.\u003C\/p\u003E\u003Cp\u003EDigital platforms have elevated explanatory journalism, which provides greater context for a subject through data, images, and in-depth analysis. These platforms also allow stories to be more visually appealing through graphic design and animation.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ELu said AR\/VR can further elevate explanatory journalism through 3D, interactive spatial environments. He added that media organizations should think about how the stories they produce will appear in AR\/VR as much as they think about how they will appear on mobile devices.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019re giving users another option to experience the story and for designers and developers to show their stories in another modality,\u201d Lu said.\u003C\/p\u003E\u003Cp\u003E\u201cA screen-based story on a smartphone is easy to use and cost-effective. However, some stories are better presented in AR\/VR, which will become more popular as technology gets cheaper. 
AR\/VR can provide 3D spatial information that would be hard to understand on a phone or desktop screen.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EActive or Passive Interactions?\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EUsing Meta\u2019s Oculus Quest 3, the researchers and their collaborators created four immersive virtual reality simulations from web-based news stories produced by The New York Times:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EWhy opening windows was key to classroom ventilation during the Covid-19 pandemic\u003C\/li\u003E\u003Cli\u003EThe destruction of Black homes and businesses in the Tulsa Race Massacre\u003C\/li\u003E\u003Cli\u003EHow climate change could create dramatic dangers in the Atlantic Ocean\u003C\/li\u003E\u003Cli\u003EHow 9\/11 changed Manhattan\u2019s financial district\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EThe study aimed to determine whether users prefer to be actively or passively immersed in a story, whether from a first-person or third-person point of view, or a combination of these perspectives.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019re in the nascent stages of storytelling in VR,\u201d said Endert, whose research specializes in data visualization. \u201cWe lack the design knowledge of which mode of immersion we should use if we want a certain reaction from the audience. Understanding design is at the crux of our study.\u201d\u003C\/p\u003E\u003Cp\u003EActive immersion gives the user complete control over their experience. The classroom simulation offers a first-person point of view and allows users to teleport from one point in the classroom to another. New information from the story is presented each time the user moves to a new point.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe researchers acknowledged they could design a free-roaming simulation that allows users to walk freely within the classroom. 
However, they restricted that ability for this study due to safety concerns and lab space constraints.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EIn the Tulsa Race Massacre simulation, which uses a passive, first-person point of view, users follow a predefined route along one of Tulsa\u2019s main streets. Information about each building is presented at each step.\u003C\/p\u003E\u003Cp\u003EThe Atlantic Ocean simulation is an active, third-person experience. The user sees a representation of Earth and can select which interaction points to explore to learn new information.\u003C\/p\u003E\u003Cp\u003EThe 9\/11 simulation is a passive third-person experience. Each step includes a narrative paragraph with companion visual elements, and users proceed to the next step through a navigation trigger.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EFinding the Right Balance\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003ELu said that first-person active enhances spatial awareness, while third-person passive improves contextual understanding. Journalists and VR designers must determine which presentation is most effective case by case.\u003C\/p\u003E\u003Cp\u003EYang said the goal should be to balance interests in making those determinations, which might require compromise. Knowing how users prefer to consume news is critical, but journalists still have an editorial responsibility to decide what the public should know and how to present information.\u003C\/p\u003E\u003Cp\u003E\u201cYou have more freedom to explore in an active experience versus a passive experience,\u201d Yang said. \u201cBut if you give them too much freedom, they might stray from your planned narrative and miss important information you think they should know. 
We want to understand how we can balance both ends of this spectrum and what the right level is that we can give people in storytelling.\u201d\u003C\/p\u003E\u003Cp\u003EThe study and others indicate that users retain information better when they feel like they are part of the story. Yang said the technology to make that possible isn\u2019t there yet, but it\u2019s coming along as wearable VR devices become more accessible.\u003C\/p\u003E\u003Cp\u003EThe debate is whether these devices will become people\u0027s preferred technology for consuming content. According to the Pew Research Center, 86% of U.S. adults say they at least sometimes get their news from a smartphone, computer, or tablet.\u003C\/p\u003E\u003Cp\u003E\u201cI believe AR and VR will be mainstream in the future and will replace everything, but I think there\u2019s a transition period,\u201d Yang said. \u201cOlder devices will exist and act as support. It\u2019s an ecosystem.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EPh.D. 
student Tao Lu, Assistant Professor Yalong Yang, and Associate Professor Alex Endert developed VR simulations of four New York Times stories using Meta\u2019s Oculus Quest 3 headset to study user preferences.\u003C\/p\u003E\u003Cp\u003ETheir findings suggest that AR\/VR can offer a more spatially rich and emotionally resonant way to experience complex news topics, potentially reshaping how media organizations design and deliver digital stories.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers are pioneering the use of augmented and virtual reality (AR\/VR) to transform news consumption by creating immersive, interactive 3D environments."}],"uid":"36530","created_gmt":"2025-05-06 18:52:58","changed_gmt":"2025-05-06 18:55:25","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-05-01T00:00:00-04:00","iso_date":"2025-05-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677035":{"id":"677035","type":"image","title":"IMG_3568-copy.jpg","body":"\u003Cp\u003EAssistant Professor Yalong Yang looks over the shoulder of Ph.D. student Tao Lu as they create a simulation of a news story presented in virtual reality. Photo by Nathan Deen (College of Computing)\u003C\/p\u003E","created":"1746557625","gmt_created":"2025-05-06 18:53:45","changed":"1746557625","gmt_changed":"2025-05-06 18:53:45","alt":"Assistant Professor Yalong Yang looks over the shoulder of Ph.D. 
student Tao Lu as they create a simulation of a news story presented in virtual reality.","file":{"fid":"260895","name":"IMG_3568-copy.jpg","image_path":"\/sites\/default\/files\/2025\/05\/06\/IMG_3568-copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/05\/06\/IMG_3568-copy.jpg","mime":"image\/jpeg","size":9753715,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/05\/06\/IMG_3568-copy.jpg?itok=LP_Hv8pB"}}},"media_ids":["677035"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"143","name":"Digital Media and Entertainment"},{"id":"135","name":"Research"}],"keywords":[{"id":"9153","name":"Research Horizons"},{"id":"187915","name":"go-researchnews"},{"id":"1597","name":"Augmented Reality"},{"id":"145251","name":"virtual reality"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"682262":{"#nid":"682262","#data":{"type":"news","title":"Commemoration Platform Lets You Determine How You\u0027re Remembered Online","body":[{"value":"\u003Cp\u003EOn Halloween night in 2022, more than 100,000 people flooded the streets of Seoul, South Korea, to celebrate and participate in the city\u2019s festivities. Thousands funneled into a 14-foot-wide alley in the Itaewon district from multiple directions.\u003C\/p\u003E\u003Cp\u003EThe crowd grew so large that no one could move in the alley, resulting in the deadliest crowd crush in the nation\u2019s history. 
Nearly 160 people were killed, and another 196 were injured.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ESoonho\u003C\/strong\u003E \u003Cstrong\u003EKwon\u003C\/strong\u003E, a first-year human-centered computing Ph.D. student at Georgia Tech, lived within walking distance of the alley when the incident occurred.\u003C\/p\u003E\u003Cp\u003E\u201cIt was tragic,\u201d Kwon said. \u201cIt really makes you think about how life is fragile. Everyone in my community talked about what it would have been like if they were in that alleyway.\u201d\u003C\/p\u003E\u003Cp\u003EMany of the victims were young people \u2014 some of them teens who had no identification on them. Kwon thought about their family members being told their loved ones\u2019 lives had been cut short. He wondered what memories those families would have of the deceased.\u003C\/p\u003E\u003Cp\u003EThe incident inspired Kwon to create a new mobile platform that helps young adults and career professionals create a post-death memorial for their families. The platform, which Kwon and his research collaborators named \u003Cem\u003ETimeless\u003C\/em\u003E, allows users to be remembered how they want to be remembered in the event of their untimely death.\u003C\/p\u003E\u003Cp\u003E\u201cMost death preparation services are for terminally ill patients or aging adults, focusing on will management or funeral planning,\u201d Kwon said. \u201cWe thought such needs may differ for young adults and asked how we could design a system that better caters to their needs.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003ETimeless\u003C\/em\u003E is a photo-based death preparation system that enables users to send a physical package containing pre-curated pictures, voice recordings, and letters to a designated recipient in the event of their passing.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe system syncs with a user\u2019s mobile photo album and creates a list of scanned faces. 
Users can select a face and view all the photos they\u2019ve taken with that person. They can choose which photos they want sent to that person after death and write individual messages for each image.\u003C\/p\u003E\u003Cp\u003EOnce the user\u2019s death has been reported, \u003Cem\u003ETimeless\u003C\/em\u003E sends a package to each selected individual with printed photos, letters, and a QR code or a CD that contains videos or voice recordings.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EBreaking the Ice\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EKwon and his collaborators designed \u003Cem\u003ETimeless\u003C\/em\u003E based on a group study that asked participants to imagine what would happen if they unexpectedly died. The participants were asked what was on their bucket lists, their epitaphs, and what they would wish for if they could make one wish come true.\u003C\/p\u003E\u003Cp\u003E\u201cSurprisingly, people were happy to participate because we framed it in a way that wasn\u2019t gloomy,\u201d Kwon said. \u201cMany shared that reflecting on their death motivated them to actively express their love and be grateful for what they have. Turning something as heavy as death into something positive was a key design implication.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EDigital vs. 
Physical\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EKwon began his research career examining virtual commemoration systems, including Facebook and Instagram commemoration pages, during the Covid-19 pandemic and exploring how technology can meaningfully memorialize the deceased.\u003C\/p\u003E\u003Cp\u003EHe said two aspects distinguish \u003Cem\u003ETimeless\u003C\/em\u003E from other commemoration platforms:\u0026nbsp;\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EThe deceased can decide how and by whom they want to be remembered.\u003C\/li\u003E\u003Cli\u003EThe fusion of digital memorialization with physical memorialization\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003E\u201cLeveraging only the digital side of it can be superficial,\u201d Kwon said. \u201cWe build monuments, statues, and tombstones because the notion of death itself is losing your physical presence. By making it physical, we aimed to connect the discussion on digital legacies to traditional human commemoration forms.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EAI Afterlife\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EKwon also said he is aware of artificial intelligence (AI) afterlife. 
This emerging technology allows people to train an AI agent and produce digital avatars with which family and friends can communicate after they die.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EMeredith\u003C\/strong\u003E \u003Cstrong\u003ERingel\u003C\/strong\u003E \u003Cstrong\u003EMorris\u003C\/strong\u003E, director and principal scientist for human-AI interaction at Google DeepMind, spoke about AI afterlife in October during the Summit on AI, Responsible Computing, and Society hosted by Georgia Tech\u2019s School of Interactive Computing.\u003C\/p\u003E\u003Cp\u003EIn her talk, Morris spoke about the criticism AI afterlife is already facing for causing people to experience extended grief and the inability to move on from losing a loved one.\u003C\/p\u003E\u003Cp\u003EKwon said another drawback is that AI agents are susceptible to hallucinations and could say untrue things about the deceased.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cHow can you say for sure that the representation of AI is me?\u201d he said. \u201cAs researchers, our role is to explore and critically examine how the emergence of such technology may shape society while striving to ensure its development benefits people.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EKwon sees \u003Cem\u003ETimeless\u003C\/em\u003E as a catalyst for meaningful discussions about how a digital legacy curation system may accurately reflect a user\u2019s wishes before death.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHe will present a paper on \u003Cem\u003ETimeless\u003C\/em\u003E\u0027s design process and its implications at the 2025 ACM Conference on Human Factors in Computing Systems (CHI) this week in Yokohama, Japan.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EIn the wake of the 2022 Itaewon crowd crush, Georgia Tech Ph.D. 
student Soonho Kwon created a mobile app called \u0022Timeless\u0022 to help young people control how they are remembered after death.\u003C\/p\u003E\u003Cp\u003EKwon\u2019s goal is to empower users to shape their digital legacies and offer meaningful comfort to those they leave behind.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech Ph.D. student Soonho Kwon has developed a mobile platform that allows users to curate and send personalized photo-based memorial packages\u2014complete with images, voice recordings, and letters\u2014to loved ones after their death, aiming to g"}],"uid":"36530","created_gmt":"2025-05-06 18:35:35","changed_gmt":"2025-05-06 18:42:55","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-04-28T00:00:00-04:00","iso_date":"2025-04-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677034":{"id":"677034","type":"image","title":"IMG_3277_adjusted.jpg","body":"\u003Cp\u003ESoonho Kwon is one of the developers of Timeless, a mobile platform that creates personalized memorial packages\u2014including curated photos, voice recordings, and letters\u2014to be sent to loved ones after their death. 
Photo by Nathan Deen\/College of Computing.\u003C\/p\u003E","created":"1746556558","gmt_created":"2025-05-06 18:35:58","changed":"1746556558","gmt_changed":"2025-05-06 18:35:58","alt":"Soonho Kwon","file":{"fid":"260894","name":"IMG_3277_adjusted.jpg","image_path":"\/sites\/default\/files\/2025\/05\/06\/IMG_3277_adjusted.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/05\/06\/IMG_3277_adjusted.jpg","mime":"image\/jpeg","size":7837532,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/05\/06\/IMG_3277_adjusted.jpg?itok=AWJm17X1"}}},"media_ids":["677034"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"42901","name":"Community"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"194248","name":"International Education"},{"id":"134","name":"Student and Faculty"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"9153","name":"Research Horizons"},{"id":"187915","name":"go-researchnews"},{"id":"173212","name":"Human-Computer Intraction"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"680787":{"#nid":"680787","#data":{"type":"news","title":"New Lab Expanding Healthcare Access Through Novel Sensing Prototypes","body":[{"value":"\u003Cp\u003EA new lab is working to expand access to practical sensing systems. 
These systems could benefit people struggling with addiction and alert people with limited healthcare access to potentially life-threatening medical issues.\u003C\/p\u003E\u003Cp\u003EDevice prototypes like these usually require massive amounts of time and external resources to build, but thanks to the \u003Ca href=\u0022https:\/\/www.uncommonsenselabs.com\/home\u0022\u003E\u003Cstrong\u003EUncommon Sense Lab\u003C\/strong\u003E\u003C\/a\u003E, they can now be conveniently developed on Georgia Tech\u2019s campus.\u003C\/p\u003E\u003Cp\u003EThe lab is housed in Georgia Tech\u2019s School of Interactive Computing and is managed by Assistant Professor \u003Ca href=\u0022https:\/\/www.alexandertadams.com\/\u0022\u003E\u003Cstrong\u003EAlexander Adams\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003E\u201cOur overall goal is to give better access to healthcare,\u201d Adams said. \u201cWe\u2019re always looking at who we\u2019re doing this for, how we\u2019re getting it to them, how it addresses specific needs, and how to make it as financially accessible as possible.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s always a space for high-end, high-precision equipment, but not everyone has access, and people are often afraid to get checked out because of the cost. 
If we can build something that doesn\u2019t necessarily give someone a perfect measurement of a condition, but it can tell them they should go to the doctor, that might be enough to save a life.\u201d\u003C\/p\u003E\u003Cp\u003EThe lab provides resources to interdisciplinary researchers with backgrounds in computing, robotics, mechanical engineering, electrical engineering, and biomedical engineering to develop novel sensing and feedback system prototypes.\u003C\/p\u003E\u003Cp\u003E\u201cWe render physical prototypes that would be difficult to build without a centralized location for these resources,\u201d said Adams, who is affiliated with the \u003Ca href=\u0022https:\/\/research.gatech.edu\/robotics\u0022\u003EInstitute for Robotics and Intelligent Machines\u003C\/a\u003E and the \u003Ca href=\u0022https:\/\/research.gatech.edu\/bio\u0022\u003EParker H. Petit Institute for Bioengineering and Bioscience\u003C\/a\u003E. \u201cWe give students access to the tools and knowledge to build things that would typically seem unreachable.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s nowhere else on campus with this collective that can go end-to-end from mechanical engineering to biomedical engineering to electrical engineering to usability.\u201d\u003C\/p\u003E\u003Cp\u003EExamples of current prototypes being developed in the lab include a device that trains people with post-traumatic stress disorder to breathe in more regular patterns, and another that measures a person\u2019s heart rate when they vape.\u003C\/p\u003E\u003Cp\u003E\u201cWe want to learn more about that behavior through these sensing devices, and then we\u2019ll look at figuring out how we can help people correct their breathing patterns or quit their addiction,\u201d Adams said.\u003C\/p\u003E\u003Cp\u003EThe Uncommon Sense Lab offers high-tech, state-of-the-art machinery, including:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003E3D printers, including fused deposition modeling (FDM) printers for 
multi-material, high-precision prints\u003C\/li\u003E\u003Cli\u003EA laser cutter for producing printed circuit boards (PCBs)\u003C\/li\u003E\u003Cli\u003ESurface mount PCB manufacturing station with soldering tools, paste dispensers, and rework stations\u003C\/li\u003E\u003Cli\u003EOptical work benches for optical system design, including microscopes and fluidics workstations\u003C\/li\u003E\u003Cli\u003EResin materials for casting and molding prosthetics\u003C\/li\u003E\u003Cli\u003EVacuum chambers and pressure chambers\u003C\/li\u003E\u003Cli\u003ESaws, mills, lathes, and other mechanical tools for processing wood and soft metals\u003C\/li\u003E\u003Cli\u003ESaws, grinders, polishers, and other wet tools for glass, stone, and ceramics\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003ESince he started at the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003E\u003Cstrong\u003ESchool of Interactive Computing\u003C\/strong\u003E\u003C\/a\u003E in 2022, Adams has envisioned the lab. The lab space in the Technology Square Research Building in Midtown was thoroughly renovated, including access control, a new ceiling grid, environmental controls, pressurized air, plumbing, and vacuum and air filtration systems.\u003C\/p\u003E\u003Cp\u003E\u201cThis is the result of having built two labs at previous institutions, what I\u2019ve learned about my type of work and my field, and what the most useful things are to handle our diverse projects,\u201d he said.\u003C\/p\u003E\u003Cp\u003E\u201cOne of the reasons I came to Georgia Tech was because they saw the value of being interdisciplinary in a computing world and having a full lab space instead of just an office.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAdams said the lab will accelerate the timelines of current projects for the researchers who use it and create more bandwidth for them to take on more projects.\u003C\/p\u003E\u003Cp\u003E\u201cI want my students to have everything at hand instead of waiting every time we 
need to do something,\u201d he said. \u201cThis space is for someone who might have an idea for a remote diagnostic tool, but they\u2019re wondering how to build it, add computation, and test it. This is the solution for those wondering how they can do that without spending a year finding and organizing access to facilities or ordering various parts.\u201d\u003C\/p\u003E\u003Cp\u003EAdams said the lab is not a public space, but anyone interested in using it can make a written request for access. The work must be part of a collaboration, and faculty must provide funds to use resources. Access is contingent upon passing several safety courses and in-person training.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ESchool of Interactive Computing\u0027s Alexander Adams created the Uncommon Sense Lab and works with students to design, fabricate, and implement new ubiquitous and wearable sensing systems.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"School of Interactive Computing\u0027s Alexander Adams created the Uncommon Sense Lab to design, fabricate, and implement new ubiquitous and wearable sensing systems."}],"uid":"32045","created_gmt":"2025-02-27 23:07:57","changed_gmt":"2025-03-26 01:18:35","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-02-27T00:00:00-05:00","iso_date":"2025-02-27T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"676421":{"id":"676421","type":"image","title":"Assistant Professor Alex Adams (right) created the Uncommon Sense Lab to develop novel sensing systems for health.","body":null,"created":"1740706706","gmt_created":"2025-02-28 01:38:26","changed":"1740706706","gmt_changed":"2025-02-28 01:38:26","alt":"Assistant Professor Alex Adams (right) created the Uncommon Sense Lab to develop novel sensing systems for 
health.","file":{"fid":"260206","name":"The-Uncommon-Sense-Lab_86A7795-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/02\/27\/The-Uncommon-Sense-Lab_86A7795-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/02\/27\/The-Uncommon-Sense-Lab_86A7795-Enhanced-NR.jpg","mime":"image\/jpeg","size":155672,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/02\/27\/The-Uncommon-Sense-Lab_86A7795-Enhanced-NR.jpg?itok=-DV4R9rQ"}},"676422":{"id":"676422","type":"image","title":"Assistant Professor Alex Adams (center) works with students to design, fabricate, and implement new ubiquitous and wearable sensing systems.","body":null,"created":"1740706744","gmt_created":"2025-02-28 01:39:04","changed":"1740706744","gmt_changed":"2025-02-28 01:39:04","alt":"Assistant Professor Alex Adams (center) works with students to design, fabricate, and implement new ubiquitous and wearable sensing systems.","file":{"fid":"260207","name":"The-Uncommon-Sense-Lab_86A7827-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/02\/27\/The-Uncommon-Sense-Lab_86A7827-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/02\/27\/The-Uncommon-Sense-Lab_86A7827-Enhanced-NR.jpg","mime":"image\/jpeg","size":183097,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/02\/27\/The-Uncommon-Sense-Lab_86A7827-Enhanced-NR.jpg?itok=TyPUFDaQ"}}},"media_ids":["676421","676422"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"66442","name":"MS HCI"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"8862","name":"Student Research"}],"keywords":[{"id":"10199","name":"Daily Digest"},{"id":"181991","name":"Georgia Tech News Center"},{"id":"187915","name":"go-researchnews"},{"id":"190095","name":"digital health 
wearables"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBen Snedeker\u003C\/p\u003E\u003Cp\u003EComms. Mgr.\u003C\/p\u003E\u003Cp\u003EGeorgia Tech College of Computing\u003C\/p\u003E\u003Cp\u003Ealbert.snedeker@cc.gatech.edu\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"680585":{"#nid":"680585","#data":{"type":"news","title":"New Algorithm Teaches Robots Through Human Perspective","body":[{"value":"\u003Cp\u003EA new data creation paradigm and algorithmic breakthrough from Georgia Tech has laid the groundwork for humanoid assistive robots to help with laundry, dishwashing, and other household chores. The framework enables these robots to learn new skills by mimicking actions from first-person videos of everyday activities.\u003C\/p\u003E\u003Cp\u003ECurrent training methods limit robots from being produced at the necessary scale to put a robot in every home, said \u003Cstrong\u003ESimar\u003C\/strong\u003E \u003Cstrong\u003EKareer\u003C\/strong\u003E, a Ph.D. student in the School of Interactive Computing.\u003C\/p\u003E\u003Cp\u003E\u201cTraditionally, collecting data for robotics means creating demonstration data,\u201d Kareer said. \u201cYou operate the robot\u2019s joints with a controller to move it and achieve the task you want, and you do this hundreds of times while recording sensor data, then train your models. This is slow and difficult. 
The only way to break that cycle is to detach the data collection from the robot itself.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/youtu.be\/ckGUsdFX9pU?si=7qmGR1D5P_iPAVMt\u0022\u003E\u003Cstrong\u003E[VIDEO: Meta Shares EgoMimic Case Study Video]\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003EOther fields, such as computer vision and natural language processing (NLP), already leverage training data passively culled from the internet to create powerful generative AI and large-language models (LLMs).\u003C\/p\u003E\u003Cp\u003EMany roboticists, however, have shifted toward interventions that allow individual users to teach their robots how to perform tasks. Kareer believes a similar source of passive data can be established to enable practical generalized training that scales the production of humanoid robots.\u003C\/p\u003E\u003Cp\u003EThis is why Kareer collaborated with School of IC Assistant Professor \u003Cstrong\u003EDanfei\u003C\/strong\u003E \u003Cstrong\u003EXu\u003C\/strong\u003E and his \u003Ca href=\u0022https:\/\/rl2.cc.gatech.edu\/\u0022\u003E\u003Cstrong\u003ERobot Learning and Reasoning Lab\u003C\/strong\u003E\u003C\/a\u003E to develop EgoMimic, an algorithmic framework that leverages data from egocentric videos.\u003C\/p\u003E\u003Cp\u003EMeta\u2019s Ego4D dataset inspired Kareer\u2019s project. The benchmark dataset, released in 2023, consists of first-person videos of humans performing daily activities. This open-source data set trains AI models from a first-person human perspective.\u003C\/p\u003E\u003Cp\u003E\u201cWhen I looked at Ego4D, I saw a dataset that\u2019s the same as all the large robot datasets we\u2019re trying to collect, except it\u2019s with humans,\u201d Kareer said. \u201cYou just wear a pair of glasses, and you go do things. It doesn\u2019t need to come from the robot. 
It should come from something more scalable and passively generated, which is us.\u201d\u003C\/p\u003E\u003Cp\u003EKareer acquired a pair of Meta\u2019s Project Aria research glasses, which contain a rich sensor suite and can record video from a first-person perspective through external RGB and SLAM cameras.\u003C\/p\u003E\u003Cp\u003EKareer recorded himself folding a shirt while wearing the glasses and repeated the process. He did the same with other tasks such as placing a toy in a bowl and groceries into a bag. Then, he constructed a humanoid robot with pincers for hands and attached the glasses to the top to mimic a first-person viewpoint.\u003C\/p\u003E\u003Cp\u003EThe robot performed each task repeatedly for two hours. Kareer said building a traditional training algorithm would take days of teleoperating and recording robot sensory data. For his project, he only needed to gather a baseline of sensory data to ensure performance improvement.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EKareer bridged the gap between the two training sets with the EgoMimic algorithm. The robot\u2019s task performance rating increased by as much as 400% among various tasks with just 90 minutes of recorded footage. It also showed the ability to perform these tasks in unseen environments.\u003C\/p\u003E\u003Cp\u003EIf enough people wear Aria glasses or other smart glasses while performing daily tasks, it can create the passive data bank needed to train robots on a massive scale.\u003C\/p\u003E\u003Cp\u003EThis type of data collection can enable nearly endless possibilities for roboticists to help humans achieve more in their everyday lives. Humanoid robots can be produced and trained at an industrial level and be able to perform tasks the same way humans do.\u003C\/p\u003E\u003Cp\u003E\u201cThis work is most applicable to jobs that you can get a humanoid robot to do,\u201d Kareer said. 
\u201cIn whatever industry we are allowed to collect egocentric data, we can develop humanoid robots.\u201d\u003C\/p\u003E\u003Cp\u003EKareer will present his paper on EgoMimic at the 2025 IEEE International Conference on Robotics and Automation (ICRA), which will take place from May 19 to 23 in Atlanta. The paper was co-authored by Xu and School of IC Assistant Professor \u003Cstrong\u003EJudy\u003C\/strong\u003E \u003Cstrong\u003EHoffman\u003C\/strong\u003E, fellow Tech students \u003Cstrong\u003EDhruv\u003C\/strong\u003E \u003Cstrong\u003EPatel\u003C\/strong\u003E, \u003Cstrong\u003ERyan\u003C\/strong\u003E \u003Cstrong\u003EPunamiya\u003C\/strong\u003E, \u003Cstrong\u003EPranay\u003C\/strong\u003E \u003Cstrong\u003EMathur\u003C\/strong\u003E, and \u003Cstrong\u003EShuo\u003C\/strong\u003E \u003Cstrong\u003ECheng\u003C\/strong\u003E, and \u003Cstrong\u003EChen\u003C\/strong\u003E \u003Cstrong\u003EWang\u003C\/strong\u003E, a Ph.D. student at Stanford.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EInspired by a dataset created by Meta, a Georgia Tech Ph.D. student is bringing a new perspective to robotics training.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Inspired by a dataset created by Meta, a Georgia Tech Ph.D. student is bringing a new perspective to robotics training."}],"uid":"32045","created_gmt":"2025-02-19 15:00:13","changed_gmt":"2025-02-19 20:20:46","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-02-19T00:00:00-05:00","iso_date":"2025-02-19T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"676332":{"id":"676332","type":"image","title":"Georgia Tech Ph.D. 
student Simar Kareer is revolutionizing how robots are trained.","body":null,"created":"1739977597","gmt_created":"2025-02-19 15:06:37","changed":"1739977597","gmt_changed":"2025-02-19 15:06:37","alt":"Georgia Tech Ph.D. student Simar Kareer is revolutionizing how robots are trained.","file":{"fid":"260101","name":"Simar Kareer_86A7668 (1).jpg","image_path":"\/sites\/default\/files\/2025\/02\/19\/Simar%20Kareer_86A7668%20%281%29.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/02\/19\/Simar%20Kareer_86A7668%20%281%29.jpg","mime":"image\/jpeg","size":118241,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/02\/19\/Simar%20Kareer_86A7668%20%281%29.jpg?itok=jakxURZ2"}}},"media_ids":["676332"],"related_links":[{"url":"https:\/\/youtu.be\/ckGUsdFX9pU?si=b-J_aUjaDNpMpq2b","title":"Project Aria Case Study: Introducing EgoMimic by the Georgia Institute of Technology"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"10199","name":"Daily Digest"},{"id":"181991","name":"Georgia Tech News Center"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBen Snedeker, Communication Manager\u003C\/p\u003E\u003Cp\u003EGeorgia Tech College of Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"679678":{"#nid":"679678","#data":{"type":"news","title":"Biden Administration Names Interactive Computing Researcher as PECASE Recipient","body":[{"value":"\u003Cp\u003EA researcher in Georgia Tech\u2019s School of 
Interactive Computing has received the nation\u2019s highest honor given to early career scientists and engineers.\u003C\/p\u003E\u003Cp\u003EAssociate Professor Josiah Hester was one of 400 people awarded the Presidential Early Career Award for Scientists and Engineers (PECASE), the Biden Administration announced in a\u003Ca href=\u0022https:\/\/www.whitehouse.gov\/ostp\/news-updates\/2025\/01\/14\/president-biden-honors-nearly-400-federally-funded-early-career-scientists\/\u0022\u003E\u003Cstrong\u003E press release\u003C\/strong\u003E\u003C\/a\u003E on Tuesday.\u003C\/p\u003E\u003Cp\u003EThe PECASE winners\u2019 research projects are funded by government organizations, including the National Science Foundation (NSF), the National Institutes of Health (NIH), the Centers for Disease Control and Prevention (CDC), and NASA. They will be invited to visit the White House later this year.\u003C\/p\u003E\u003Cp\u003EHester joins Associate Professor \u003Ca href=\u0022https:\/\/www.mse.gatech.edu\/news\/juan-pablo-correa-baena-named-pecase-recipient-president-biden\u0022\u003E\u003Cstrong\u003EJuan-Pablo Correa-Baena\u003C\/strong\u003E\u003C\/a\u003E from the School of Materials Science and Engineering as the two Tech faculty who received the honor.\u003C\/p\u003E\u003Cp\u003EHester said his nomination was based on the \u003Ca href=\u0022https:\/\/www.mccormick.northwestern.edu\/news\/articles\/2022\/02\/josiah-hester-receives-prestigious-nsf-career-award\/\u0022\u003E\u003Cstrong\u003ENSF Faculty Early Career Development Program (CAREER\u003C\/strong\u003E\u003C\/a\u003E) award he received in 2022 as an assistant professor at Northwestern University. 
He said the NSF submits its nominations to the White House for the PECASE awards, but researchers are not informed until the list of winners is announced.\u003C\/p\u003E\u003Cp\u003E\u201cFor me, I always thought this was an unachievable, unassailable type of thing because of the reputation of the folks in computing who\u2019ve won previously,\u201d Hester said. \u201cIt was always a far-reaching goal. I was shocked. It\u2019s something you would never in a million years think you would win.\u201d\u003C\/p\u003E\u003Cp\u003EHester is known for pioneering research in a new subfield of sustainable computing dedicated to creating battery-free devices powered by solar energy, kinetic energy, and radio waves. He co-led a team that developed the first \u003Ca href=\u0022https:\/\/www.mccormick.northwestern.edu\/magazine\/spring-2021\/future-played-without-batteries\/\u0022\u003E\u003Cstrong\u003Ebattery-free handheld gaming device\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003ELast year, Hester co-authored an \u003Ca href=\u0022https:\/\/cacm.acm.org\/research\/the-internet-of-batteryless-things\/\u0022\u003E\u003Cstrong\u003Earticle published\u003C\/strong\u003E\u003C\/a\u003E in the Association of Computing Machinery\u2019s in-house journal, the Communications of the ACM, in which he coined the term \u201cInternet of Battery-less Things.\u201d\u003C\/p\u003E\u003Cp\u003EThe Internet of Things is the network of physical computing devices capable of connecting to the internet and exchanging data. However, these devices eventually die. 
Landfills are overflowing with billions of them and their toxic power cells, harming our ecosystem.\u003C\/p\u003E\u003Cp\u003EIn his CAREER award proposal, Hester outlined projects that would work toward replacing the most commonly used computing devices with sustainable, battery-free alternatives.\u003C\/p\u003E\u003Cp\u003E\u201cI want everything to be an Internet of Batteryless Things \u2014 computational devices that could last forever,\u201d Hester said. \u201cI outlined a bunch of different ways that you could do that from the computer engineering side and a little bit from the human-computer interaction side. They all had a unifying theme of making computing more sustainable and climate-friendly.\u201d\u003C\/p\u003E\u003Cp\u003EHester is also a Sloan Research Fellow, an honor he received in 2022. In 2021, Popular Science named him to its \u003Ca href=\u0022https:\/\/www.popsci.com\/science\/brilliant-scientists-2021\/#Josiah%20Hester\u0022\u003E\u003Cstrong\u003EBrilliant 10\u003C\/strong\u003E\u003C\/a\u003E list. He also received the Most Promising Engineer or Scientist Award from the American Indian Science and Engineering Society, which recognizes significant contributions from the Indigenous peoples of North America and the Pacific Islands in STEM disciplines.\u003C\/p\u003E\u003Cp\u003EPresident Bill Clinton established PECASE in 1996. 
According to the White House press release, the award recognizes exceptional scientists and engineers who demonstrate leadership early in their careers and whose work promises innovative, far-reaching advances in science and technology.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EHester joins Associate Professor \u003Ca href=\u0022https:\/\/www.mse.gatech.edu\/news\/juan-pablo-correa-baena-named-pecase-recipient-president-biden\u0022\u003E\u003Cstrong\u003EJuan-Pablo Correa-Baena\u003C\/strong\u003E\u003C\/a\u003E from the School of Materials Science and Engineering as one of two Tech faculty members to receive the honor.\u003C\/p\u003E\u003Cp\u003EThe PECASE winners\u2019 research projects are funded by government organizations, including the National Science Foundation (NSF), the National Institutes of Health (NIH), the Centers for Disease Control and Prevention (CDC), and NASA. They will be invited to visit the White House later this year.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Interactive Computing Associate Professor Josiah Hester is one of 400 people to be awarded the Presidential Early Career Award for Scientists and Engineers (PECASE), the nation\u0027s highest honor for early career researchers."}],"uid":"36530","created_gmt":"2025-01-16 19:19:32","changed_gmt":"2025-01-16 19:21:19","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-01-16T00:00:00-05:00","iso_date":"2025-01-16T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"676048":{"id":"676048","type":"image","title":"EECS_86A9315-Enhanced-NR.jpg","body":null,"created":"1737055188","gmt_created":"2025-01-16 19:19:48","changed":"1737055188","gmt_changed":"2025-01-16 19:19:48","alt":"Josiah 
Hester","file":{"fid":"259752","name":"EECS_86A9315-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/01\/16\/EECS_86A9315-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/01\/16\/EECS_86A9315-Enhanced-NR.jpg","mime":"image\/jpeg","size":105806,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/01\/16\/EECS_86A9315-Enhanced-NR.jpg?itok=i8gfRKxZ"}}},"media_ids":["676048"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"9153","name":"Research Horizons"},{"id":"187915","name":"go-researchnews"},{"id":"172013","name":"Faculty Awards and Honors"},{"id":"1740","name":"National Award"}],"core_research_areas":[{"id":"39531","name":"Energy and Sustainable Infrastructure"},{"id":"39471","name":"Materials"},{"id":"39501","name":"People and Technology"},{"id":"39491","name":"Renewable Bioproducts"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENATHAN DEEN\u003C\/p\u003E\u003Cp\u003ECOMMUNICATIONS OFFICER\u003C\/p\u003E\u003Cp\u003ESCHOOL OF INTERACTIVE COMPUTING\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"678840":{"#nid":"678840","#data":{"type":"news","title":" Helluva Journey: Graduate Student Reflects on 13 Years and 4 Degrees at Tech","body":[{"value":"\u003Cp\u003EFor 13 years, \u003Cstrong\u003EKantwon Rogers\u003C\/strong\u003E kept coming back to Georgia Tech for more.\u003C\/p\u003E\u003Cp\u003EMore degrees to earn. More opportunities to teach. 
More lives to change.\u003C\/p\u003E\u003Cp\u003EHe held six internships at companies such as Amazon, Google, and Intel Corporation, and each time he couldn\u2019t wait to return to Georgia Tech\u2019s campus.\u003C\/p\u003E\u003Cp\u003EHis experiences at Georgia Tech have made it clear: Education is where he belongs.\u003C\/p\u003E\u003Cp\u003E\u201cEvery time I\u2019ve interned, I didn\u2019t like it, so I came back to school,\u201d Rogers said. \u201cBeing in school for this long has never felt like compromising something else I would rather have been doing.\u201d\u003C\/p\u003E\u003Cp\u003ERogers said he\u2019ll walk across the stage Thursday at McCamish Pavilion with no regrets as he receives his Ph.D. in computer science (CS) \u2014 the fourth degree he\u2019s earned since arriving at Georgia Tech in 2011. He also holds a bachelor\u2019s in computer engineering, a master\u2019s in electrical and computer engineering, and a master\u2019s in human-computer interaction (HCI).\u003C\/p\u003E\u003Cp\u003EThat first master\u2019s degree was mandated by his mother, Joan Dennis. She worked as a single parent without a college education in a competitive field in which most people had a master\u2019s.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe second master\u2019s changed his life. Rogers planned to pursue an engineering-based Ph.D. after his first master\u2019s, but he missed the application deadline. Looking for alternatives to an industry job search, he learned that the application deadline for master\u2019s programs was later than the one for Ph.D. programs.\u003C\/p\u003E\u003Cp\u003E\u201cIt was a blessing in disguise,\u201d Rogers said. \u201cMy background before the second master\u2019s had been in computer engineering. It wasn\u2019t people-focused, and I realized I cared more about people than electrons. 
Doing my master\u2019s in HCI, I learned what it meant to do research with people in mind and how to design technology with people in mind.\u201d\u003C\/p\u003E\u003Cp\u003EThat decision put his research on a new trajectory. When he earned his master\u2019s in human-computer interaction, he knew the Ph.D. he wanted to pursue. Accepted into the CS Ph.D. program, Rogers worked with former School of Interactive Computing professor and chair Ayanna Howard, who is now the Dean of the College of Engineering at Ohio State.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHoward still advises Rogers along with School of IC associate professor \u003Ca href=\u0022https:\/\/rail.gatech.edu\/\u0022\u003E\u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E\u003C\/a\u003E. Rogers found a niche research field within human-robot interaction and built his dissertation around the ethics of robots and artificial intelligence and whether there are acceptable situations for a robot to lie to humans. For example, what should a chatbot tell a child who asks whether Santa Claus is real?\u003C\/p\u003E\u003Cp\u003EIn 2023, Rogers became a finalist in Georgia Tech\u2019s Three Minute Thesis (3MT) Competition, in which graduate students compete to explain their research in three minutes. He successfully defended his dissertation in November.\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EStudent Teacher\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ERogers hasn\u2019t lost touch with the new waves of incoming students over the years. Thousands of current students and Georgia Tech alumni know him as an instructor of Computing for Engineers (CS 1371), a CS course required for engineering majors.\u003C\/p\u003E\u003Cp\u003EIt\u2019s the same class Rogers took his first semester as a freshman, and it became one of his favorite undergraduate courses. A master\u2019s degree is required to teach the course. 
He inquired about becoming an instructor when he knew he would return for a second master\u2019s.\u003C\/p\u003E\u003Cp\u003ERogers remembered the first day he taught in front of hundreds of students as his best and worst day at Georgia Tech. He taught the class in the morning, and later that day, he learned his mother unexpectedly passed away.\u003C\/p\u003E\u003Cp\u003E\u201cIt was a very conflicting time for me,\u201d Rogers said. \u201cBeing able to teach the class helped me get through my mom\u2019s death. I poured everything into it and tried to do everything I could to help students and be selfless the way my mom was toward me and my sister.\u201d\u003C\/p\u003E\u003Cp\u003ERogers said he wanted the class to be more than a requirement for engineering students to learn the basics of coding and computer programming. He saw it as an opportunity for engineering students to think differently about CS. He said some students have told him they switched their majors to CS because they took his course.\u003C\/p\u003E\u003Cp\u003E\u201cI get to be the first exposure a lot of students get to computer science,\u201d he said. \u201cThis class has 700 to 1,000 students every semester, and being able to have that kind of impact is very enticing.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s never been a time since I\u2019ve been teaching it when I didn\u2019t look forward to it. Every day, I wake up excited to teach.\u201d\u003C\/p\u003E\u003Cp\u003EEven when pursuing his Ph.D. consumed much of his time, he saw teaching as an outlet rather than a hindrance.\u003C\/p\u003E\u003Cp\u003E\u201cMultiple people have told me to stop teaching because it doesn\u2019t get you a Ph.D. For me, teaching has always been the fun part. 
There\u2019s more in life than research, and teaching was an important counterbalance.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EStaying Connected\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ERogers has also never been one to stay in a comfort zone or cut himself off from campus life. In addition to teaching CS 1371, Rogers has lived on campus throughout his time at Georgia Tech. As a grad student, he has been a resident advisor at Smith Hall and Hanson Hall, which house first-year students.\u003C\/p\u003E\u003Cp\u003E\u201cI\u2019m up to date on all the slang that comes out,\u201d Rogers said. \u201cIt helps keep me relatable. I know what it\u2019s like being a freshman taking this class, not knowing college, not knowing yourself, being confused. They\u2019ll be going through problems in their lives, and I\u2019m able to help them because I\u2019ve been through some of the same things.\u201d\u003C\/p\u003E\u003Cp\u003ERogers said his career goal is to become a university president, but what\u0027s next in the immediate future is still up in the air.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHe\u2019s applied for postdoc positions and hasn\u2019t ruled out returning to Georgia Tech in that capacity. He may also teach CS 1371 one more semester in the spring while he sorts out his plans. However, he\u2019s treating this semester as his last and preparing his goodbyes.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cI don\u2019t know what emotions I\u2019ll feel,\u201d Rogers said about attending the Ph.D. graduation ceremony Thursday. \u201cI\u2019ll let myself feel whatever I want. Throughout this process, I\u2019ve been delusionally proud of myself for everything I\u2019ve done.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EKantwon Rogers has spent 13 years at Georgia Tech. 
In that timeframe, he\u0027s earned four degrees and taught as an instructor for the Computing for Engineers (CS 1371) course for eight years.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Kantwon Rogers is set to receive his Ph.D. in computer science, which will be the fourth degree he\u0027s earned from Georgia Tech"}],"uid":"36530","created_gmt":"2024-12-11 18:56:14","changed_gmt":"2024-12-12 14:17:59","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2024-12-11T00:00:00-05:00","iso_date":"2024-12-11T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"675831":{"id":"675831","type":"image","title":"208A9900.jpg","body":null,"created":"1733943431","gmt_created":"2024-12-11 18:57:11","changed":"1733943431","gmt_changed":"2024-12-11 18:57:11","alt":"Three students sit at a table laughing.","file":{"fid":"259502","name":"208A9900.jpg","image_path":"\/sites\/default\/files\/2024\/12\/11\/208A9900.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/12\/11\/208A9900.jpg","mime":"image\/jpeg","size":98798,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/12\/11\/208A9900.jpg?itok=BFLGQ5RM"}}},"media_ids":["675831"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"130","name":"Alumni"},{"id":"42901","name":"Community"},{"id":"129","name":"Institute and Campus"},{"id":"193157","name":"Student Honors and Achievements"}],"keywords":[{"id":"40171","name":"fall commencement"},{"id":"68621","name":"doctoral graduation"},{"id":"629","name":"graduation"},{"id":"40181","name":"fall graduation"},{"id":"175425","name":"georgia tech graduation"},{"id":"120531","name":"georgia tech graduate"},{"id":"172161","name":"GA Tech Ph.D. 
student"}],"core_research_areas":[],"news_room_topics":[{"id":"71871","name":"Campus and Community"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"678594":{"#nid":"678594","#data":{"type":"news","title":" Researchers Say AI Copyright Cases Could Have Negative Impact on Academic Research","body":[{"value":"\u003Cp\u003EDeven Desai and Mark Riedl have seen the signs for a while.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ETwo years since OpenAI introduced ChatGPT, dozens of lawsuits have been filed alleging technology companies have infringed copyright by using published works to train artificial intelligence (AI) models.\u003C\/p\u003E\u003Cp\u003EAcademic AI research efforts could be significantly hindered if courts rule in the plaintiffs\u0027 favor.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EDesai and Riedl are Georgia Tech researchers raising awareness about how these court rulings could force academic researchers to construct new AI models with limited training data. The two collaborated on a benchmark academic paper that examines the landscape of the ethical issues surrounding AI and copyright in industry and academic spaces.\u003C\/p\u003E\u003Cp\u003E\u201cThere are scenarios where courts may overreact to having a book corpus on your computer, and you didn\u2019t pay for it,\u201d Riedl said. \u201cIf you trained a model for an academic paper, as my students often do, that\u2019s not a problem right now. The courts could deem training is not fair use. 
That would have huge implications for academia.\u003C\/p\u003E\u003Cp\u003E\u201cWe want academics to be free to do their research without fear of repercussions in the marketplace because they\u2019re not competing in the marketplace,\u201d Riedl said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.scheller.gatech.edu\/directory\/faculty\/desai\/index.html\u0022\u003E\u003Cstrong\u003EDesai\u003C\/strong\u003E\u003C\/a\u003E is the Sue and John Stanton Professor of Business Law and Ethics at the \u003Ca href=\u0022https:\/\/www.scheller.gatech.edu\/index.html\u0022\u003E\u003Cstrong\u003EScheller College of Business\u003C\/strong\u003E\u003C\/a\u003E. He researches how business interests and new technology shape privacy, intellectual property, and competition law. \u003Ca href=\u0022https:\/\/eilab.gatech.edu\/mark-riedl.html\u0022\u003E\u003Cstrong\u003ERiedl\u003C\/strong\u003E\u003C\/a\u003E is a professor at the College of Computing\u2019s \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003E\u003Cstrong\u003ESchool of Interactive Computing\u003C\/strong\u003E\u003C\/a\u003E, researching human-centered AI, generative AI, explainable AI, and gaming AI.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ETheir paper, \u003Cem\u003EBetween Copyright and Computer Science: The Law and Ethics of Generative AI\u003C\/em\u003E, was published in the \u003Ca href=\u0022https:\/\/scholarlycommons.law.northwestern.edu\/njtip\/vol22\/iss1\/2\/\u0022\u003E\u003Cstrong\u003ENorthwestern Journal of Technology and Intellectual Property\u003C\/strong\u003E\u003C\/a\u003E on Monday.\u003C\/p\u003E\u003Cp\u003EDesai and Riedl say they want to offer solutions that balance the interests of various stakeholders. But that requires compromise from all sides.\u003C\/p\u003E\u003Cp\u003EResearchers should accept they may have to pay for the data they use to train AI models. 
Content creators, on the other hand, should receive compensation, but they may need to accept less money to ensure data remains affordable for academic researchers to acquire.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EWho Benefits?\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe doctrine of fair use is at the center of every copyright debate. According to the U.S. Copyright Office, fair use permits the unlicensed use of copyright-protected works in certain circumstances, such as distributing information for the public good, including teaching and research.\u003C\/p\u003E\u003Cp\u003EFair use is often challenged when one or more parties profit from published works without compensating the authors.\u003C\/p\u003E\u003Cp\u003EAny original published content, including a personal website on the internet, is protected by copyright. However, copyrighted material is republished on websites or posted on social media innumerable times every day without the consent of the original authors.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EIn most cases, it\u2019s unlikely copyright violators gained financially from their infringement.\u003C\/p\u003E\u003Cp\u003EBut Desai said business-to-business cases are different. \u003Ca href=\u0022https:\/\/www.nytimes.com\/2023\/12\/27\/business\/media\/new-york-times-open-ai-microsoft-lawsuit.html\u0022\u003E\u003Cstrong\u003EThe New York Times\u003C\/strong\u003E\u003C\/a\u003E is one of many daily newspapers and media companies that have sued OpenAI for using its content as training data. Microsoft is also a defendant in The New York Times\u2019 suit because it invested billions of dollars into OpenAI\u2019s development of AI tools like ChatGPT.\u003C\/p\u003E\u003Cp\u003E\u201cYou can take a copyrighted photo and put it in your Twitter post or whatever you want,\u201d Desai said. \u201cThat\u2019s probably annoying to the owner. Economically, they probably wanted to be paid. But that\u2019s not business to business. 
What\u2019s happening with OpenAI and The New York Times is business to business. That\u2019s big money.\u201d\u003C\/p\u003E\u003Cp\u003EOpenAI started as a nonprofit dedicated to the safe development of artificial general intelligence (AGI) \u2014 AI that, in theory, can rival human thinking and possess autonomy.\u003C\/p\u003E\u003Cp\u003EThese AI models would require massive amounts of data and expensive supercomputers to process that data. OpenAI could not raise enough money to afford such resources, so it created a for-profit arm controlled by its parent nonprofit.\u003C\/p\u003E\u003Cp\u003EDesai, Riedl, and many others argue that OpenAI ceased its research mission for the public good and began developing consumer products.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIf you\u2019re doing basic research that you\u2019re not releasing to the world, it doesn\u2019t matter if every so often it plagiarizes The New York Times,\u201d Riedl said. \u201cNo one is economically benefitting from that. When they became a for-profit and produced a product, now they were making money from plagiarized text.\u201d\u003C\/p\u003E\u003Cp\u003EOpenAI\u2019s for-profit arm is valued at $80 billion, but content creators have not received a dime, even though the company has scraped massive amounts of copyrighted material as training data.\u003C\/p\u003E\u003Cp\u003EThe New York Times has posted warnings on its sites that its content cannot be used to train AI models. Many other websites use a robots.txt file that contains instructions for bots about which pages can and cannot be accessed.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ENeither of these measures is legally binding, and both are often ignored.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ESolutions\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EDesai and Riedl offer a few options for companies to show good faith in rectifying the situation.\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003ESpend the money. 
Desai says OpenAI and Microsoft could have afforded their training data and avoided the hassle of legal consequences.\u003Cbr\u003E\u003Cbr\u003E\u201cIf you do the math on the costs to buy the books and copy them, they could have paid for them,\u201d he said. \u201cIt would\u2019ve been a multi-million dollar investment, but they\u2019re a multi-billion dollar company.\u201d\u003Cbr\u003E\u0026nbsp;\u003C\/li\u003E\u003Cli\u003EBe selective. Models can be trained on randomly selected texts from published works, allowing the model to understand the writing style without plagiarizing.\u0026nbsp;\u003Cbr\u003E\u003Cbr\u003E\u201cI don\u2019t need the entire text of War and Peace,\u201d Desai said. \u201cTo capture the way authors express themselves, I might only need a hundred pages. I\u2019ve also reduced the chance that my model will cough up entire texts.\u201d\u003Cbr\u003E\u0026nbsp;\u003C\/li\u003E\u003Cli\u003ELeverage libraries. The authors agree libraries could serve as an ideal middle ground: a place to store published works and compensate authors for access to those works, though the amount may be less than desired.\u003Cbr\u003E\u003Cbr\u003E\u201cMost of the objections you could raise are taken care of,\u201d Desai said. \u201cThey are legitimate access copies that are secure. You get access to only as much as you need. Libraries at universities have already become schools of information.\u201d\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EDesai and Riedl hope the legal action taken by publications like The New York Times will send a message to companies that develop AI tools to pump the brakes. 
If they don\u2019t, researchers uninterested in profit could pay the steepest price.\u003C\/p\u003E\u003Cp\u003EThe authors say it\u2019s not a new problem but is reaching a boiling point.\u003C\/p\u003E\u003Cp\u003E\u201cIn the history of copyright, there are ways that society has dealt with the problem of compensating creators and technology that copies or reduces your ability to extract money from your creation,\u201d Desai said. \u201cWe wanted to point out there\u2019s a way to get there.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ETwo years since OpenAI introduced ChatGPT, dozens of lawsuits have been filed alleging technology companies have infringed copyright by using published works to train artificial intelligence (AI) models.\u003C\/p\u003E\u003Cp\u003EAcademic AI research efforts could be significantly hindered if courts rule in the plaintiffs\u0027 favor.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EDesai and Riedl are Georgia Tech researchers raising awareness about how these court rulings could force academic researchers to construct new AI models with limited training data. 
The two collaborated on a benchmark academic paper that examines the landscape of the ethical issues surrounding AI and copyright in industry and academic spaces.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Deven Desai and Mark Riedl are Georgia Tech researchers raising awareness about how court rulings for AI copyright cases could force academic researchers to construct new AI models with limited training data."}],"uid":"36530","created_gmt":"2024-11-21 18:41:45","changed_gmt":"2024-12-11 18:51:23","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2024-11-21T00:00:00-05:00","iso_date":"2024-11-21T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"675713":{"id":"675713","type":"image","title":"006_Deven Desai + Mark Riedl_86A8863.jpg","body":null,"created":"1732214565","gmt_created":"2024-11-21 18:42:45","changed":"1732214565","gmt_changed":"2024-11-21 18:42:45","alt":"Deven Desai and Mark Riedl","file":{"fid":"259369","name":"006_Deven Desai + Mark Riedl_86A8863.jpg","image_path":"\/sites\/default\/files\/2024\/11\/21\/006_Deven%20Desai%20%2B%20Mark%20Riedl_86A8863.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/11\/21\/006_Deven%20Desai%20%2B%20Mark%20Riedl_86A8863.jpg","mime":"image\/jpeg","size":101688,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/11\/21\/006_Deven%20Desai%20%2B%20Mark%20Riedl_86A8863.jpg?itok=il8z2cMB"}}},"media_ids":["675713"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"151","name":"Policy, Social Sciences, and Liberal Arts"},{"id":"135","name":"Research"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"9153","name":"Research 
Horizons"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"193860","name":"Artifical Intelligence"},{"id":"10828","name":"copyright"},{"id":"190302","name":"copyright law"},{"id":"38031","name":"copyright lawsuits"},{"id":"43101","name":"Georgia Tech Scheller College of Business"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"},{"id":"39511","name":"Public Service, Leadership, and Policy"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"674021":{"#nid":"674021","#data":{"type":"news","title":"LLMs Generate Western Bias Even When Trained with Non-Western Languages","body":[{"value":"\u003Cp\u003ELarge language models tend to exhibit Western cultural bias even when they are prompted by or trained on non-English languages like Arabic, Georgia Tech researchers have learned.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA new paper authored by researchers in Georgia Tech\u0027s School of Interactive Computing reveals these models have trouble understanding contextual nuances that are specific to non-Western cultures.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. student Tarek Naous and his advisors, associate professors Wei Xu and Alan Ritter, challenged ChatGPT-4 and an Arabic-specific LLM to choose the most appropriate word to complete a sentence. 
Some of the words it could choose from were contextually correct and would make sense within Arabic culture, while others fell within Western paradigms.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn questions asking for suggestions for food dishes, drinks, or names of Arabic women, the models chose Western responses \u2014 ravioli for food, whiskey for drinks, and Roseanne for names.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe implication is that LLMs appear to fall short in their ability to assist users who have non-Western backgrounds.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs a method of measuring cultural bias, the team also introduced CAMeL (Cultural Appropriateness Measure Set for LMs). CAMeL is a benchmark data set that includes 628 naturally occurring prompts and 20,368 entities spanning eight categories that contrast Arab and Western cultures.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESince the researchers announced their paper, it has received attention on social media and in external media.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo learn more about the authors and their work, read the article spotlighting them on\u0026nbsp;\u003Ca href=\u0022https:\/\/venturebeat.com\/ai\/large-language-models-exhibit-significant-western-cultural-bias-study-finds\/\u0022\u003EVentureBeat\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ENew research from Georgia Tech School of Interactive Computing Associate Professor Wei Xu is attracting media attention. 
VentureBeat recently examined Xu\u0027s findings that indicate large language models\u0026nbsp;appear to fall short in their ability to assist users who have non-Western backgrounds.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"New Georgia Tech research indicates that LLMs appear to fall short in their ability to assist users who have non-Western backgrounds."}],"uid":"32045","created_gmt":"2024-04-05 14:19:56","changed_gmt":"2024-12-09 17:36:57","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-04-05T00:00:00-04:00","iso_date":"2024-04-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"673633":{"id":"673633","type":"image","title":"School of Interactive Computing Associate Professor Wei Xu","body":null,"created":"1712326804","gmt_created":"2024-04-05 14:20:04","changed":"1712326804","gmt_changed":"2024-04-05 14:20:04","alt":"School of Interactive Computing Associate Professor Wei Xu","file":{"fid":"257051","name":"wei xu_story.jpg","image_path":"\/sites\/default\/files\/2024\/04\/05\/wei%20xu_story.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/04\/05\/wei%20xu_story.jpg","mime":"image\/jpeg","size":45675,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/04\/05\/wei%20xu_story.jpg?itok=JLX2Q2BU"}}},"media_ids":["673633"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"135","name":"Research"}],"keywords":[{"id":"10199","name":"Daily Digest"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications 
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech School of Interactive Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"678471":{"#nid":"678471","#data":{"type":"news","title":"Minority English Dialects Vulnerable to Automatic Speech Recognition Inaccuracy","body":[{"value":"\u003Cp\u003EThe Automatic Speech Recognition (ASR) models that power voice assistants like Amazon Alexa may have difficulty transcribing English speakers with minority dialects.\u003C\/p\u003E\u003Cp\u003EA study by Georgia Tech and Stanford researchers compared the transcribing performance of leading ASR models for people using Standard American English (SAE) and three minority dialects \u2014 African American Vernacular English (AAVE), Spanglish, and Chicano English.\u003C\/p\u003E\u003Cp\u003EInteractive Computing Ph.D. student \u003Ca href=\u0022https:\/\/camille2019.github.io\/\u0022\u003E\u003Cstrong\u003ECamille Harris\u003C\/strong\u003E\u003C\/a\u003E is the lead author of a paper accepted into the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP) this week in Miami.\u003C\/p\u003E\u003Cp\u003EHarris recruited people who spoke each dialect and had them read from a Spotify podcast dataset, which includes podcast audio and metadata. Harris then used three ASR models \u2014 wav2vec 2.0, HUBERT, and Whisper \u2014 to transcribe the audio and compare their performances.\u003C\/p\u003E\u003Cp\u003EFor each model, Harris found SAE transcription significantly outperformed each minority dialect. The models more accurately transcribed men who spoke SAE than women who spoke SAE. 
Participants who spoke Spanglish and Chicano English had the least accurate transcriptions out of the test groups.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EWhile the models transcribed SAE-speaking women less accurately than their male counterparts, that did not hold true across minority dialects. Minority men had the most inaccurate transcriptions of all demographics in the study.\u003C\/p\u003E\u003Cp\u003E\u201cI think people would expect if women generally perform worse and minority dialects perform worse, then the combination of the two must also perform worse,\u201d Harris said. \u201cThat\u2019s not what we observed.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cSometimes minority dialect women performed better than Standard American English. We found a consistent pattern that men of color, particularly Black and Latino men, could be at the highest risk for these performance errors.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EAddressing underrepresentation\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EHarris said the cause of that outcome starts with the training data used to build these models. Model performance reflected the underrepresentation of minority dialects in the data sets.\u003C\/p\u003E\u003Cp\u003EAAVE performed best under the Whisper model, which Harris said had the most inclusive training data of minority dialects.\u003C\/p\u003E\u003Cp\u003EHarris also looked at whether her findings mirrored existing systems of oppression. Black men have high incarceration rates and are one of the demographic groups most targeted by police. Harris said there could be a correlation between that and the low rate of Black men enrolled in universities, which leads to less representation in technology spaces.\u003C\/p\u003E\u003Cp\u003E\u201cMinority men performing worse than minority women doesn\u2019t necessarily mean minority men are more oppressed,\u201d she said. 
\u201cThey may be less represented than minority women in computing and the professional sector that develops these AI systems.\u201d\u003C\/p\u003E\u003Cp\u003EHarris also had to be cautious of a few variables among AAVE, including code-switching and various regional subdialects.\u003C\/p\u003E\u003Cp\u003EHarris noted in her study there were cases of code-switching to SAE. Speakers who code-switched performed better than speakers who did not.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHarris also tried to include different regional speakers.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s interesting from a linguistic and history perspective if you look at migration patterns of Black folks \u2014 perhaps people moving from a southern state to a northern state over time creates different linguistic variations,\u201d she said. \u201cThere are also generational variations in that older Black Americans may speak differently from younger folks. I think the variation was well represented in our data. We wanted to be sure to include that for robustness.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ETikTok barriers\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EHarris said she built her study on a paper she authored that examined user-design barriers and biases faced by Black content creators on TikTok. She presented that paper at the Association of Computing Machinery\u2019s (ACM) 2023 Conference on Computer Supported Cooperative Works.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThose content creators depended on TikTok for a significant portion of their income. When providing captions for videos grew in popularity, those creators noticed the ASR tool built into the app inaccurately transcribed them. 
That forced the creators to manually input their captions, while SAE speakers could use the ASR feature to their benefit.\u003C\/p\u003E\u003Cp\u003E\u201cMinority users of these technologies will have to be more aware and keep in mind that they\u2019ll probably have to do a lot more customization because things won\u2019t be tailored to them,\u201d Harris said.\u003C\/p\u003E\u003Cp\u003EHarris said there are ways that designers of ASR tools could work toward being more inclusive of minority dialects, but cultural challenges could arise.\u003C\/p\u003E\u003Cp\u003E\u201cIt could be difficult to collect more minority speech data, and you have to consider consent with that,\u201d she said. \u201cDevelopers need to be more community-engaged to think about the implications of their models and whether it\u2019s something the community would find helpful.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EInteractive Computing Ph.D. student \u003Ca href=\u0022https:\/\/camille2019.github.io\/\u0022\u003E\u003Cstrong\u003ECamille Harris\u003C\/strong\u003E\u003C\/a\u003E is the lead author of a paper accepted into the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP) this week in Miami.\u003C\/p\u003E\u003Cp\u003EHarris recruited people who spoke each dialect and had them read from a Spotify podcast dataset, which includes podcast audio and metadata. Harris then used three ASR models \u2014 wav2vec 2.0, HUBERT, and Whisper \u2014 to transcribe the audio and compare their performances.\u003C\/p\u003E\u003Cp\u003EFor each model, Harris found SAE transcription significantly outperformed each minority dialect. The models more accurately transcribed men who spoke SAE than women who spoke SAE. 
Participants who spoke Spanglish and Chicano English had the least accurate transcriptions out of the test groups.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EWhile the models transcribed SAE-speaking women less accurately than their male counterparts, that did not hold true across minority dialects. Minority men had the most inaccurate transcriptions of all demographics in the study.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"The Automatic Speech Recognition (ASR) models that power voice assistants like Amazon Alexa may have difficulty transcribing English speakers with minority dialects."}],"uid":"36530","created_gmt":"2024-11-15 18:59:54","changed_gmt":"2024-12-02 16:39:44","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2024-11-15T00:00:00-05:00","iso_date":"2024-11-15T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"675652":{"id":"675652","type":"image","title":"Summit on Responsible Computing, AI, and Society_86A9696-Enhanced-NR.jpg","body":null,"created":"1731697203","gmt_created":"2024-11-15 19:00:03","changed":"1731697203","gmt_changed":"2024-11-15 19:00:03","alt":"Camille Harris","file":{"fid":"259300","name":"Summit on Responsible Computing, AI, and Society_86A9696-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2024\/11\/15\/Summit%20on%20Responsible%20Computing%2C%20AI%2C%20and%20Society_86A9696-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/11\/15\/Summit%20on%20Responsible%20Computing%2C%20AI%2C%20and%20Society_86A9696-Enhanced-NR.jpg","mime":"image\/jpeg","size":67965,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/11\/15\/Summit%20on%20Responsible%20Computing%2C%20AI%2C%20and%20Society_86A9696-Enhanced-NR.jpg?itok=p5e1wYY6"}}},"media_ids":["675652"],"groups":[{"id":"47223","name":"College of 
Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"177001","name":"speech recognition"},{"id":"134041","name":"bias"},{"id":"9153","name":"Research Horizons"},{"id":"188776","name":"go-research"},{"id":"187915","name":"go-researchnews"},{"id":"192863","name":"go-ai"},{"id":"193860","name":"Artifical Intelligence"},{"id":"99601","name":"inequality"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71901","name":"Society and Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"678357":{"#nid":"678357","#data":{"type":"news","title":"Excel Students Design Customized Technologies Through HCI-centered Course","body":[{"value":"\u003Cp\u003EGeorgia Tech students with intellectual and developmental disabilities (IDD) are designing technologies tailored to them while teaching faculty and researchers about their needs in the process.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ERachel Lowy\u003C\/strong\u003E, a Ph.D. student in the School of Interactive Computing, piloted a new human-computer interaction design course for IDD students in Georgia Tech\u2019s \u003Ca href=\u0022https:\/\/excel.gatech.edu\/\u0022\u003E\u003Cstrong\u003EExcel\u003C\/strong\u003E\u003C\/a\u003E program. 
Excel is an Inclusive Postsecondary Education (IPSE) program that offers a four-year track for IDD students to earn two separate certificates.\u003C\/p\u003E\u003Cp\u003ELowy said the course differs from typical technology courses taught to IDD students. It provides autonomy and encourages students to contribute input on how the course is designed and which technology projects they want to create. They reflect critically on the role of technology in the world and use that reflection to design technology for themselves.\u003C\/p\u003E\u003Cp\u003EThe course is also unique because it involves a mix of professional educators and technology researchers working together. Lowy taught the class alongside her advisor, Assistant Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/jennifer-kim\u0022\u003E\u003Cstrong\u003EJennifer Kim\u003C\/strong\u003E\u003C\/a\u003E, her lab colleague, Kaely Hall, master\u2019s students in the Georgia Tech MS-HCI program, computer science undergraduates, and Excel educators.\u003C\/p\u003E\u003Cp\u003E\u201cWe have a few models of students designing next to designers in classrooms, but they tend to be only taught by professionals in K-12 education, not necessarily HCI researchers in higher education. They rarely include students with IDD,\u201d she said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIn higher education, HCI projects may not go further than the classroom space. This course was special because we can teach these students with IDD high-level concepts about HCI and adopt their ideas into ongoing projects. 
We can keep working on them after the class has finished.\u201d\u003C\/p\u003E\u003Cp\u003ELowy said she designed the course based on previous work on accessible co-design and consulted with Assistant Professor \u003Ca href=\u0022https:\/\/tiles.cc.gatech.edu\/\u0022\u003E\u003Cstrong\u003EJessica Roberts\u003C\/strong\u003E\u003C\/a\u003E, an educational technology researcher in the School of IC, to develop course materials. She refined the course with her co-teachers as she taught it, responding to observations and reflections from students.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EIf the students had not been allowed to provide their input, Lowy and her team would never have learned how IDD students prefer to use different technologies. Lowy said they took that feedback to implement strength-based activities.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cSo much technology design for people with disabilities focuses on what they cannot do,\u201d she said. \u201cOur lab likes to focus on what they can do and their strengths.\u201d\u003C\/p\u003E\u003Cp\u003EDuring one class, the researchers brought a robot dog into the classroom to determine whether it could supply emotional support to the students. The feedback they received showed the students were more interested in how the robot dog could be a companion in day-to-day activities.\u003C\/p\u003E\u003Cp\u003E\u201cWe came in with an idea of how the participants might want to use the technology,\u201d Lowy said. \u201cThe students had a much broader view of what they might like to use this technology for. They reflected on their lives, and that\u2019s exactly what we want good design to do.\u201d\u003C\/p\u003E\u003Cp\u003ELowy said she hopes the course serves as a blueprint for inclusive advanced technology courses at the university level.\u003C\/p\u003E\u003Cp\u003E\u201cMost of their technology courses focus on workplace education like how to use Microsoft Suite, Google Calendar, or Outlook,\u201d she said. 
\u201cWe\u2019re working on more of a foundational level about how those technologies are designed and whether they work for them.\u201d\u003C\/p\u003E\u003Cp\u003EShe also said the course could be a step toward more inclusiveness in university classroom environments with traditional students and students with IDD learning together.\u003C\/p\u003E\u003Cp\u003E\u201cSomething that IPSE students have told me is that it\u2019s hard to keep up with lectures, and they sometimes struggle to keep up in class,\u201d she said. \u201cIt\u2019d be great if they take a class specifically targeted to them at their own pace with a hands-on element to it, and they got to learn through experiential activities. Then they take the knowledge they\u2019ve gleaned into an inclusive class where they work with their peers.\u201d\u003C\/p\u003E\u003Cp\u003EShe also suggested other models universities might offer, like an Intro to HCI course for IDD students that allows them to work on projects with students enrolled in the traditional Intro to HCI course.\u003C\/p\u003E\u003Cp\u003E\u201cAny university with an IPSE program and an HCI program on campus can do this,\u201d she said.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003E\u003Cstrong\u003ERachel Lowy\u003C\/strong\u003E, a Ph.D. student in the School of Interactive Computing, piloted a new human-computer interaction design course for IDD students in Georgia Tech\u2019s \u003Ca href=\u0022https:\/\/excel.gatech.edu\/\u0022\u003E\u003Cstrong\u003EExcel\u003C\/strong\u003E\u003C\/a\u003E program. Lowy said the course differs from typical technology courses taught to IDD students. 
It provides autonomy and encourages students to contribute input on how the course is designed and which technology projects they want to create.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech students with intellectual and developmental disabilities (IDD) are designing technologies tailored to them while teaching faculty and researchers about their needs in the process."}],"uid":"36530","created_gmt":"2024-11-12 16:41:45","changed_gmt":"2024-11-12 18:06:52","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2024-11-12T00:00:00-05:00","iso_date":"2024-11-12T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"675597":{"id":"675597","type":"image","title":"DSC_0360.JPG","body":null,"created":"1731434770","gmt_created":"2024-11-12 18:06:10","changed":"1731434770","gmt_changed":"2024-11-12 18:06:10","alt":"A robot dog stands in the middle of a classroom surrounded by people","file":{"fid":"259237","name":"DSC_0360.JPG","image_path":"\/sites\/default\/files\/2024\/11\/12\/DSC_0360.JPG","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/11\/12\/DSC_0360.JPG","mime":"image\/jpeg","size":151704,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/11\/12\/DSC_0360.JPG?itok=XNMDegdJ"}}},"media_ids":["675597"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"42901","name":"Community"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"42911","name":"Education"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"9153","name":"Research Horizons"},{"id":"174112","name":"excel program"},{"id":"411","name":"CEISMC"},{"id":"189625","name":"accessible education"},{"id":"10028","name":"Disabilities 
Research"},{"id":"242","name":"disabilities"},{"id":"185827","name":"learning disabilities"},{"id":"40051","name":"learning disability solutions"},{"id":"185875","name":"disability advocate"},{"id":"14646","name":"human-computer interaction"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"677744":{"#nid":"677744","#data":{"type":"news","title":"Study Shows Election Data Visualization Design Can Be a Powerful Persuasion Tool","body":[{"value":"\u003Cp\u003EFrom election forecasts and pandemic dashboards to stock market charts and scientific figures, many people trust data visualizations as objective truths and neutral representations of reality.\u003C\/p\u003E\u003Cp\u003EHowever, a study led by Georgia Tech and University of California, Berkeley researchers shows that annotations can lead people to draw different conclusions from the same visualizations. Their findings suggest readers should look beyond the presented data to make informed decisions.\u003C\/p\u003E\u003Cp\u003E\u201cPeople question things less if they see something that\u2019s visualized, and they think this is a reliable, trustworthy source they can use to form an opinion or persuade others,\u201d said Cindy Xiong, an assistant professor in the School of Interactive Computing. 
\u201cPeople don\u2019t realize the persuasive power of visualization, and they\u2019re not as vigilant to critically think about the data they interact with.\u201d\u003C\/p\u003E\u003Cp\u003EFor example, people tend to trust the information in an election data visualization. That makes them susceptible to narratives that visualization designers may use to obtain a certain outcome.\u003C\/p\u003E\u003Cp\u003EWorking with Chase Stokes, a Ph.D. candidate at UC Berkeley\u2019s School of Information, Xiong investigated how text position, semantic content, and biased wording impact viewers\u2019 perception of visualizations.\u003C\/p\u003E\u003Cp\u003EThey found people often reach the same conclusions suggested by titles and annotations on a chart.\u003C\/p\u003E\u003Cp\u003E\u201cVisual changes have a great deal of impact on how people interpret a chart,\u201d Stokes said. \u201cTitles, captions, and annotations strongly affect people\u2019s conclusions.\u201d\u003C\/p\u003E\u003Cp\u003EXiong and Stokes created a study centered around two hypothetical political parties \u2014 a blue party and a green party. They used a bar chart to show how many votes each party has received over the past three years. The data shows the blue party received more votes year after year than the green party, but the gap has closed each year.\u003C\/p\u003E\u003Cp\u003EThe researchers surveyed participants to predict which party would win in the fourth year. Responses were split nearly 50-50 before leveraging highlights and annotations.\u003C\/p\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EWhen the researchers highlighted the green party\u2019s increasing voter support year after year, the prediction responses overwhelmingly favored the green party. 
Predictions favored the blue party when the researchers highlighted blue had won every year.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EExisting Bias\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EWhile the scenario created by Xiong and Stokes reflects an ideal world of neutrality, the researchers concede that existing beliefs about political parties play a strong role in determining real-world bias. Participants consistently reported charts that supported one of the two parties were biased, and that perception intensified if the participants disagreed with the text provided.\u003C\/p\u003E\u003Cp\u003E\u201cIf I supported the green party, and I saw this chart, I would think blue party supporters made it because it\u2019s supporting the side that I don\u2019t agree with,\u201d Stokes said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIf the chart represented Republicans and Democrats, many people would perceive the data in a way that reinforces what they already think. If they disagreed with the party\u2019s ideologies, they would likely see the visualization as biased regardless of its portrayal.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EDesigner Responsibility\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EXiong and Stokes found that although textually annotated data patterns do not strongly sway people\u2019s predictions to favor one party over another, they make people suspicious of the designer\u2019s beliefs.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s easy to make a chart that alienates half the people you\u2019re trying to reach,\u201d Stokes said. \u201cFiguring out a way to make data accessible, understandable, and interesting to people who may not agree with your story is critical to mending that trust between designer and consumer.\u201d\u003C\/p\u003E\u003Cp\u003EFor example, someone who trusts the information presented to them on Fox News may not trust what they see in The New York Times. 
Designers must account for the distrust between the public and information sources when creating their visualizations.\u003C\/p\u003E\u003Cp\u003E\u201cThe solution to reaching the widest possible audience is to provide both sides of the story, even if the designer wants to persuade people toward a certain perspective,\u201d Xiong said.\u003C\/p\u003E\u003Cp\u003E\u201cIf you are making visualizations for a political candidate, it\u2019s difficult to persuade people that you\u2019re not biased. You could visually highlight your key takeaways. But adding textual annotations to your chart will make people think you\u2019re pushing hard for some narrative.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EStaying Informed\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EVoters, meanwhile, should be aware that most visualizations contain bias. The researchers agreed voters should gather information from various sources, including those that don\u2019t align with their opinions.\u003C\/p\u003E\u003Cp\u003E\u201cVoters should look for visualizations that talk about both sides and give you those different perspectives so you can make informed decisions about your future,\u201d Stokes said. \u201cIf you see a visualization that highlights one story, you should respond by finding the other side. 
There\u2019s never just one interpretation of a visualization.\u201d\u003C\/p\u003E\u003Cp\u003EXiong and Stokes published their findings in a paper that is being presented this week during the Institute of Electrical and Electronics Engineers\u2019 Visualization and Visual Analytics (VIS) Conference.\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EFrom election forecasts and pandemic dashboards to stock market charts and scientific figures, many people trust data visualizations as objective truths and neutral representations of reality.\u003C\/p\u003E\u003Cp\u003EHowever, a study led by Georgia Tech and University of California, Berkeley researchers shows that annotations can lead people to draw different conclusions from the same visualizations. Their findings suggest readers should look beyond the presented data to make informed decisions.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A study led by Georgia Tech and University of California, Berkeley researchers shows that annotations can lead people to draw different conclusions from the same visualizations."}],"uid":"36530","created_gmt":"2024-10-18 20:19:54","changed_gmt":"2024-10-18 20:20:50","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-10-18T00:00:00-04:00","iso_date":"2024-10-18T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"675369":{"id":"675369","type":"image","title":"2X6A2880.jpg","body":null,"created":"1729282801","gmt_created":"2024-10-18 20:20:01","changed":"1729282801","gmt_changed":"2024-10-18 20:20:01","alt":"Cindy 
Xiong","file":{"fid":"258982","name":"2X6A2880.jpg","image_path":"\/sites\/default\/files\/2024\/10\/18\/2X6A2880.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/10\/18\/2X6A2880.jpg","mime":"image\/jpeg","size":86109,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/10\/18\/2X6A2880.jpg?itok=X6tNDuPV"}}},"media_ids":["675369"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"193818","name":"2024 Presidential election"},{"id":"193821","name":"2024 election"},{"id":"4065","name":"election"},{"id":"33301","name":"data analytics"},{"id":"38921","name":"data visualization"},{"id":"4508","name":"political"},{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"677243":{"#nid":"677243","#data":{"type":"news","title":"SKYSCENES Leverages New Algorithms to Improve Safety for Autonomous Flying Vehicles","body":[{"value":"\u003Cp\u003EAn artificial intelligence (AI) training dataset developed at Georgia Tech is \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/skyscenes-dataset-could-lead-safe-reliable-autonomous-flying-vehicles\u0022\u003Esetting a new standard for the safety and reliability of autonomous drones and flying 
vehicles\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003ESKYSCENES compiles more than 33,000 annotated computer-generated aerial images. With applications in urban planning, disaster response, and autonomous navigation, the dataset trains computer vision models to better detect and identify objects in aerial images, which can be challenging for existing AI models.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/skyscenes-dataset-could-lead-safe-reliable-autonomous-flying-vehicles\u0022\u003ERead the full story\u003C\/a\u003E to learn how School of Interactive Computing Ph.D. student \u003Cstrong\u003ESahil\u003C\/strong\u003E \u003Cstrong\u003EKhose\u003C\/strong\u003E and Assistant Professor \u003Cstrong\u003EJudy\u003C\/strong\u003E \u003Cstrong\u003EHoffman\u003C\/strong\u003E developed this groundbreaking dataset to pave the way for the future of autonomous aviation.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers have created a new benchmark dataset of computer-generated aerial images. Judy Hoffman, an assistant professor at Georgia Tech\u2019s School of Interactive Computing, worked with students to create SKYSCENES, a dataset containing over 33,000 computer-generated aerial images of cities.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"New research from Georgia Tech\u2019s School of Interactive Computing is paving the way for the future of autonomous aviation."}],"uid":"32045","created_gmt":"2024-10-02 15:05:04","changed_gmt":"2024-10-16 18:06:08","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-10-02T00:00:00-04:00","iso_date":"2024-10-02T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"675195":{"id":"675195","type":"image","title":"Georgia Tech School of Interactive Computing Ph.D. 
student Sahil Khose","body":"\u003Cp\u003EPh.D. student Sahil Khose worked with Assistant Professor Judy Hoffman to curate SKYSCENES, a new benchmark dataset that provides well-annotated aerial images of cities that computer vision algorithms can use to operate autonomous flying vehicles. Photos by Kevin Beasley\/College of Computing.\u003C\/p\u003E","created":"1727881514","gmt_created":"2024-10-02 15:05:14","changed":"1727881514","gmt_changed":"2024-10-02 15:05:14","alt":"Georgia Tech School of Interactive Computing Ph.D. student Sahil Khose","file":{"fid":"258796","name":"2X6A9656 (1).jpg","image_path":"\/sites\/default\/files\/2024\/10\/02\/2X6A9656%20%281%29.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/10\/02\/2X6A9656%20%281%29.jpg","mime":"image\/jpeg","size":41388,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/10\/02\/2X6A9656%20%281%29.jpg?itok=dxPOB_Ud"}}},"media_ids":["675195"],"related_links":[{"url":"https:\/\/www.cc.gatech.edu\/news\/skyscenes-dataset-could-lead-safe-reliable-autonomous-flying-vehicles","title":"SKYSCENES Dataset Could Lead to Safe, Reliable Autonomous Flying Vehicles"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"10199","name":"Daily Digest"},{"id":"187812","name":"artificial intelligence (AI)"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan 
Deen, Communications Officer\u003C\/p\u003E\u003Cp\u003EGeorgia Tech School of Interactive Computing\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022mailto:nathan.deen@cc.gatech.edu\u0022\u003Enathan.deen@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"677073":{"#nid":"677073","#data":{"type":"news","title":"AI Oral Assessment Tool Uses Socratic Method to Test Students\u0027 Knowledge","body":[{"value":"\u003Cp\u003EA year ago, Ray Hung, a master\u2019s student in computer science, assisted Professor Thad Starner in constructing an artificial intelligence (AI)-powered \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/professor-deploying-anti-plagiarism-detection-tool-900-student-course\u0022\u003E\u003Cstrong\u003Eanti-plagiarism tool \u003C\/strong\u003E\u003C\/a\u003Efor Starner\u2019s 900-student Intro to Artificial Intelligence (CS3600) course.\u003C\/p\u003E\u003Cp\u003EWhile the tool proved effective, Hung began considering ways to deter plagiarism and improve the education system.\u003C\/p\u003E\u003Cp\u003EPlagiarism can be prevalent in online exams, so Hung looked at oral examinations commonly used in European education systems and rooted in the Socratic method.\u003C\/p\u003E\u003Cp\u003EOne of the advantages of oral assessments is they naturally hinder cheating. Consulting ChatGPT wouldn\u2019t benefit a student unless the student memorizes the entire answer. Even then, follow-up questions would reveal a lack of genuine understanding.\u003C\/p\u003E\u003Cp\u003EHung drew inspiration from the 2009 reboot of Star Trek, particularly the opening scene in which a young Spock provides oral answers to questions prompted by AI.\u003C\/p\u003E\u003Cp\u003E\u201cI think we can do something similar,\u201d Hung said. 
\u201cResearch has shown that oral assessment improves people\u2019s material understanding, critical thinking, and communication skills.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThe problem is that it\u2019s not scalable with human teachers. A professor may have 600 students. Even with teaching assistants, it\u2019s not practical to conduct oral assessments. But with AI, it\u2019s now possible.\u201d\u003C\/p\u003E\u003Cp\u003EHung developed \u003Ca href=\u0022https:\/\/socraticmind.com\/\u0022\u003E\u003Cstrong\u003EThe Socratic Mind\u003C\/strong\u003E\u003C\/a\u003E with Starner, Scheller College of Business Assistant Professor Eunhee Sohn, and researchers from the Georgia Tech Center for 21st Century Universities (C21U).\u003C\/p\u003E\u003Cp\u003EThe Socratic Mind is a scalable, AI-powered oral assessment platform leveraging Socratic questioning to challenge students to explain, justify, and defend their answers to showcase their understanding.\u003C\/p\u003E\u003Cp\u003E\u201cWe believe that if you truly understand something, you should be able to explain it,\u201d Hung said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThere is a deeper need for fostering genuine understanding and cultivating high-order thinking skills. I wanted to promote an education paradigm in which critical thinking, material understanding, and communication skills play integral roles and are at the forefront of our education.\u201d\u003C\/p\u003E\u003Cp\u003EHung entered his project into the\u003Ca href=\u0022https:\/\/tools-competition.org\/23-24-accelerating-and-assessing-learning-winners\/#:~:text=students%20with%20disabilities.-,Socratic%20Mind,-%7C%20Socratic%20Mind%20Inc\u0022\u003E\u003Cstrong\u003E Learning Engineering Tools Competition\u003C\/strong\u003E\u003C\/a\u003E, one of the largest education technology competitions in the world. 
Hung and his collaborators were among five teams that won a Catalyst Award and received a $50,000 prize.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EBenefits for Students\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe Socratic Mind will be piloted in several classes this semester with about 2,000 students participating. One of those classes is the Intro to Computing (CS1301) class taught by College of Computing Professor David Joyner.\u003C\/p\u003E\u003Cp\u003EHung said The Socratic Mind will be a resource students can use to prepare to defend their dissertation or to teach a class if they choose to pursue a Ph.D. Anyone struggling with public speaking or preparing for job interviews will find the tool helpful.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cMany users are interested in AI roleplay to practice real-world conversations,\u201d he said. \u201cThe AI can roleplay a manager if you want to discuss a promotion. It can roleplay as an interviewer if you have a job interview. There are a lot of uses for oral assessment platforms where you can practice talking with an AI.\u003C\/p\u003E\u003Cp\u003E\u201cI hope this tool helps students find their education more valuable and helps them become better citizens, workers, entrepreneurs, or whoever they want to be in the future.\u201d\u003C\/p\u003E\u003Cp\u003EHung said the chatbot is not only conversational but also averse to human persuasion because it follows the Socratic method of asking follow-up questions.\u003C\/p\u003E\u003Cp\u003E\u201cChatGPT and most other large language models are trained as helpful, harmless assistants,\u201d he said. \u201cIf you argue with it and hold your position strongly enough, you can coerce it to agree. We don\u2019t want that.\u003C\/p\u003E\u003Cp\u003E\u201cThe Socratic Mind AI will follow up with you in real-time about what you just said, so it\u2019s not a one-way conversation. 
It\u2019s interactive and engaging and mimics human communication well.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EEducational Overhaul\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EC21U Director of Research in Education Innovation Jonna Lee and C21U Research Scientist Meryem Soylu will measure The Socratic Mind\u2019s effectiveness during the pilot and determine its scalability.\u003C\/p\u003E\u003Cp\u003E\u201cI thought it would be interesting to develop this further from a learning engineering perspective because it\u2019s about systematic problem solving, and we want to create scalable solutions with technologies,\u201d Lee said.\u003C\/p\u003E\u003Cp\u003E\u201cI hope we can find actionable insights about how this AI tool can help transform classroom learning and assessment practices compared to traditional methods. We see the potential for personalized learning for various student populations, including non-traditional lifetime learners.\u201d\u003C\/p\u003E\u003Cp\u003EHung said The Socratic Mind has the potential to revolutionize the U.S. education system depending on how the system chooses to incorporate AI. \u0026nbsp;\u003C\/p\u003E\u003Cp\u003ERecognizing that the advancement of AI is likely an unstoppable trend, Hung advocates leveraging AI to enhance learning and unlock human potential rather than focusing on restrictions.\u003C\/p\u003E\u003Cp\u003E\u201cWe are in an era in which information is abundant, but wisdom is scarce,\u201d Hung said. \u201cShallow and rapid interactions drive social media, for example. 
We think it\u2019s a golden time to elevate people\u2019s critical thinking and communication skills.\u201d\u003C\/p\u003E\u003Cp\u003EFor more information about The Socratic Mind and to try a demo, visit the project\u0027s \u003Ca href=\u0022https:\/\/socraticmind.com\/\u0022\u003E\u003Cstrong\u003Ewebsite\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EHung entered his project into the\u003Ca href=\u0022https:\/\/tools-competition.org\/23-24-accelerating-and-assessing-learning-winners\/#:~:text=students%20with%20disabilities.-,Socratic%20Mind,-%7C%20Socratic%20Mind%20Inc\u0022\u003E\u003Cstrong\u003E Learning Engineering Tools Competition\u003C\/strong\u003E\u003C\/a\u003E, one of the largest education technology competitions in the world. Hung and his collaborators were among five teams that won a Catalyst Award and received a $50,000 prize.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Ray Hung, a CS master\u0027s student, has developed a tool called The Socratic Mind, an AI-powered oral assessment platform leveraging Socratic questioning"}],"uid":"36530","created_gmt":"2024-09-24 14:49:16","changed_gmt":"2024-10-16 18:04:49","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-09-24T00:00:00-04:00","iso_date":"2024-09-24T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"675079":{"id":"675079","type":"image","title":"socratic_mind _story graphic.jpg","body":null,"created":"1727189367","gmt_created":"2024-09-24 14:49:27","changed":"1727189367","gmt_changed":"2024-09-24 14:49:27","alt":"Socrates","file":{"fid":"258672","name":"socratic_mind _story 
graphic.jpg","image_path":"\/sites\/default\/files\/2024\/09\/24\/socratic_mind%20_story%20graphic.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/09\/24\/socratic_mind%20_story%20graphic.jpg","mime":"image\/jpeg","size":205085,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/09\/24\/socratic_mind%20_story%20graphic.jpg?itok=Yvs3CRjJ"}}},"media_ids":["675079"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"42911","name":"Education"},{"id":"193158","name":"Student Competition Winners (academic, innovation, and research)"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"193860","name":"Artifical Intelligence"},{"id":"190865","name":"AI-ALOE"},{"id":"192863","name":"go-ai"},{"id":"9153","name":"Research Horizons"},{"id":"13481","name":"C21U"},{"id":"11807","name":"online education"},{"id":"193940","name":"college of lifetime learning"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"677477":{"#nid":"677477","#data":{"type":"news","title":"Soil-Powered Fuel Cell Makes List of Best Sustainability Designs","body":[{"value":"\u003Cp\u003EA newly designed soil-powered fuel cell that could provide a sustainable alternative to batteries was recognized as an honorable mention in the annual Fast Company Innovation 
by Design Awards.\u003C\/p\u003E\u003Cp\u003ETerracell is roughly the size of a paperback book and uses microbes found in soil to generate energy for low-power applications.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EPrevious designs for soil microbial fuel cells required water submergence or saturated soil. Terracell can function in soil with a volumetric water content of 42%.\u003C\/p\u003E\u003Cp\u003ETerracell placed in Fast Company\u2019s list of the \u003Ca href=\u0022https:\/\/www.fastcompany.com\/91129811\/students-innovation-by-design-2024\u0022\u003E\u003Cstrong\u003Ebest sustainability-focused designs of 2024\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EResearchers at Northwestern University lead the multi-institution research team that designed Terracell.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EJosiah\u003C\/strong\u003E \u003Cstrong\u003EHester\u003C\/strong\u003E, an associate professor in \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003EGeorgia Tech\u0027s School of Interactive Computing\u003C\/a\u003E who previously worked at Northwestern, directs the \u003Ca href=\u0022https:\/\/kamoamoa.com\/\u0022\u003EKa Moamoa Lab\u003C\/a\u003E, where the project was conceived.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe team includes researchers from Northwestern, Georgia Tech, Stanford, the University of California-San Diego, and the University of California-Santa Cruz.\u003C\/p\u003E\u003Cp\u003ETheir research was published in January in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable, and Ubiquitous Technologies. The researchers will also present this work at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), Oct. 
5-9.\u003C\/p\u003E\u003Cp\u003EAccording to the Fast Company website, the Innovation by Design Awards recognize \u201cdesigners and businesses solving the most crucial problems of today and anticipating the pressing issues of tomorrow.\u201d Winners are published in Fast Company Magazine and are honored at the Fast Company Innovation Festival in the fall.\u003C\/p\u003E\u003Cp\u003E\u201cTerracell could reduce e-waste and extend the useful lifetime of electronics deployed for agriculture, environmental monitoring, and smart cities,\u201d Hester said. \u201cWe were honored to be recognized for the design innovation award. It is a testament to the promise of sustainable computing and our hope for a more sustainable world.\u201d\u003C\/p\u003E\u003Cp\u003EFor more information about Terracell, see the story featured on Northwestern Now, or visit the project\u2019s \u003Ca href=\u0022https:\/\/www.terracell.org\/\u0022\u003E\u003Cstrong\u003Ewebsite\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EAssociate Professor of Interactive Computing \u003Cstrong\u003EJosiah\u003C\/strong\u003E \u003Cstrong\u003EHester\u003C\/strong\u003E\u0027s lab is developing new technology that harvests energy from soil. 
Terracell placed in Fast Company\u2019s list of the best sustainability-focused designs of 2024.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"New technology being developed at Georgia Tech placed in Fast Company\u2019s list of the best sustainability-focused designs of 2024."}],"uid":"32045","created_gmt":"2024-10-11 14:16:38","changed_gmt":"2024-10-11 14:23:43","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-10-11T00:00:00-04:00","iso_date":"2024-10-11T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"675290":{"id":"675290","type":"image","title":"Lighted bulb in the dirt illustrates new technology that draws energy from dirt.","body":"\u003Cp\u003EAn Adobe stock conceptual image of a lighted bulb in the dirt illustrating new technology that draws energy from dirt.\u003C\/p\u003E","created":"1728656208","gmt_created":"2024-10-11 14:16:48","changed":"1728656208","gmt_changed":"2024-10-11 14:16:48","alt":"An Adobe stock conceptual image of a lighted bulb in the dirt illustrating new technology that draws energy from dirt.","file":{"fid":"258897","name":"AdobeStock_241936979.jpeg","image_path":"\/sites\/default\/files\/2024\/10\/11\/AdobeStock_241936979.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/10\/11\/AdobeStock_241936979.jpeg","mime":"image\/jpeg","size":105240,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/10\/11\/AdobeStock_241936979.jpeg?itok=6MaZJidR"}},"671840":{"id":"671840","type":"image","title":"Georgia Tech Associate Professor of Interactive Computing Josiah Hester","body":null,"created":"1695750013","gmt_created":"2023-09-26 17:40:13","changed":"1695750013","gmt_changed":"2023-09-26 17:40:13","alt":"Georgia Tech Associate Professor of Interactive Computing Josiah Hester","file":{"fid":"254978","name":"Josiah 
Hester_86A0504.jpg","image_path":"\/sites\/default\/files\/2023\/09\/26\/Josiah%20Hester_86A0504.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/09\/26\/Josiah%20Hester_86A0504.jpg","mime":"image\/jpeg","size":598031,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/09\/26\/Josiah%20Hester_86A0504.jpg?itok=9adMnFyo"}}},"media_ids":["675290","671840"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"144","name":"Energy"},{"id":"154","name":"Environment"}],"keywords":[{"id":"10199","name":"Daily Digest"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39531","name":"Energy and Sustainable Infrastructure"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer\u003Cbr\u003EGeorgia Tech School of Interactive Computing\u003Cbr\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"677200":{"#nid":"677200","#data":{"type":"news","title":"New Generative Tool Provides Images to Accompany Step-by-step Instructions","body":[{"value":"\u003Cp\u003ELEGO can show you how it\u2019s done.\u003C\/p\u003E\u003Cp\u003EProper instructions can be the difference between success and failure, whether for a parent putting together a crib or someone administering CPR.\u003C\/p\u003E\u003Cp\u003EWhile large language models (LLMs) can provide step-by-step instructions for assembling a crib, administering CPR, and other activities, Bolin Lai thinks they can go further.\u003C\/p\u003E\u003Cp\u003ELai is a machine learning Ph.D. student who developed LEGO. 
This new framework allows generative artificial intelligence (AI) models to create first-person synthetic images based on text prompts. These images provide users with visual step-by-step instructions to complete a task.\u003C\/p\u003E\u003Cp\u003EFor example, someone may not know how to properly handwash laundry if they\u2019ve always relied on a washing machine.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ELai said they could consult an LLM, but it provides instructions only in textual output. Users may feel better about doing the task correctly if they have a corresponding image to reference.\u003C\/p\u003E\u003Cp\u003E\u201cThose instructions from LLMs could be very generic, so you\u2019re reading lots of words, and it may not apply to your current situation,\u201d Lai said. \u201cThough you can input an image to GPT for more customized guidance, reading pure textual response isn\u2019t efficient. Our model can understand the image and provide instructions by generating an image action frame showing people how to do it exactly.\u201d\u003C\/p\u003E\u003Cp\u003EIf a person wanted to know how to scrub a pair of trousers properly with a brush, they would first take a first-person photo of their situation. They can then upload that photo and prompt LEGO for instructions on washing the trousers with a brush.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EBased on the text in the prompt and the provided photo, the model generates a new image of someone scrubbing the trousers with the brush in the same environment.\u003C\/p\u003E\u003Cp\u003EThe possibilities are innumerable, but Lai said his goal is to provide a way for people to learn new skills in everyday scenarios. Some of those skills could prove to be lifesaving.\u003C\/p\u003E\u003Cp\u003E\u201cIn some rural areas, there may not be any quick medical service available,\u201d he said. 
\u201cIf an emergency happens, people can use this tool and find professional steps to assist the person who needs medical care.\u201d\u003C\/p\u003E\u003Cp\u003ELai started this project while interning at Meta GenAI and authored a paper titled LEGO: Learning Egocentric Action Frame Generation via Visual Instruction Tuning. His paper will be presented at the European Conference on Computer Vision Oct. 5-9 in Milan, Italy.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EGathering Data\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ELai said his work stems from Meta\u2019s release of the \u003Ca href=\u0022https:\/\/ego4d-data.org\/\u0022\u003E\u003Cstrong\u003EEGO4D dataset\u003C\/strong\u003E\u003C\/a\u003E, a benchmark dataset consisting of first-person videos of humans performing everyday activities. The dataset was created to facilitate research in augmented and virtual reality and robotics.\u003C\/p\u003E\u003Cp\u003ELai used still images from EGO4D to generate accurate and believable images in LEGO\u2019s output.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s so valuable, and they have corresponding annotations for the narration about what people are doing in the videos,\u201d he said of EGO4D. \u201cWith so many egocentric videos, we can do much research on egocentric vision. We can have better data to train models and explore more action categories. We can learn the interaction of hands and objects and how the object\u2019s state can change, such as moving from one place to another or changing its shape.\u201d\u003C\/p\u003E\u003Cp\u003ELai also curated images from a dataset called EPIC-KITCHENS, which contains first-person images of kitchen items, to bolster training.\u003C\/p\u003E\u003Cp\u003EUsing a pair of smart glasses that could capture first-person images wherever he went, Lai then collected images of real-world scenarios that may require instructional assistance. 
He fed the images of those scenarios into LEGO and received accurate and believable synthetic images of completed tasks.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHe found that the model needs a single image to generate new images demonstrating a step-by-step process to complete a task.\u003C\/p\u003E\u003Cp\u003E\u201cWe show the model can have high-quality generation of a real-world image. The task is challenging because the background in the user\u2019s input image may be complex and chaotic. Other generative models are trained on all synthetic images with clean backgrounds and a few objects dominating the foreground. They oversimplify the problem and may not apply to the real world.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EFrom Images to Video\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ELai envisions scaling his work to AI-generated video in which instructional videos could be the output instead of still images. These videos would show images of the instructional process and could be accompanied by narration.\u003C\/p\u003E\u003Cp\u003EHe said that possibility is a long way off. Current generative AI video tools such as OpenAI\u2019s Sora can generate videos up to 60 seconds long, but Lai says he doesn\u2019t have access to the resources to reach that length.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWe need more powerful computing resources to make it into a video, which was our initial goal, but we have found it difficult. It\u2019s currently unaffordable for us, so we simplified the problem into image generation.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EIf a person wanted to know how to scrub a pair of trousers properly with a brush, they would first take a first-person photo of their situation. 
They can then upload that photo and prompt LEGO for instructions on washing the trousers with a brush.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EBased on the text in the prompt and the provided photo, the model generates a new image of someone scrubbing the trousers with the brush in the same environment.\u003C\/p\u003E\u003Cp\u003EThe possibilities are innumerable, but Lai said his goal is to provide a way for people to learn new skills in everyday scenarios. Some of those skills could prove to be lifesaving.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new framework allows generative artificial intelligence (AI) models to create first-person synthetic images based on text prompts"}],"uid":"36530","created_gmt":"2024-09-30 17:42:51","changed_gmt":"2024-09-30 17:43:43","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-09-30T00:00:00-04:00","iso_date":"2024-09-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"675163":{"id":"675163","type":"image","title":"knead_dough_input.png","body":null,"created":"1727718187","gmt_created":"2024-09-30 17:43:07","changed":"1727718187","gmt_changed":"2024-09-30 17:43:07","alt":"Kneading dough","file":{"fid":"258763","name":"knead_dough_input.png","image_path":"\/sites\/default\/files\/2024\/09\/30\/knead_dough_input.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/09\/30\/knead_dough_input.png","mime":"image\/png","size":686604,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/09\/30\/knead_dough_input.png?itok=UEvx_fcK"}}},"media_ids":["675163"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and 
Security"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"677158":{"#nid":"677158","#data":{"type":"news","title":"SKYSCENES Dataset Could Lead to Safe, Reliable Autonomous Flying Vehicles","body":[{"value":"\u003Cp\u003EIs it a building or a street? How tall is the building? Are there powerlines nearby?\u003C\/p\u003E\u003Cp\u003EThese are details autonomous flying vehicles would need to know to function safely. However, few aerial image datasets exist that can adequately train the computer vision algorithms that would pilot these vehicles.\u003C\/p\u003E\u003Cp\u003EThat\u2019s why Georgia Tech researchers created a new benchmark dataset of computer-generated aerial images.\u003C\/p\u003E\u003Cp\u003EJudy Hoffman, an assistant professor in Georgia Tech\u2019s School of Interactive Computing, worked with students in her lab to create SKYSCENES. The dataset contains over 33,000 aerial images of cities curated from a computer simulation program.\u003C\/p\u003E\u003Cp\u003EHoffman said sufficient training datasets could unlock the potential of autonomous flying vehicles. Constructing those datasets is a challenge the computer vision research community has been working for years to overcome.\u003C\/p\u003E\u003Cp\u003E\u201cYou can\u2019t crowdsource it the same way you would standard internet images,\u201d Hoffman said. 
\u201cTrying to collect it manually would be very slow and expensive \u2014 akin to what the self-driving industry is doing driving around vehicles, but now you\u2019re talking about drones flying around.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWe must fix those problems to have models that work reliably and safely for flying vehicles.\u201d\u003C\/p\u003E\u003Cp\u003EMany existing datasets aren\u2019t annotated well enough for algorithms to distinguish objects in the image. For example, the algorithms may not recognize the surface of a building from the surface of a street.\u003C\/p\u003E\u003Cp\u003EWorking with Hoffman, Ph.D. student Sahil Khose tried a new approach \u2014 constructing a synthetic image data set from a ground-view, open-source simulator known as CARLA.\u003C\/p\u003E\u003Cp\u003ECARLA was originally designed to provide ground-view simulation for self-driving vehicles. It creates an open-world virtual reality that allows users to drive around in computer-generated cities.\u003C\/p\u003E\u003Cp\u003EKhose and his collaborators adjusted CARLA\u2019s interface to support aerial views that mimic views one might get from unmanned aerial vehicles (UAVs).\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EWhat\u0027s the Forecast?\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe team also created new virtual scenarios to mimic the real world by accounting for changes in weather, times of day, various altitudes, and population per city. The algorithms will struggle to recognize the objects in the frame consistently unless those details are incorporated into the training data.\u003C\/p\u003E\u003Cp\u003E\u201cCARLA\u2019s flexibility offers a wide range of environmental configurations, and we take several important considerations into account while curating SKYSCENES images from CARLA,\u201d Khose said. 
\u201cThose include strategies for obtaining diverse synthetic data, embedding real-world irregularities, avoiding correlated images, addressing skewed class representations, and reproducing precise viewpoints.\u201d\u003C\/p\u003E\u003Cp\u003ESKYSCENES is not the largest dataset of aerial images to be released, but a paper co-authored by Khose shows that models trained on it outperform models trained on existing datasets.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EKhose said models trained on this dataset exhibit strong generalization to real-world scenarios, and integration with real-world data enhances their performance. The dataset also controls variability, which is essential for performing various tasks.\u003C\/p\u003E\u003Cp\u003E\u201cThis dataset drives advancements in multi-view learning, domain adaptation, and multimodal approaches, with major implications for applications like urban planning, disaster response, and autonomous drone navigation,\u201d Khose said. \u201cWe hope to bridge the gap for synthetic-to-real adaptation and generalization for aerial images.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ESeeing the Whole Picture\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EFor algorithms, generalization is the ability to perform tasks based on new data that extends beyond the specific examples on which they were trained.\u003C\/p\u003E\u003Cp\u003E\u201cIf you have 200 images, and you train a model on those images, they\u2019ll do well at recognizing what you want them to recognize in that closed-world initial setting,\u201d Hoffman said. 
\u201cBut if we were to take aerial vehicles and fly them around cities at various times of the day or in other weather conditions, they would start to fail.\u201d\u003C\/p\u003E\u003Cp\u003EThat\u2019s why Khose designed algorithms to enhance the quality of the curated images.\u003C\/p\u003E\u003Cp\u003E\u201cThese images are captured from 100 meters above ground, which means the objects appear small and are challenging to recognize,\u201d he said. \u201cWe focused on developing algorithms specifically designed to address this.\u201d\u003C\/p\u003E\u003Cp\u003EThose algorithms elevate the ability of ML models to recognize small objects, improving their performance in navigating new environments.\u003C\/p\u003E\u003Cp\u003E\u201cOur annotations help the models capture a more comprehensive understanding of the entire scene \u2014 where the roads are, where the buildings are, and know they are buildings and not just an obstacle in the way,\u201d Hoffman said. \u201cIt gives a richer set of information when planning a flight.\u003C\/p\u003E\u003Cp\u003E\u201cTo work safely, many autonomous flight plans might require a map given to them beforehand. If you have successful vision systems that understand exactly what the obstacles in the real world are, you could navigate in previously unseen environments.\u201d\u003C\/p\u003E\u003Cp\u003EFor more information about Georgia Tech Research at ECCV 2024, click \u003Ca href=\u0022https:\/\/sites.gatech.edu\/research\/eccv-2024\/\u0022\u003E\u003Cstrong\u003Ehere\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EFew aerial image datasets exist that can adequately train the computer vision algorithms that would pilot autonomous flying vehicles. Judy Hoffman, an assistant professor in Georgia Tech\u2019s School of Interactive Computing, worked with students in her lab to create SKYSCENES. 
The dataset contains over 33,000 aerial images of cities curated from a computer simulation program.\u003C\/p\u003E\u003Cp\u003EHoffman said sufficient training datasets could unlock the potential of autonomous flying vehicles. Constructing those datasets is a challenge the computer vision research community has been working for years to overcome.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":" Georgia Tech researchers created a new benchmark dataset of computer-generated aerial images that could allow autonomous flying vehicles to operate reliably and safely."}],"uid":"36530","created_gmt":"2024-09-26 19:06:34","changed_gmt":"2024-09-26 19:12:59","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-09-26T00:00:00-04:00","iso_date":"2024-09-26T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"675136":{"id":"675136","type":"image","title":"2X6A9645.jpg","body":null,"created":"1727377608","gmt_created":"2024-09-26 19:06:48","changed":"1727377608","gmt_changed":"2024-09-26 19:06:48","alt":"Sahil Khose","file":{"fid":"258733","name":"2X6A9645.jpg","image_path":"\/sites\/default\/files\/2024\/09\/26\/2X6A9645.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/09\/26\/2X6A9645.jpg","mime":"image\/jpeg","size":119198,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/09\/26\/2X6A9645.jpg?itok=vPDzbCmQ"}}},"media_ids":["675136"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"142","name":"City Planning, Transportation, and Urban Growth"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"188776","name":"go-research"},{"id":"193860","name":"Artifical 
Intelligence"},{"id":"173555","name":"Center for Machine Learning"},{"id":"186398","name":"autonomous drones"},{"id":"180975","name":"drones; UAV; unmanned aerial vehicles"},{"id":"174108","name":"autonomous aircraft"},{"id":"11506","name":"computer vision"},{"id":"8791","name":"computer vision algorithm"},{"id":"180840","name":"computer vision systems"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"675713":{"#nid":"675713","#data":{"type":"news","title":"AI Researcher Named to Harvard\u0027s Berkman-Klein Center Fellowship Program","body":[{"value":"\u003Cp\u003EA Georgia Tech researcher will continue to mitigate harmful post-deployment effects created by artificial intelligence (AI) as he joins the 2024-2025 cohort of fellows selected by the \u003Ca href=\u0022https:\/\/cyber.harvard.edu\/story\/2024-07\/incoming-2024-25-bkc-fellows\u0022\u003E\u003Cstrong\u003EBerkman-Klein Center (BKC) for Internet and Society at Harvard University\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EUpol Ehsan is the first Georgia Tech graduate selected by BKC. As a fellow, he will contribute to its mission of exploring and understanding cyberspace, focusing on AI, social media, and university discourse.\u003C\/p\u003E\u003Cp\u003EEntering its 25th year, the BKC Harvard fellowship program addresses pressing issues and produces impactful research that influences academia and public policy. 
It offers a global perspective, a vibrant intellectual community, and significant funding and resources that attract top scholars and leaders.\u003C\/p\u003E\u003Cp\u003EThe program is highly competitive and sought after by early career candidates and veteran academic and industry professionals. Cohorts hail from numerous backgrounds, including law, computer science, sociology, political science, neuroscience, philosophy, and media studies.\u202f\u003C\/p\u003E\u003Cp\u003E\u201cHaving the opportunity to join such a talented group of people and working with them is a treat,\u201d Ehsan said. \u201cI\u2019m looking forward to adding to the prismatic network of BKC Harvard and learning from the cohesively diverse community.\u201d\u003C\/p\u003E\u003Cp\u003EWhile at Georgia Tech, Ehsan expanded the field of explainable AI (XAI) and pioneered a subcategory he labeled human-centered explainable AI (HCXAI). Several of his papers introduced novel and foundational concepts into that subcategory of XAI.\u003C\/p\u003E\u003Cp\u003EEhsan works with Professor Mark Riedl in the School of Interactive Computing and the \u003Ca href=\u0022https:\/\/eilab.gatech.edu\/\u0022\u003E\u003Cstrong\u003EHuman-centered AI and Entertainment Intelligence Lab\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EEhsan says he will continue to work on research he introduced in his 2022 paper \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/algorithmic-aftermath-researcher-explores-damage-they-can-leave-behind\u0022\u003E\u003Cem\u003E\u003Cstrong\u003EThe Algorithmic Imprint\u003C\/strong\u003E\u003C\/em\u003E\u003C\/a\u003E, which shows how the potential harm from algorithms can linger even after an algorithm is no longer used. 
His research has informed the United Nations\u2019 algorithmic reparations policies and has been incorporated into the National Institute of Standards and Technology AI Risk Management Framework.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s a massive honor to receive this recognition of my work,\u201d Ehsan said. \u201cThe Algorithmic Imprint remains a globally applicable Responsible AI concept developed entirely from the Global South. This recognition is dedicated to the participants who made this work possible. I want to take their stories even further.\u0022\u003C\/p\u003E\u003Cp\u003EWhile at BKC Harvard, Ehsan will develop a taxonomy of potentially harmful AI effects after a model is no longer used. He will also design a process to anticipate these effects and create interventions. He said his work addresses an \u201caccountability blindspot\u201d in responsible AI, which tends to focus on potential harmful effects created during AI deployment.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EUpol Ehsan is the first Georgia Tech graduate selected by BKC. As a fellow, he will contribute to its mission of exploring and understanding cyberspace, focusing on AI, social media, and university discourse.\u003C\/p\u003E\u003Cp\u003EEntering its 25th year, the BKC Harvard fellowship program addresses pressing issues and produces impactful research that influences academia and public policy. It offers a global perspective, a vibrant intellectual community, and significant funding and resources that attract top scholars and leaders.\u003C\/p\u003E\u003Cp\u003EThe program is highly competitive and sought after by early career candidates and veteran academic and industry professionals. 
Cohorts hail from numerous backgrounds, including law, computer science, sociology, political science, neuroscience, philosophy, and media studies.\u202f\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A Georgia Tech researcher will continue to mitigate harmful post-deployment effects created by Artificial Intelligence (AI) as he joins the 2024-2025 cohort of fellows selected by the Berkman-Klein Center (BKC) for Internet and Society at Harvard Universi"}],"uid":"36530","created_gmt":"2024-08-01 14:02:12","changed_gmt":"2024-09-16 15:12:37","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-09-10T00:00:00-04:00","iso_date":"2024-09-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674463":{"id":"674463","type":"image","title":"Upol Ehsan.jpeg","body":null,"created":"1722520941","gmt_created":"2024-08-01 14:02:21","changed":"1722520941","gmt_changed":"2024-08-01 14:02:21","alt":"Upol Ehsan","file":{"fid":"257992","name":"Upol Ehsan.jpeg","image_path":"\/sites\/default\/files\/2024\/08\/01\/Upol%20Ehsan.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/08\/01\/Upol%20Ehsan.jpeg","mime":"image\/jpeg","size":115401,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/08\/01\/Upol%20Ehsan.jpeg?itok=gfZ9imBs"}}},"media_ids":["674463"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"},{"id":"193157","name":"Student Honors and Achievements"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"192863","name":"go-ai"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia 
Tech"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"675869":{"#nid":"675869","#data":{"type":"news","title":"New Large-Language Model Can Protect Social Media Users\u0027 Privacy","body":[{"value":"\u003Cp\u003ESocial media users may need to think twice before hitting that \u201cPost\u201d button.\u003C\/p\u003E\u003Cp\u003EA new large-language model (LLM) developed by Georgia Tech researchers can help users filter content that could risk their privacy and offer alternative phrasing that keeps the context of their posts intact.\u003C\/p\u003E\u003Cp\u003EAccording to a new paper that will be presented at the \u003Ca href=\u0022https:\/\/2024.aclweb.org\/\u0022\u003E\u003Cstrong\u003E2024 Association for Computational Linguistics\u003C\/strong\u003E\u003C\/a\u003E (ACL) conference, social media users should think carefully about the information they self-disclose in their posts.\u003C\/p\u003E\u003Cp\u003EMany people use social media to express their feelings about their experiences without realizing the risks to their privacy. For example, a person revealing their gender identity or sexual orientation may be subject to doxing and harassment from outside parties.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOthers want to express their opinions without their employers or families knowing.\u003C\/p\u003E\u003Cp\u003EPh.D. student Yao Dou and associate professors Alan Ritter and Wei Xu originally set out to study user awareness of self-disclosure privacy risks on Reddit. 
Working with anonymous users, they created an LLM to detect at-risk content.\u003C\/p\u003E\u003Cp\u003EWhile the study boosted user awareness of the personal information they revealed, many called for an intervention. They asked the researchers for assistance to rewrite their posts so they didn\u2019t have to be concerned about privacy.\u003C\/p\u003E\u003Cp\u003EThe researchers revamped the model to suggest alternative phrases that reduce the risk of privacy invasion.\u003C\/p\u003E\u003Cp\u003EOne user disclosed, \u201cI\u2019m 16F I think I want to be a bi M.\u201d The new tool offered alternative phrases such as:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003E\u201cI am exploring my sexual identity.\u201d\u003C\/li\u003E\u003Cli\u003E\u201cI have a desire to explore new options.\u201d\u003C\/li\u003E\u003Cli\u003E\u201cI am attracted to the idea of exploring different gender identities.\u201d\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EDou said the challenge is making sure the model provides suggestions that don\u2019t change or distort the desired context of the post.\u003C\/p\u003E\u003Cp\u003E\u201cThat\u2019s why instead of providing one suggestion, we provide three suggestions that are different from each other, and we allow the user to choose which one they want,\u201d Dou said. \u201cIn some cases, the discourse information is important to the post, and in that case, they can choose what to abstract.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EWEIGHING THE RISKS\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe researchers sampled 10,000 Reddit posts from a pool of 4 million that met their search criteria. 
They annotated those posts and created 19 categories of self-disclosures, including age, sexual orientation, gender, race or nationality, and location.\u003C\/p\u003E\u003Cp\u003EFrom there, they worked with Reddit users to test the effectiveness and accuracy of their model, with 82% giving positive feedback.\u003C\/p\u003E\u003Cp\u003EHowever, a contingent thought the model was \u201coversensitive,\u201d highlighting content they did not believe posed a risk.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EUltimately, the researchers say users must decide what they will post.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s a personal decision,\u201d Ritter said. \u201cPeople need to look at this and think about what they\u2019re writing and decide between this tradeoff of what benefits they are getting from sharing information versus what privacy risks are associated with that.\u201d\u003C\/p\u003E\u003Cp\u003EXu acknowledged that future work on the project should include a metric that gives users a better idea of what types of content are more at risk than others.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s kind of the way passwords work,\u201d she said. \u201cYears ago, they never told you your password strength, and now there\u2019s a bar telling you how good your password is. Then you realize you need to add a special character and capitalize some letters, and that\u2019s become a standard. This is telling the public how they can protect themselves. 
The risk isn\u2019t zero, but it helps them think about it.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EWHAT ARE THE CONSEQUENCES?\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EWhile doxing and harassment are the most likely consequences of posting sensitive personal information, especially for those who belong to minority groups, the researchers say users have other privacy concerns.\u003C\/p\u003E\u003Cp\u003EUsers should know that when they draft posts on a site, their input can be extracted by the site\u2019s application programming interface (API). If that site has a data breach, a user\u2019s personal information could fall into unwanted hands.\u003C\/p\u003E\u003Cp\u003E\u201cI think we should have a path toward having everything work locally on the user\u2019s computer, so it doesn\u2019t rely on any external APIs to send this data off their local machine,\u201d Ritter said.\u003C\/p\u003E\u003Cp\u003ERitter added that users could also be targets of popular scams like phishing without ever knowing it.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cPeople trying targeted phishing attacks can learn personal information about people online that might help them craft more customized attacks that could make users vulnerable,\u201d he said.\u003C\/p\u003E\u003Cp\u003EThe safest way to avoid a breach of privacy is to stay off social media. But Xu said that\u2019s impractical as there are resources and support these sites can provide that users may not get from anywhere else.\u003C\/p\u003E\u003Cp\u003E\u201cWe want people who may be afraid of social media to use it and feel safe when they post,\u201d she said. 
\u201cMaybe the best way to get an answer to a question is to ask online, but some people don\u2019t feel comfortable doing that, so a tool like this can make them more comfortable sharing without much risk.\u201d\u003C\/p\u003E\u003Cp\u003EFor more information about Georgia Tech research at ACL, please visit \u003Ca href=\u0022https:\/\/sites.gatech.edu\/research\/acl-2024\/\u0022\u003E\u003Cstrong\u003Ehttps:\/\/sites.gatech.edu\/research\/acl-2024\/\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA new large-language model (LLM) developed by Georgia Tech researchers can help them filter content that could risk their privacy and offer alternative phrasing that keeps the context of their posts intact.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers have developed an AI tool that filters content that risks the privacy of social media users from their posts."}],"uid":"36530","created_gmt":"2024-08-08 19:00:13","changed_gmt":"2024-09-03 15:58:27","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-08-07T00:00:00-04:00","iso_date":"2024-08-07T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674539":{"id":"674539","type":"image","title":"2X6A9136.jpg","body":null,"created":"1723143622","gmt_created":"2024-08-08 19:00:22","changed":"1723143622","gmt_changed":"2024-08-08 19:00:22","alt":"Alan Ritter and Wei Xu stand infront of a white board full of post-it 
notes","file":{"fid":"258082","name":"2X6A9136.jpg","image_path":"\/sites\/default\/files\/2024\/08\/08\/2X6A9136.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/08\/08\/2X6A9136.jpg","mime":"image\/jpeg","size":108256,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/08\/08\/2X6A9136.jpg?itok=RBeCsS_Z"}}},"media_ids":["674539"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"9153","name":"Research Horizons"},{"id":"192863","name":"go-ai"},{"id":"2556","name":"artificial intelligence"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"167543","name":"social media"},{"id":"114791","name":"Data Privacy"},{"id":"187915","name":"go-researchnews"},{"id":"10199","name":"Daily Digest"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"676100":{"#nid":"676100","#data":{"type":"news","title":"New App Helps Fit Physical Activities into Students\u0027 Busy Schedules","body":[{"value":"\u003Cp\u003EFor some students, an 8 a.m. class will take away the morning jog they enjoyed every day last semester. 
For others, a lab meeting time changed, and tennis doubles in the afternoon won\u2019t be an option anymore.\u003C\/p\u003E\u003Cp\u003EStudents returning to campus for a new semester often struggle to find time for physical activities because of their new routines and schedules. However, a new app developed at Georgia Tech helps busy students prioritize physical activity in their daily routines.\u003C\/p\u003E\u003Cp\u003EPh.D. student Kefan Xu of the \u003Ca href=\u0022https:\/\/sites.google.com\/view\/riarriaga\/home?authuser=0\u0022\u003E\u003Cstrong\u003EUbicomp Health and Wellness Lab at Georgia Tech\u003C\/strong\u003E\u003C\/a\u003E created Plannergy, a time management app that identifies open time blocks in users\u2019 schedules.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EXu introduced Plannergy at the Conference on Human Factors in Computing Systems (CHI) in Honolulu, Hawaii, in May. He says the app is ideal for college students because they tend to have busy and inconsistent schedules.\u003C\/p\u003E\u003Cp\u003EPlannergy allows users to track their schedules, reflect on what activities would be beneficial and timely, and strategize how to implement the activity into their schedule.\u003C\/p\u003E\u003Cp\u003E\u201cCurrently, the app is catered to people who\u2019ve been physically inactive and have inconsistent schedules,\u201d Xu said. \u201cCollege students know their schedule will change when they begin a new semester. They need to get some physical activity and find opportunities in the day they can leverage. It could be as simple as walking to school instead of taking a scooter.\u201d\u003C\/p\u003E\u003Cp\u003EXu tested his app on 16 college students who planned their physical activities every seven days and followed a reflective iteration framework to track improvement. The results showed that Plannergy is an effective behavior change tool. 
The findings also indicate that it increases participants\u2019 awareness of their schedules.\u003C\/p\u003E\u003Cp\u003EThe American Heart Association says adults can reduce the risk of heart disease by participating in at least \u003Ca href=\u0022https:\/\/www.heart.org\/en\/healthy-living\/fitness\/fitness-basics\/aha-recs-for-physical-activity-in-adults\u0022\u003E\u003Cstrong\u003E150 minutes of moderate-intensity physical activity weekly\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EThe Centers for Disease Control and Prevention released a \u003Ca href=\u0022https:\/\/www.cdc.gov\/mmwr\/volumes\/72\/wr\/mm7204a1.htm?s_cid=mm7204a1_w\u0022\u003E\u003Cstrong\u003Ereport in 2023\u003C\/strong\u003E\u003C\/a\u003E that found 72% of Americans aren\u2019t meeting that standard.\u003C\/p\u003E\u003Cp\u003EAs Xu points out in his paper, studies have shown that incorporating physical activity into a person\u2019s routine usually helps them maintain it. However, he\u2019s identified two common problems:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EPeople lack understanding about their schedules and routines.\u003C\/li\u003E\u003Cli\u003EPeople have schedules that fluctuate from one day to the next.\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003E\u201cIndividuals face a lot of changes in their life,\u201d Xu said. \u201cMaybe they\u2019re a student who has graduated, and they\u2019re going into industry, which means their daily routine will be different from what it was while they were in school. This app allows them to experiment with different time slots and activity types to figure out another way and help them update their activity routine no matter what life changes they face.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ECUSTOM FIT\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ESome users who have been inactive for extended periods may be unsure how much exercise they need. 
Plannergy can also help them determine the intensity level of the activity to help avoid overexertion.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIf someone has been inactive for months, it\u2019s hard to ask them to run two miles daily,\u201d Xu said. \u201cThere\u2019s much for them to figure out. How much do they want to do, and at what intensity level? This app lets them gradually figure out the ideal activity. They can continue to track their progress and see if improvements are needed.\u201d\u003C\/p\u003E\u003Cp\u003EPlannergy is not limited to physical activity. Xu says one of the students in his study who worked out daily used the app to identify times in her schedule to take breaks or focus on more spiritual disciplines.\u003C\/p\u003E\u003Cp\u003E\u201cShe added yoga and removed some high-intensity physical activities, and her sleeping routine also changed,\u201d Xu said.\u003C\/p\u003E\u003Cp\u003EXu is working to improve the app. Future versions will have sensing technology to leverage health informatics so users can make better decisions. He also wants the app to record user data and make customized suggestions for activities that fit the user\u2019s schedule and preferred exercise intensity level.\u003C\/p\u003E\u003Cp\u003E\u201cThe app requires manual tracking, which can create user burden,\u201d he said. \u201cI think in the future, the process could be more automated. 
We want to keep it flexible but add more scaffolding to enhance user experience.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EPlannergy allows users to track their schedules, reflect on what activities would be beneficial and timely, and strategize how to implement the activity into their schedule.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Plannergy can help students fit physical activity into their busy and fluctuating schedules."}],"uid":"36530","created_gmt":"2024-08-20 13:57:30","changed_gmt":"2024-09-03 15:57:10","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-08-20T00:00:00-04:00","iso_date":"2024-08-20T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674643":{"id":"674643","type":"image","title":"2X6A9356.jpg","body":null,"created":"1724162260","gmt_created":"2024-08-20 13:57:40","changed":"1724162260","gmt_changed":"2024-08-20 13:57:40","alt":"Male student sitting on a track, holding a tennis racket, in between two old computer monitors","file":{"fid":"258193","name":"2X6A9356.jpg","image_path":"\/sites\/default\/files\/2024\/08\/20\/2X6A9356.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/08\/20\/2X6A9356.jpg","mime":"image\/jpeg","size":146978,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/08\/20\/2X6A9356.jpg?itok=Itig00QG"}}},"media_ids":["674643"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"34741","name":"mobile app"},{"id":"399","name":"physical activity"},{"id":"192845","name":" activity, fun"},{"id":"183904","name":"healthy 
choices"},{"id":"4073","name":"fitness"},{"id":"123671","name":"fitness tracking"},{"id":"33601","name":"health and fitness"},{"id":"10199","name":"Daily Digest"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"675196":{"#nid":"675196","#data":{"type":"news","title":"Middle Schoolers\u2019 Feedback Informs New Approach to AI-based Museum Exhibits","body":[{"value":"\u003Cp\u003EResearchers at Georgia Tech are creating accessible museum exhibits that explain artificial intelligence (AI) to middle school students, including the LuminAI interactive AI-based dance partner developed by Regents\u0027 Professor Brian Magerko.\u003C\/p\u003E\u003Cp\u003EPh.D. students Yasmine Belghith and Atefeh Mahdavi co-led a study in a museum setting that observed how middle schoolers interact with the popular AI chatbot ChatGPT.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s important for museums, especially science museums, to start incorporating these kinds of exhibits about AI and about using AI so the general population can have that avenue to interact with it and transfer that knowledge to everyday tools,\u201d Belghith said.\u003C\/p\u003E\u003Cp\u003EBelghith and Mahdavi conducted their study with nine focus groups of 24 students at Chicago\u2019s \u003Ca href=\u0022https:\/\/www.msichicago.org\/\u0022\u003E\u003Cstrong\u003EMuseum of Science and Industry\u003C\/strong\u003E\u003C\/a\u003E. 
The team used the findings to inform their design of AI exhibits that the museum could display as early as 2025.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EBelghith is a Ph.D. student in human-centered computing. Her advisor is Assistant Professor Jessica Roberts in the School of Interactive Computing. Magerko advises Mahdavi, a Ph.D. student in digital media in the School of Literature, Media, and Communication.\u003C\/p\u003E\u003Cp\u003EBelghith and Mahdavi presented a paper about their study in May at the Association for Computing Machinery (ACM) 2024 Conference on Human Factors in Computing Systems (CHI) in Honolulu, Hawaii.\u003C\/p\u003E\u003Cp\u003ETheir work is part of a National Science Foundation (NSF) grant dedicated to fostering AI literacy among middle schoolers in informal environments.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EExpanding Accessibility\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EWhile there are existing efforts to reach students in the classroom, the researchers believe AI education is most accessible in informal learning environments like museums.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s a need today for everybody to have some sort of AI literacy,\u201d Belghith said. 
\u201cMany middle schoolers will not be taking computer science courses or pursuing computer science careers, so there needs to be interventions to teach them what they should know about AI.\u201d\u003C\/p\u003E\u003Cp\u003EThe researchers found that most of the middle schoolers interacted with ChatGPT to either test its knowledge by prompting it to answer questions or socialize with it by having human-like conversations.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOthers fit the mold of \u201ccontent explorers.\u201d They did not engage with the AI aspect of ChatGPT and focused more on the content it produced.\u003C\/p\u003E\u003Cp\u003EMahdavi said regardless of their approach, students would get \u201ctunnel vision\u201d in their interactions instead of exploring more of the AI\u2019s capabilities.\u003C\/p\u003E\u003Cp\u003E\u201cIf they go in a certain direction, they will continue to explore that,\u201d Mahdavi said. \u201cOne thing we can learn from this is to nudge kids and show them there are other things you can do with AI tools or get them to think about it another way.\u201d\u003C\/p\u003E\u003Cp\u003EThe researchers also paid attention to what was missing in the students\u2019 responses, which Mahdavi said was just as important as what they did talk about.\u003C\/p\u003E\u003Cp\u003E\u201cNone of them mentioned anything about ethics or what could be problematic about AI,\u201d she said. \u201cThat told us there\u2019s something they aren\u2019t thinking about but should be. We take that into account as we think about future exhibits.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EMaking an Impact\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe researchers visited the Museum of Science and Industry June 1-2 to conduct the first trial run of three AI-based exhibits they\u2019ve created. 
One of them is LuminAI, which was developed in \u003Ca href=\u0022https:\/\/expressivemachinery.gatech.edu\/\u0022\u003E\u003Cstrong\u003EMagerko\u2019s Expressive Machinery Lab\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003ELuminAI is an interactive art installation that allows people to engage in collaborative movement with an AI dance partner. Georgia Tech and Kennesaw State recently held the \u003Ca href=\u0022https:\/\/www.kennesaw.edu\/arts\/news\/posts\/lumin_ai_performance_collaboration.php\u0022\u003E\u003Cstrong\u003Efirst performance\u003C\/strong\u003E\u003C\/a\u003E of AI avatars dancing with human partners in front of a live audience.\u003C\/p\u003E\u003Cp\u003EDuri Long, a former Georgia Tech Ph.D. student who is now an assistant professor at Northwestern University, designed the second exhibit. KnowledgeNet is an interactive tabletop exhibit in which visitors build semantic networks by adding different characteristics to characters that interact together.\u003C\/p\u003E\u003Cp\u003EThe third exhibit, Data Bites, prompts users to build datasets of pizzas and sandwiches. Their selections train a machine-learning classifier in real time.\u003C\/p\u003E\u003Cp\u003EBelghith said the exhibits fostered conversations about AI between parents and children.\u003C\/p\u003E\u003Cp\u003E\u201cThe exhibit prototypes successfully engaged children in creative activities,\u201d she said. \u201cMany parents had to pull their kids away to continue their museum tour because the kids wanted more time to try different creations or dance moves.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EResearchers at Georgia Tech are creating accessible museum exhibits that explain artificial intelligence (AI) to middle school students, including the LuminAI interactive AI-based dance partner developed by Regents\u0027 Professor Brian Magerko.\u003C\/p\u003E\u003Cp\u003EPh.D. 
students Yasmine Belghith and Atefeh Mahdavi co-led a study in a museum setting that observed how middle schoolers interact with the popular AI chatbot ChatGPT.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EBelghith and Mahdavi conducted their study with nine focus groups of 24 students at Chicago\u2019s \u003Ca href=\u0022https:\/\/www.msichicago.org\/\u0022\u003E\u003Cstrong\u003EMuseum of Science and Industry\u003C\/strong\u003E\u003C\/a\u003E. The team used the findings to inform their design of AI exhibits that the museum could display as early as 2025.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Partnering with Chicago\u0027s Museum of Science and Industry, researchers at Georgia Tech are creating accessible museum exhibits that explain artificial intelligence (AI) to middle school students."}],"uid":"36530","created_gmt":"2024-06-24 19:03:25","changed_gmt":"2024-07-17 14:05:31","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-06-21T00:00:00-04:00","iso_date":"2024-06-21T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674234":{"id":"674234","type":"image","title":"RS5939_COTA_240502_AIDance_MY_0368.jpg","body":null,"created":"1719255844","gmt_created":"2024-06-24 19:04:04","changed":"1719255844","gmt_changed":"2024-06-24 19:04:04","alt":"LuminAI performance","file":{"fid":"257724","name":"RS5939_COTA_240502_AIDance_MY_0368.jpg","image_path":"\/sites\/default\/files\/2024\/06\/24\/RS5939_COTA_240502_AIDance_MY_0368.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/06\/24\/RS5939_COTA_240502_AIDance_MY_0368.jpg","mime":"image\/jpeg","size":118977,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/06\/24\/RS5939_COTA_240502_AIDance_MY_0368.jpg?itok=FFJyZ-qv"}}},"media_ids":["674234"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of 
Interactive Computing"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"42901","name":"Community"},{"id":"42911","name":"Education"},{"id":"42921","name":"Exhibitions"},{"id":"42891","name":"Georgia Tech Arts"},{"id":"148","name":"Music and Music Technology"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"2556","name":"artificial intelligence"},{"id":"4299","name":"middle school"},{"id":"193070","name":"AI education"},{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer I\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"675255":{"#nid":"675255","#data":{"type":"news","title":"Meet VAL, an AI Teammate That Can Adapt to Your Tendencies","body":[{"value":"\u003Cp\u003EA team\u2019s success in any competitive environment often hinges on how well each member can anticipate the actions of their teammates.\u003C\/p\u003E\u003Cp\u003EAssistant Professor \u003Ca href=\u0022https:\/\/chrismaclellan.com\/\u0022\u003E\u003Cstrong\u003EChristopher MacLellan\u003C\/strong\u003E\u003C\/a\u003E thinks teachable artificial intelligence (AI) agents are uniquely suited for this role and make ideal teammates for video gamers.\u003C\/p\u003E\u003Cp\u003EWith the help of funding from the U.S. 
Department of Defense, MacLellan hopes to prove his theory with a conversational, task-performing agent he co-engineered called the Verbal Apprentice Learner (VAL).\u003C\/p\u003E\u003Cp\u003E\u201cYou need the ability to adapt to what your teammates are doing to be an effective teammate,\u201d MacLellan said. \u201cWe\u2019re exploring this capability for AI agents in the context of video games.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EUnlike generative AI chatbots like ChatGPT, VAL uses an interactive task-learning approach.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cVAL learns how you do things in the way you want them done,\u201d MacLellan said. \u201cWhen you tell it to do something, it will do it the way you taught it instead of some generic random way from the internet.\u201d\u003C\/p\u003E\u003Cp\u003EA key difference between VAL and a chatbot is that VAL can perceive and act within the gaming world. A chatbot, like ChatGPT, only perceives and acts within the chat dialog.\u003C\/p\u003E\u003Cp\u003EMacLellan immersed VAL into an open-source, simplified version of the popular Nintendo cooperative video game Overcooked to discover how well the agent can function as a teammate. In Overcooked, up to four players work together to prepare dishes in a kitchen while earning points for every completed order.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EHow Fast Can VAL Learn?\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EIn a study with 12 participants, MacLellan found that users could often correctly teach VAL new tasks with only a few examples.\u003C\/p\u003E\u003Cp\u003EFirst, the user must teach VAL how to play the game. Knowing that a single human error could compromise results, MacLellan designed three precautionary features:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EWhen VAL receives a command such as \u201ccook an onion,\u201d it asks clarifying questions to understand and confirm its task. 
As VAL continues to learn, clarification prompts decrease.\u003C\/li\u003E\u003Cli\u003EAn \u201cundo\u201d button ensures users can reverse an errant command.\u003C\/li\u003E\u003Cli\u003EVAL contains GPT subcomponents to interpret user input, allowing it to adapt to ambiguous commands and typos. The GPT subcomponents drive changes in VAL\u2019s task knowledge, which it uses to perform tasks without additional guidance.\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EThe participants in MacLellan\u2019s study used these features to ensure VAL learned the tasks correctly.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe high volume of prompts creates a more tedious experience. Still, MacLellan said it provides detailed data on system performance and user experience. That insight should make designing a more seamless experience in future versions of VAL possible.\u003C\/p\u003E\u003Cp\u003EThe prompts also require the AI to be explainable.\u003C\/p\u003E\u003Cp\u003E\u201cWhen VAL learns something, it uses the language model to label each node in the task knowledge graph that the system constructs,\u201d MacLellan said. \u201cYou can see what it learned and how it breaks tasks down into actions.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EBeyond Gaming\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EMacLellan\u2019s \u003Ca href=\u0022https:\/\/tail.cc.gatech.edu\/\u0022\u003E\u003Cstrong\u003ETeachable AI Lab\u003C\/strong\u003E\u003C\/a\u003E is devoted to developing AI that inexperienced users can train.\u003C\/p\u003E\u003Cp\u003E\u201cWe are trying to come up with a more usable system where anyone, including people with limited expertise, could come in and interact with the agent and be able to teach it within just five minutes of interacting with it for the first time,\u201d he said.\u003C\/p\u003E\u003Cp\u003EHis work caught the attention of the Department of Defense, which awarded MacLellan multiple grants to fund several of his projects, including VAL. 
The possibilities of how the DoD could use VAL, on and off the battlefield, are innumerable.\u003C\/p\u003E\u003Cp\u003E\u201c(The DoD) envisions a future in which people and AI agents jointly work together to solve problems,\u201d MacLellan said. \u201cYou need the ability to adapt to what your teammates are doing to be an effective teammate.\u003C\/p\u003E\u003Cp\u003E\u201cWe look at the dynamics of different teaming circumstances and consider what are the right ways to team AI agents with people. The key hypothesis for our project is agents that can learn on the fly and adapt to their users will make better teammates than those that are pre-trained like GPT.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EDesign Your Own Agent\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EMacLellan is co-organizing a gaming agent design competition sponsored by the Institute of Electrical and Electronics Engineers (IEEE) 2024 \u003Ca href=\u0022https:\/\/2024.ieee-cog.org\/\u0022\u003E\u003Cstrong\u003EConference on Games\u003C\/strong\u003E\u003C\/a\u003E in Milan, Italy.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/strong-tact.github.io\/\u0022\u003E\u003Cstrong\u003EThe Dice Adventure Competition\u003C\/strong\u003E\u003C\/a\u003E invites participants to design their own AI agent to play a multi-player, turn-based dungeon-crawling game or to play the game as a human teammate. 
The competition, held this month and in July, offers $1,000 in prizes for players and agent developers in the top three teams.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA team\u2019s success in any competitive environment often hinges on how well each member can anticipate the actions of their teammates.\u003C\/p\u003E\u003Cp\u003EAssistant Professor \u003Ca href=\u0022https:\/\/chrismaclellan.com\/\u0022\u003E\u003Cstrong\u003EChristopher MacLellan\u003C\/strong\u003E\u003C\/a\u003E thinks teachable artificial intelligence (AI) agents are uniquely suited for this role and make ideal teammates for video gamers.\u003C\/p\u003E\u003Cp\u003EWith the help of funding from the U.S. Department of Defense, MacLellan hopes to prove his theory with a conversational, task-performing agent he co-engineered called the Verbal Apprentice Learner (VAL).\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new AI teammate developed by Assistant Professor Christopher MacLellan could be the ideal co-op video game partner."}],"uid":"36530","created_gmt":"2024-06-27 17:55:24","changed_gmt":"2024-07-17 14:05:01","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-06-27T00:00:00-04:00","iso_date":"2024-06-27T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674252":{"id":"674252","type":"image","title":"VAL_86A1504-Enhanced-NR.jpg","body":null,"created":"1719510932","gmt_created":"2024-06-27 17:55:32","changed":"1719510932","gmt_changed":"2024-06-27 17:55:32","alt":"A female student wears the Meta Quest VR headset with two men standing behind 
her","file":{"fid":"257746","name":"VAL_86A1504-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2024\/06\/27\/VAL_86A1504-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/06\/27\/VAL_86A1504-Enhanced-NR.jpg","mime":"image\/jpeg","size":138089,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/06\/27\/VAL_86A1504-Enhanced-NR.jpg?itok=Oz9nUZQO"}}},"media_ids":["674252"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"91511","name":"Video gaming"},{"id":"2356","name":"gaming"},{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"675434":{"#nid":"675434","#data":{"type":"news","title":"Visualization Tool Helps Oceanographers Predict Sediment Sample Hotspots","body":[{"value":"\u003Cp\u003EA new data visualization tool designed by a Georgia Tech Ph.D. 
student is helping a team of microbial ecologists, geobiologists, and oceanographers gain more insight into how deep-sea microorganisms interact within their environment.\u003C\/p\u003E\u003Cp\u003EWhat began as an internship at NASA turned into a unique opportunity for fourth-year Ph.D. student Adam Coscia. Coscia worked under the supervision of an interdisciplinary team of researchers from Caltech, the \u003Ca href=\u0022https:\/\/www.jpl.nasa.gov\/\u0022\u003E\u003Cstrong\u003EJet Propulsion Laboratory\u003C\/strong\u003E\u003C\/a\u003E (JPL), which Caltech manages for NASA, and the \u003Ca href=\u0022https:\/\/www.artcenter.edu\/\u0022\u003E\u003Cstrong\u003EArtCenter College of Design\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003ECoscia\u2019s mentors recommended him to a Caltech research team led by Victoria Orphan, a renowned microbial ecologist who studies microbial communities in the ocean and how they function within habitats in deep seafloor sediments.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOrphan and her team, \u003Ca href=\u0022https:\/\/www.gps.caltech.edu\/people\/victoria-j-orphan\u0022\u003E\u003Cstrong\u003Ethe Orphan Lab at Caltech\u003C\/strong\u003E\u003C\/a\u003E, have conducted their research since 2004. They recently decided to take a data visualization approach to record their findings and plan future expeditions.\u003C\/p\u003E\u003Cp\u003E\u201cHistorically, our data sets have been discrete and have lived in separate Excel spreadsheets,\u201d Orphan said. \u201cMaybe at the end, we\u2019ll do some statistical analysis to find correlations in that data. Then we compare those to our maps. We didn\u2019t have a way of consolidating everything under one umbrella that allows us to learn more about these ecosystems.\u201d\u003C\/p\u003E\u003Cp\u003EOrphan said her team typically takes one or two research expeditions off the California coast annually. 
They spend three weeks using remotely operated vehicles (ROVs) to collect sediment samples from the ocean floor. Because time is at a premium, identifying the locations of the best samples is crucial.\u003C\/p\u003E\u003Cp\u003EOrphan is also an adjunct scientist at the \u003Ca href=\u0022https:\/\/www.mbari.org\/\u0022\u003E\u003Cstrong\u003EMonterey Bay Aquarium Research Institute (MBARI)\u003C\/strong\u003E\u003C\/a\u003E and works with the \u003Ca href=\u0022https:\/\/www.mbari.org\/team\/seafloor-mapping\/\u0022\u003E\u003Cstrong\u003ESeafloor Mapping Lab\u003C\/strong\u003E\u003C\/a\u003E. The lab uses an ROV-mounted low-altitude survey system to produce detailed maps of seafloor topography.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ETo help the Orphan Lab work effectively with topographic and photographic data, Coscia designed \u003Ca href=\u0022https:\/\/adamcoscia.com\/papers\/deepsee\/\u0022\u003E\u003Cstrong\u003EDeepSee\u003C\/strong\u003E\u003C\/a\u003E, an interactive web-based tool that annotates and charts data using 3D visualization models and environmental maps.\u003C\/p\u003E\u003Cp\u003E\u201cThe idea is once you have the samples, and you\u2019re interested in a specific area with prior samples, you can go in and annotate on the map where to collect samples next with our drawing tool,\u201d Coscia said.\u003C\/p\u003E\u003Cp\u003E\u201cWe focused on the exploration and notetaking process with maps and data and having new ways of visualizing it. Scientists can draw and map out all their samples in real time. They can reference specific data much easier and determine where the team should go to get the best samples.\u201d\u003C\/p\u003E\u003Cp\u003EThe Orphan Lab has taken DeepSee live onboard its ship for its two most recent expeditions. 
Orphan has noticed increased efficiency in expedition planning.\u003C\/p\u003E\u003Cp\u003E\u201cThe infrastructure put in place by Adam will make this an enabling tool not only for my group but for other oceanographers and scientists in other fields \u2014 anywhere there is a spatial distribution of information you want to connect to other metadata,\u201d she said.\u003C\/p\u003E\u003Cp\u003EOrphan brings new researchers into her lab at Caltech every year, and DeepSee has accelerated the process of getting newcomers up to speed.\u003C\/p\u003E\u003Cp\u003E\u201cWe can onboard them much easier and give them a sense of what data is available and where we\u2019ve collected information in a way that\u2019s much clearer than having them refer to an Excel spreadsheet,\u201d she said.\u003C\/p\u003E\u003Cp\u003EDeepSee also creates 3D data models of what lies beneath the seafloor using data interpolation, which estimates new data points within the range of a set of known data points. From the known data points, DeepSee estimates the data quality researchers may find in nearby locations or farther beneath the surface where samples were collected.\u003C\/p\u003E\u003Cp\u003E\u201cYou would never see anything visually below the sea floor,\u201d Coscia said. \u201cYou\u2019d have to go dig. But our 3D models show you that you might have data suggesting a hotspot just a few feet below the floor. That tells you where to sample next.\u201d\u003C\/p\u003E\u003Cp\u003ECoscia aims to incorporate machine learning (ML) models into a future version of DeepSee that will use collected data to predict future sites for sampling. 
However, ML model accuracy requires significantly more data.\u003C\/p\u003E\u003Cp\u003ECoscia hopes the current version of the tool catches on so researchers can more easily incorporate machine learning into their work.\u003C\/p\u003E\u003Cp\u003EFor now, the current version has plenty of uses, he said.\u003C\/p\u003E\u003Cp\u003E\u201cBeing able to organize and see your data, especially with maps, is always valuable,\u201d he said. \u201cMy passion is helping researchers and scientists see their data in new and valuable ways.\u201d\u003C\/p\u003E\u003Cp\u003ECoscia authored a paper on developing DeepSee, which he presented in May at the Conference on Human Factors in Computing Systems (CHI) in Honolulu, Hawaii.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EWhat began as an internship at NASA turned into a unique opportunity for fourth-year Ph.D. student Adam Coscia. Coscia worked under the supervision of an interdisciplinary team of collaborative researchers from Caltech, the \u003Ca href=\u0022https:\/\/www.jpl.nasa.gov\/\u0022\u003E\u003Cstrong\u003EJet Propulsion Laboratory\u003C\/strong\u003E\u003C\/a\u003E (JPL) Caltech manages for NASA and the \u003Ca href=\u0022https:\/\/www.artcenter.edu\/\u0022\u003E\u003Cstrong\u003EArtCenter College of Design\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003ECoscia\u2019s mentors recommended him to a Caltech research team led by Victoria Orphan, a renowned microbial ecologist who studies microbial communities in the ocean and how they function within habitats in deep seafloor sediments.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOrphan and her team, \u003Ca href=\u0022https:\/\/www.gps.caltech.edu\/people\/victoria-j-orphan\u0022\u003E\u003Cstrong\u003Ethe Orphan Lab at Caltech\u003C\/strong\u003E\u003C\/a\u003E, have conducted their research since 2004. 
They recently decided to use data visualization to record their findings and plan future expeditions.\u003C\/p\u003E\u003Cp\u003ETo help the Orphan Lab work effectively with topographic and photographic data, Coscia designed \u003Ca href=\u0022https:\/\/adamcoscia.com\/papers\/deepsee\/\u0022\u003E\u003Cstrong\u003EDeepSee\u003C\/strong\u003E\u003C\/a\u003E, an interactive web-based tool that annotates and charts data using 3D visualization models and environmental maps.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new data visualization tool designed by a Georgia Tech Ph.D. student is helping a team of microbial ecologists, geobiologists, and oceanographers gain more insight into how deep-sea microorganisms interact within their environment."}],"uid":"36530","created_gmt":"2024-07-11 16:59:30","changed_gmt":"2024-07-12 13:47:54","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-07-11T00:00:00-04:00","iso_date":"2024-07-11T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674331":{"id":"674331","type":"image","title":"victoria copy 2.jpg","body":null,"created":"1720717182","gmt_created":"2024-07-11 16:59:42","changed":"1720717182","gmt_changed":"2024-07-11 16:59:42","alt":"Scientists look at live feed from the ocean floor","file":{"fid":"257831","name":"victoria copy 2.jpg","image_path":"\/sites\/default\/files\/2024\/07\/11\/victoria%20copy%202.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/07\/11\/victoria%20copy%202.jpg","mime":"image\/jpeg","size":385104,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/07\/11\/victoria%20copy%202.jpg?itok=eqbCKi82"}}},"media_ids":["674331"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[{"id":"135","name":"Research"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"},{"id":"175805","name":"College of Computing visualization lab"},{"id":"38921","name":"data visualization"}],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"}],"news_room_topics":[{"id":"71911","name":"Earth and Environment"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003EGeorgia Tech School of Interactive Computing\u003C\/p\u003E\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"675288":{"#nid":"675288","#data":{"type":"news","title":"Episode of \u0027Friends\u0027 Inspires New Tool that Provides Human-like Perception to MLLMs","body":[{"value":"\u003Cp\u003EFor Jitesh Jain, conducting a simple experiment while watching one of his favorite TV series became the genesis of a paper accepted into a prestigious computer vision conference.\u003C\/p\u003E\u003Cp\u003EJain is the creator of VCoder, a new tool that enhances the visual perception capabilities of multimodal large language models (MLLMs). Jain said MLLMs like GPT-4 with vision (GPT-4V) are prone to miss obscure objects that blend in with other objects in an image.\u003C\/p\u003E\u003Cp\u003EJain paused his TV as he watched \u003Cem\u003EThe One with the Halloween Party\u003C\/em\u003E episode of the popular TV series \u003Cem\u003EFriends\u003C\/em\u003E.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EChandler stood out the most in a pink bunny costume while holding hands with Ross in a potato costume. 
As the two prepared for an arm-wrestling match with Joey and Phoebe, multiple groups socialized behind them.\u003C\/p\u003E\u003Cp\u003EJain wondered how accurate GPT-4V would be if he prompted it to describe everything happening in this image.\u003C\/p\u003E\u003Cp\u003E\u201cI watch a lot of TV series, so I frequently think about ways to leverage or include some aspects of those into my work,\u201d said Jain, a Ph.D. student in the School of Interactive Computing. \u201cThe scene was very cluttered, so I thought, what questions could I ask GPT-4 about this show?\u201d\u003C\/p\u003E\u003Cp\u003EOn the surface, the generative AI chatbot knew much about the image. It knew which show and episode it was from and recognized the man in the bunny costume as Chandler. It knew the main characters were hosting a Halloween party.\u003C\/p\u003E\u003Cp\u003EBut when Jain prompted the chatbot to count the number of people in the image, he discovered that GPT-4V and its open-source counterparts fell short of performing the simplest task.\u003C\/p\u003E\u003Cp\u003EIt answered 10 when the correct answer was 14. In the right corner of the image, there is a group of people standing in front of a dark curtain that GPT-4V had missed.\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EAI Paradox\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EJain had a theory \u2014 the MLLMs had not been trained for the object perception task and did not have the necessary information to perceive the objects in the foreground and background.\u003C\/p\u003E\u003Cp\u003E\u201cWe started testing it with different pictures, and GPT-4V kept underperforming,\u201d Jain said. \u201cThe key takeaway is that it couldn\u2019t do a simple task such as counting the people in the scene, but it knew complex information such as what was happening and who the characters were. 
This phenomenon is Moravec\u2019s Paradox in Perception \u2014 the MLLMs visually reason about complex questions but fail at simple object perception tasks like counting.\u201d\u003C\/p\u003E\u003Cp\u003EJain said he has worked on image segmentation tools for the past two years, including a research internship at Picsart AI under Humphrey Shi, now his Ph.D. advisor and an associate professor in the School of Interactive Computing.\u003C\/p\u003E\u003Cp\u003EThe core idea behind VCoder is to act as a perceptive eye for the MLLM, using segmentation and depth maps obtained through established computer vision frameworks with minimal training costs. The tool also proposes evaluation metrics for object perception tasks like counting and ordering.\u003C\/p\u003E\u003Cp\u003EIts training and evaluation set consists of images from Common Objects in Context (COCO), a widely used object perception dataset. Associate Professor James Hays from the School of Interactive Computing was one of the academic collaborators who worked with Microsoft to create COCO.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EImproving MLLMs\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThough VCoder didn\u2019t know which show the image was from, it accurately described everything, including the number of people. It was as much as 10% more accurate than its nearest competitor.\u003C\/p\u003E\u003Cp\u003EIt could also identify the order of objects in a scene.\u003C\/p\u003E\u003Cp\u003EJain designed VCoder to integrate easily with existing MLLMs. 
He said augmenting MLLMs with VCoder leads to an MLLM with sound general reasoning and object perception capabilities.\u003C\/p\u003E\u003Cp\u003EHowever, he added that he was unsure if integration would happen because companies like OpenAI, which created GPT-4V, may overlook it.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s no way to know if they will integrate since GPT-4V is a closed model, and their main motivation is to make a product useful to consumers in general,\u201d he said. \u201cThey often ignore these small issues.\u201d\u003C\/p\u003E\u003Cp\u003EJain\u2019s paper was accepted into the Institute of Electrical and Electronics Engineers\u2019 2024 Conference on Computer Vision and Pattern Recognition (CVPR), held June 17-21 in Seattle. CVPR is the highest-ranked conference in computer vision according to Google Scholar.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EFor Jitesh Jain, conducting a simple experiment while watching one of his favorite TV series became the genesis of a paper accepted into a prestigious computer vision conference.\u003C\/p\u003E\u003Cp\u003EJain is the creator of VCoder, a new tool that enhances the visual perception capabilities of multimodal large language models (MLLMs). 
Jain said MLLMs like GPT-4 with vision (GPT-4V) are prone to miss obscure objects that blend in with other objects in an image.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Jitesh Jain is the creator of VCoder, a new tool that enhances the visual perception capabilities of multimodal large language models (MLLMs)"}],"uid":"36530","created_gmt":"2024-07-01 18:36:09","changed_gmt":"2024-07-01 18:37:57","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-06-18T00:00:00-04:00","iso_date":"2024-06-18T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674279":{"id":"674279","type":"image","title":"2X6A9720.jpg","body":null,"created":"1719858982","gmt_created":"2024-07-01 18:36:22","changed":"1719858982","gmt_changed":"2024-07-01 18:36:22","alt":"Jitesh Jain and Humphrey Shi","file":{"fid":"257775","name":"2X6A9720.jpg","image_path":"\/sites\/default\/files\/2024\/07\/01\/2X6A9720.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/07\/01\/2X6A9720.jpg","mime":"image\/jpeg","size":3563310,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/07\/01\/2X6A9720.jpg?itok=RwAeH0kF"}}},"media_ids":["674279"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESchool of Interactive 
Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"675254":{"#nid":"675254","#data":{"type":"news","title":" College of Computing Alumna Wins ACM Dissertation Award","body":[{"value":"\u003Cp\u003EA College of Computing alumna has earned the highest honor given to doctoral candidates.\u003C\/p\u003E\u003Cp\u003ENivedita Arora received the \u003Ca href=\u0022https:\/\/www.acm.org\/media-center\/2024\/june\/dissertation-award-2023\u0022\u003E\u003Cstrong\u003E2024 Association for Computing Machinery (ACM) Doctoral Dissertation Award\u003C\/strong\u003E\u003C\/a\u003E during an awards ceremony on Saturday in San Francisco. Arora, an assistant professor at Northwestern University, is the first Georgia Tech alumna to win the award, which includes a prize of $20,000.\u003C\/p\u003E\u003Cp\u003EArora was a postdoctoral researcher at Georgia Tech\u2019s School of Interactive Computing during the 2022-2023 academic year. She also earned her Ph.D. in computer science and her master\u2019s in human-computer interaction from Georgia Tech.\u003C\/p\u003E\u003Cp\u003EAt Northwestern, she directs the\u0026nbsp;\u003Ca href=\u0022https:\/\/vaklab.wordpress.com\/\u0022\u003E\u003Cstrong\u003EVAK Sustainable Computing Lab\u003C\/strong\u003E\u003C\/a\u003E, which re-envisions computing from a sustainability-first approach.\u003C\/p\u003E\u003Cp\u003E\u201cThe ACM Doctoral Dissertation Award is the most prestigious recognition for doctoral research in our field,\u201d said \u003Ca href=\u0022https:\/\/josiahhester.com\/cv\/\u0022\u003E\u003Cstrong\u003EJosiah Hester\u003C\/strong\u003E\u003C\/a\u003E, an associate professor in the School of Interactive Computing who mentored Arora during her postdoc. 
\u201cThe award is a testament to the recipient\u0027s exceptional contributions to the field of computing, marking them as a world-class leader and innovator.\u201d\u003C\/p\u003E\u003Cp\u003EArora creates sustainable computational materials that harvest energy from their surrounding environments and can be responsibly disposed of at the end of their life cycles. Under the advisement of Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/thad-starner\u0022\u003E\u003Cstrong\u003EThad Starner\u003C\/strong\u003E\u003C\/a\u003E and former Georgia Tech Professor Gregory Abowd, she won the dissertation award for her work involving interactive sticky notes.\u003C\/p\u003E\u003Cp\u003EThe interactive sticky notes perform computing tasks and allow wireless communication without battery dependency.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThrough her \u003Ca href=\u0022https:\/\/repository.gatech.edu\/entities\/publication\/2528c1f9-789b-4fd7-8184-b40933c0c6c4\u0022\u003E\u003Cstrong\u003Edissertation\u003C\/strong\u003E\u003C\/a\u003E, \u003Cem\u003ESustainable Interactive Wireless Stickers: From Materials to Devices on Applications\u003C\/em\u003E, Arora demonstrated that interactive sticky notes can capture audio, store it as memory, and relay it to another location. For example, an Amazon Alexa user can communicate commands to Alexa without being nearby.\u003C\/p\u003E\u003Cp\u003E\u201cWith rising climate change and e-waste, it is imperative to build computing technologies with a sustainability-first approach,\u201d Arora said. \u201cMy dissertation represents this core thinking. I am honored that ACM has recognized my research on sustainable computational materials. I am extremely grateful to my advisers, collaborators, friends, and family for their support.\u201d\u003C\/p\u003E\u003Cp\u003EHer dissertation also earned Outstanding Dissertation recognition from Georgia Tech\u2019s College of Computing in 2023. 
She also won the college\u2019s 2022 Outstanding Graduate Research Assistant Award.\u003C\/p\u003E\u003Cp\u003EArora was a finalist in the 2022 Fast Company Design Innovation Competition. In 2021, she won the ACM Gaetano Borriello Outstanding Ubiquitous Computing Student Award and was named an EECS Rising Star and a Foley Scholar.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ENivedita Arora received the \u003Ca href=\u0022https:\/\/www.acm.org\/media-center\/2024\/june\/dissertation-award-2023\u0022\u003E\u003Cstrong\u003E2024 Association for Computing Machinery (ACM) Doctoral Dissertation Award\u003C\/strong\u003E\u003C\/a\u003E during an awards ceremony on Saturday in San Francisco. Arora, an assistant professor at Northwestern University, is the first Georgia Tech alumna to win the award, which includes a prize of $20,000.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Nivedita Arora received the 2024 Association for Computing Machinery (ACM) Doctoral Dissertation Award during an awards ceremony on Saturday in San Francisco."}],"uid":"36530","created_gmt":"2024-06-27 17:44:03","changed_gmt":"2024-06-27 17:47:58","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-06-25T00:00:00-04:00","iso_date":"2024-06-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674251":{"id":"674251","type":"image","title":"Untitled 2.001.jpeg","body":null,"created":"1719510287","gmt_created":"2024-06-27 17:44:47","changed":"1719510287","gmt_changed":"2024-06-27 17:44:47","alt":"Nivedita Arora receiving the ACM Doctoral Dissertation Award","file":{"fid":"257745","name":"Untitled 
2.001.jpeg","image_path":"\/sites\/default\/files\/2024\/06\/27\/Untitled%202.001.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/06\/27\/Untitled%202.001.jpeg","mime":"image\/jpeg","size":484885,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/06\/27\/Untitled%202.001.jpeg?itok=47mylSdw"}}},"media_ids":["674251"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"130","name":"Alumni"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"506","name":"alumni"},{"id":"171949","name":"Alumni Awards"},{"id":"175978","name":"#sheisgtcomputing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"675021":{"#nid":"675021","#data":{"type":"news","title":" Ph.D. Student Wins Best Paper at Robotics Conference","body":[{"value":"\u003Cp\u003EAsk a person to find a frying pan, and they will most likely go to the kitchen. Ask a robot to do the same, and you may get numerous responses, depending on how the robot is trained.\u003C\/p\u003E\u003Cp\u003ESince humans often associate objects in a home with the room they are in, Naoki Yokoyama thinks robots that navigate human environments to perform assistive tasks should mimic that reasoning.\u003C\/p\u003E\u003Cp\u003ERoboticists have employed natural language models to help robots mimic human reasoning over the past few years. However, Yokoyama, a Ph.D. 
student in robotics, said these models create a \u201cbottleneck\u201d that prevents agents from picking up on visual cues such as room type, size, d\u00e9cor, and lighting.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EYokoyama presented a new framework for semantic reasoning at the Institute of Electrical and Electronic Engineers (IEEE) \u003Ca href=\u0022https:\/\/www.ieee-ras.org\/conferences-workshops\/fully-sponsored\/icra\u0022\u003E\u003Cstrong\u003EInternational Conference on Robotics and Automation\u003C\/strong\u003E\u003C\/a\u003E (ICRA) last month in Yokohama, Japan. ICRA is the world\u2019s largest robotics conference.\u003C\/p\u003E\u003Cp\u003EYokoyama earned a best paper award in the Cognitive Robotics category with his \u003Ca href=\u0022http:\/\/naoki.io\/portfolio\/vlfm\u0022\u003E\u003Cstrong\u003EVision-Language Frontier Maps (VLFM) proposal\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EAssistant Professor Sehoon Ha and Associate Professor Dhruv Batra from the School of Interactive Computing advised Yokoyama on the paper. Yokoyama authored the paper while interning at the Boston Dynamics\u2019 \u003Ca href=\u0022https:\/\/theaiinstitute.com\/\u0022\u003E\u003Cstrong\u003EAI Institute\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003E\u201cI think the cognitive robotic category represents a significant portion of submissions to ICRA nowadays,\u201d said Yokoyama, whose family is from Japan. \u201cI\u2019m grateful that our work is being recognized among the best in this field.\u201d\u003C\/p\u003E\u003Cp\u003EInstead of natural language models, Yokoyama used a renowned vision-language model called BLIP-2 and tested it on a Boston Dynamics \u201cSpot\u201d robot in home and office environments.\u003C\/p\u003E\u003Cp\u003E\u201cWe rely on models that have been trained on vast amounts of data collected from the web,\u201d Yokoyama said. \u201cThat allows us to use models with common sense reasoning and world knowledge. 
It\u2019s not limited to a typical robot learning environment.\u201d\u003C\/p\u003E\u003Ch6\u003E\u003Cstrong\u003EWhat is BLIP-2?\u003C\/strong\u003E\u003C\/h6\u003E\u003Cp\u003EBLIP-2 matches images to text by assigning a score that evaluates how well the user input text describes the content of an image. The model removes the need for the robot to use object detectors and language models.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EInstead, the robot uses BLIP-2 to extract semantic values from RGB images with a text prompt that includes the target object.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EBLIP-2 then teaches the robot to recognize the room type, distinguishing the living room from the bathroom and the kitchen. The robot learns to associate certain objects with specific rooms where it will likely find them.\u003C\/p\u003E\u003Cp\u003EFrom here, the robot creates a value map to determine the most likely locations for a target object, Yokoyama said.\u003C\/p\u003E\u003Cp\u003EYokoyama said this is a step forward for intelligent home assistive robots, enabling users to find objects \u2014 like missing keys \u2014 in their homes without knowing an item\u2019s location.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIf you\u2019re looking for a pair of scissors, the robot can automatically figure out it should head to the kitchen or the office,\u201d he said. \u201cEven if the scissors are in an unusual place, it uses semantic reasoning to work through each room from most probable location to least likely.\u201d\u003C\/p\u003E\u003Cp\u003EHe added that the benefit of using a VLM instead of an object detector is that the robot will include visual cues in its reasoning.\u003C\/p\u003E\u003Cp\u003E\u201cYou can look at a room in an apartment, and there are so many things an object detector wouldn\u2019t tell you about that room that would be informative,\u201d he said. 
\u201cYou don\u2019t want to limit yourself to a textual description or a list of object classes because you\u2019re missing many semantic visual cues.\u201d\u003C\/p\u003E\u003Cp\u003EWhile other VLMs exist, Yokoyama chose BLIP-2 because the model:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EAccepts any text length and isn\u2019t limited to a small set of objects or categories.\u003C\/li\u003E\u003Cli\u003EAllows the robot to be pre-trained on vast amounts of data collected from the internet.\u003C\/li\u003E\u003Cli\u003EHas proven results that enable accurate image-to-text matching.\u003C\/li\u003E\u003C\/ul\u003E\u003Ch6\u003E\u003Cstrong\u003EHome, Office, and Beyond\u003C\/strong\u003E\u003C\/h6\u003E\u003Cp\u003EYokoyama also tested the Spot robot to navigate a more challenging office environment. Office spaces tend to be more homogenous and harder to distinguish from one another than rooms in a home.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWe showed a few cases in which the robot will still work,\u201d Yokoyama said. \u201cWe tell it to find a microwave, and it searches for the kitchen. We tell it to find a potted plant, and it moves toward an area with windows because, based on what it knows from BLIP-2, that\u2019s the most likely place to find the plant.\u201d\u003C\/p\u003E\u003Cp\u003EYokoyama said as VLM models continue to improve, so will robot navigation. The increase in the number of VLM models has caused robot navigation to steer away from traditional physical simulations.\u003C\/p\u003E\u003Cp\u003E\u201cIt shows how important it is to keep an eye on the work being done in computer vision and natural language processing for getting robots to perform tasks more efficiently,\u201d he said. \u201cThe current research direction in robot learning is moving toward more intelligent and higher-level reasoning. 
These foundation models are going to play a key role in that.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003ETop photo by Kevin Beasley\/College of Computing.\u003C\/em\u003E\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ERoboticists have employed natural language models to help robots mimic human reasoning over the past few years. However, Yokoyama, a Ph.D. student in robotics, said these models create a \u201cbottleneck\u201d that prevents agents from picking up on visual cues such as room type, size, d\u00e9cor, and lighting.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EYokoyama presented a new framework for semantic reasoning at the Institute of Electrical and Electronic Engineers (IEEE) \u003Ca href=\u0022https:\/\/www.ieee-ras.org\/conferences-workshops\/fully-sponsored\/icra\u0022\u003E\u003Cstrong\u003EInternational Conference on Robotics and Automation\u003C\/strong\u003E\u003C\/a\u003E (ICRA) last month in Yokohama, Japan. 
ICRA is the world\u2019s largest robotics conference.\u003C\/p\u003E\u003Cp\u003EYokoyama earned a best paper award in the Cognitive Robotics category with his \u003Ca href=\u0022http:\/\/naoki.io\/portfolio\/vlfm\u0022\u003E\u003Cstrong\u003EVision-Language Frontier Maps (VLFM) proposal\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Yokoyama presented a new framework for semantic reasoning for robots at the IEEE International Conference on Robotics and Automation, where he won best paper in the Cognitive Robotics category."}],"uid":"36530","created_gmt":"2024-06-06 14:26:46","changed_gmt":"2024-06-06 14:40:32","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-06-06T00:00:00-04:00","iso_date":"2024-06-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674146":{"id":"674146","type":"image","title":"208A9469.jpg","body":null,"created":"1717684031","gmt_created":"2024-06-06 14:27:11","changed":"1717684031","gmt_changed":"2024-06-06 14:27:11","alt":"Three students kneeling around a spot robot","file":{"fid":"257622","name":"208A9469.jpg","image_path":"\/sites\/default\/files\/2024\/06\/06\/208A9469.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/06\/06\/208A9469.jpg","mime":"image\/jpeg","size":153459,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/06\/06\/208A9469.jpg?itok=E1iUHz3L"}}},"media_ids":["674146"],"groups":[{"id":"50876","name":"School of Interactive Computing"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"},{"id":"193157","name":"Student Honors and Achievements"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"187812","name":"artificial 
intelligence (AI)"},{"id":"10199","name":"Daily Digest"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"674848":{"#nid":"674848","#data":{"type":"news","title":"AI4GA Lays Groundwork for NSF-funded Nationwide K-12 AI Curriculum","body":[{"value":"\u003Cp\u003EWorking on a multi-institutional team of investigators, Georgia Tech researchers have helped the state of Georgia become the epicenter for developing K-12 AI educational curriculum nationwide.\u003C\/p\u003E\u003Cp\u003EThe new curriculum introduced by \u003Ca href=\u0022https:\/\/ai4ga.org\/\u0022\u003E\u003Cstrong\u003EArtificial Intelligence for Georgia (AI4GA)\u003C\/strong\u003E\u003C\/a\u003E has taught middle school students to use and understand AI. It\u2019s also equipped middle school teachers to teach the foundations of AI.\u003C\/p\u003E\u003Cp\u003EAI4GA is a branch of a larger initiative, the \u003Ca href=\u0022https:\/\/ai4k12.org\/\u0022\u003E\u003Cstrong\u003EArtificial Intelligence for K-12 (AI4K12)\u003C\/strong\u003E\u003C\/a\u003E. 
Funded by the National Science Foundation and led by researchers from Carnegie Mellon University and the University of Florida, AI4K12 is developing national K-12 guidelines for AI education.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EBryan Cox, the Kapor research fellow in Georgia Tech\u2019s \u003Ca href=\u0022https:\/\/constellations.gatech.edu\/\u0022\u003E\u003Cstrong\u003EConstellation Center for Equity in Computing\u003C\/strong\u003E\u003C\/a\u003E, drove a transformative computer science education initiative when he worked at the \u003Ca href=\u0022https:\/\/www.gadoe.org\/Pages\/Home.aspx\u0022\u003E\u003Cstrong\u003EGeorgia Department of Education\u003C\/strong\u003E\u003C\/a\u003E. Though he is no longer with the DOE, he persuaded the principal investigators of AI4K12 to use Georgia as their testing ground. He became a lead principal investigator for AI4GA.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019re using AI4GA as a springboard to contextualize the need for AI literacy in populations that have the potential to be negatively impacted by AI agents,\u201d Cox said.\u003C\/p\u003E\u003Cp\u003EJudith Uchidiuno, an assistant professor in Georgia Tech\u2019s School of Interactive Computing, began working on the AI4K12 project as a post-doctoral researcher at Carnegie Mellon under lead PI Dave Touretzky. Joining the faculty at Georgia Tech enabled her to be an in-the-classroom researcher for AI4GA. She started her \u003Ca href=\u0022https:\/\/www.playandlearnlab.com\/\u0022\u003E\u003Cstrong\u003EPlay and Learn Lab\u003C\/strong\u003E\u003C\/a\u003E at Georgia Tech and hired two research assistants devoted to AI4GA.\u003C\/p\u003E\u003Cp\u003EFocusing on students from underprivileged backgrounds in urban, suburban, and rural communities, Uchidiuno said her team has worked with over a dozen Atlanta-based schools to develop an AI curriculum. 
The results have been promising.\u003C\/p\u003E\u003Cp\u003E\u201cOver the past three years, over 1,500 students have learned AI due to the work we\u2019re doing with teachers,\u201d Uchidiuno said. \u201cWe are empowering teachers through AI. They now know they have the expertise to teach this curriculum.\u201d\u003C\/p\u003E\u003Cp\u003EAI4GA is in its final semester of NSF funding, and the researchers have made their curriculum and teacher training publicly available. The principal investigators from Carnegie Mellon and the University of Florida will use the curriculum as a baseline for AI4K12.\u003C\/p\u003E\u003Ch6\u003E\u003Cstrong\u003ESTARTING STUDENTS YOUNG\u003C\/strong\u003E\u003C\/h6\u003E\u003Cp\u003EThough AI is a complex subject, the researchers argue middle schoolers aren\u2019t too young to learn about how it works and the social implications that come with it.\u003C\/p\u003E\u003Cp\u003E\u201cKids are interacting with it whether people like it or not,\u201d Uchidiuno said. \u201cMany of them already have smart devices. Some children have parents with smart cars. More and more students are using ChatGPT.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThey don\u2019t have much understanding of the impact or the implications of using AI, especially data and privacy. If we want to prepare students who will one day build these technologies, we need to start them young and at least give them some critical thinking skills.\u201d\u003C\/p\u003E\u003Cp\u003EWill Gelder, a master\u2019s student in Uchidiuno\u2019s lab, helped analyze data exploring the benefits of co-designing the teaching curriculum with teachers based on months of working with students and learning how they understand AI. 
Rebecca Yu, a research scientist in Uchidiuno\u2019s lab, collected data to determine which parts of the curriculum were effective or ineffective.\u003C\/p\u003E\u003Cp\u003EThrough the\u0026nbsp;\u003Ca href=\u0022https:\/\/ncwit.org\/program\/bridgeup-stem\/\u0022\u003E\u003Cstrong\u003EBridgeUP STEM\u003C\/strong\u003E\u003C\/a\u003E Program at Georgia Tech, Uchidiuno worked with high school students to design video games that demonstrate their knowledge of AI based on the AI4GA curriculum. Students designed the games using various maker materials in 2D and 3D representations, and the games are currently in various stages of development by student developers at the Play and Learn Lab.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThe students love creative projects that let them express their creative thoughts,\u201d Gelder said. \u201cStudents love the opportunity to break out markers or crayons and design their dream robot and whatever functions they can think of.\u201d\u003C\/p\u003E\u003Cp\u003EYu said her research shows that many students demonstrate the ability to understand advanced concepts of AI through these creative projects.\u003C\/p\u003E\u003Cp\u003E\u201cTo teach the concept of algorithms, we have students use crayons to draw different colors to mimic all the possibilities a computer is considering in its decision-making,\u201d Yu said.\u003C\/p\u003E\u003Cp\u003E\u201cMany other curricula like ours don\u2019t go in-depth about the technical concepts, but AI4GA does. We show that with appropriate levels of scaffolding and instructions, they can learn them even without mathematical or programming backgrounds.\u201d\u0026nbsp;\u003C\/p\u003E\u003Ch6\u003E\u003Cstrong\u003EEMPOWERING TEACHERS\u003C\/strong\u003E\u003C\/h6\u003E\u003Cp\u003ECox cast a wide net to recruit middle school teachers with diverse student groups. 
A former student of his answered the call.\u003C\/p\u003E\u003Cp\u003EAmber Jones, a Georgia Tech alumna, taught at a school primarily consisting of Black and Latinx students. She taught a computer science course that covered building websites, using Excel, and basic coding.\u003C\/p\u003E\u003Cp\u003EJones said many students didn\u2019t understand the value and applications of what her course was teaching until she transitioned to the AI4GA curriculum.\u003C\/p\u003E\u003Cp\u003E\u201cAI for Georgia curriculum felt like every other lesson tied right back to the general academics,\u201d Jones said. \u201cI could say, \u2018Remember how you said you weren\u2019t going to ever use y equals mx plus b? Well, every time you use Siri, she\u0027s running y equals mx plus b.\u2019 I saw them drawing the connections and not only drawing them but looking for them.\u201d\u003C\/p\u003E\u003Cp\u003EConnecting AI back to their other classes, favorite social media platforms, and digital devices helped students understand the concepts and fostered interest in the curriculum.\u003C\/p\u003E\u003Cp\u003EJones\u2019s participation in the program also propelled her career forward. She now works as a consultant teaching AI to middle school students.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cI\u2019m kind of niche in my experiences,\u201d Jones said. \u201cSo, when someone says, \u2018Hey, we also want to do something with a young population that involves computer science,\u2019 I\u2019m in a small pool of people that can be looked to for guidance.\u201d\u003C\/p\u003E\u003Cp\u003EAI4GA quickly cultivated a new group of experts within a short timeframe.\u003C\/p\u003E\u003Cp\u003E\u201cThey\u2019ve made their classes their own,\u201d Cox said. \u201cThey add their own tweaks. 
Over the course of the project, the teachers were engaged in cultivating the lessons for their experience and their context based on the identity of their students.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe new curriculum introduced by \u003Ca href=\u0022https:\/\/ai4ga.org\/\u0022\u003E\u003Cstrong\u003EArtificial Intelligence for Georgia (AI4GA)\u003C\/strong\u003E\u003C\/a\u003E has taught middle school students to use and understand AI. It\u2019s also equipped middle school teachers to teach the foundations of AI.\u003C\/p\u003E\u003Cp\u003EAI4GA is a branch of a larger initiative, the \u003Ca href=\u0022https:\/\/ai4k12.org\/\u0022\u003E\u003Cstrong\u003EArtificial Intelligence for K-12 (AI4K12)\u003C\/strong\u003E\u003C\/a\u003E. Funded by the National Science Foundation and led by researchers from Carnegie Mellon University and the University of Florida, AI4K12 is developing national K-12 guidelines for AI education.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Working on a multi-institutional team of investigators, Georgia Tech researchers have helped the state of Georgia become the epicenter for developing K-12 AI educational curriculum nationwide."}],"uid":"36530","created_gmt":"2024-05-22 14:23:14","changed_gmt":"2024-05-28 15:20:14","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-05-21T00:00:00-04:00","iso_date":"2024-05-21T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674056":{"id":"674056","type":"image","title":"AI4GA1.jpg","body":null,"created":"1716387803","gmt_created":"2024-05-22 14:23:23","changed":"1716387803","gmt_changed":"2024-05-22 
14:23:23","alt":"AI4GA","file":{"fid":"257524","name":"AI4GA1.jpg","image_path":"\/sites\/default\/files\/2024\/05\/22\/AI4GA1.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/05\/22\/AI4GA1.jpg","mime":"image\/jpeg","size":159094,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/05\/22\/AI4GA1.jpg?itok=f3cyYibo"}}},"media_ids":["674056"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"42911","name":"Education"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"193070","name":"AI education"},{"id":"191003","name":"Georgia school districts"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71901","name":"Society and Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"668663":{"#nid":"668663","#data":{"type":"news","title":"Students Earn Prestigious Fellowships Underscoring Institute\u2019s Leadership in AI","body":[{"value":"\u003Cp\u003EArtificial intelligence (AI) research by two Georgia Institute of Technology students has caught the attention of one of the world\u0027s leading financial services companies.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGaurav Verma and Yuxi Wu are recipients of 2023\u0026nbsp;\u003Ca 
href=\u0022https:\/\/www.jpmorgan.com\/technology\/artificial-intelligence\/research-awards\u0022\u003EJ.P. Morgan AI Research Ph.D. Fellowship Awards\u003C\/a\u003E. They are among 13 scholars being honored this year by J.P. Morgan Chase \u0026amp; Co. for AI research projects taking on real-world challenges.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0022Our goal is to recognize and enable the next generation of leading AI researchers. We want to create an environment where researchers can inspire change and make a lasting impact in our communities and across our industry,\u0022 said Manuela Veloso, Ph.D., head of AI Research, J.P. Morgan Chase \u0026amp; Co.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.jpmorgan.com\/technology\/artificial-intelligence\/research-awards\/phd-fellowship-2023\/gaurav-verma\u0022\u003EVerma\u003C\/a\u003E\u0026nbsp;is pursuing his Ph.D. in the\u0026nbsp;\u003Ca href=\u0022https:\/\/cse.gatech.edu\/\u0022\u003ESchool of Computational Science and Engineering\u003C\/a\u003E. Working with his advisor, Assistant Professor Srijan Kumar, Verma expects to ensure safety, equity, and well-being by creating multimodal learning and natural language processing approaches to achieve better human-AI interactions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.jpmorgan.com\/technology\/artificial-intelligence\/research-awards\/phd-fellowship-2023\/yuxi-wu\u0022\u003EWu\u003C\/a\u003E\u0026nbsp;is a Ph.D. candidate in the\u0026nbsp;\u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E. Empowering people regarding their privacy concerns is at the core of her research. Wu examines how cross-sector, collective action systems could better support end-user privacy. 
Professor Keith Edwards and Adjunct Assistant Professor Sauvik Das advise Wu.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0022It\u0027s inspiring to see our students and their work being honored with these prestigious fellowships,\u0022 said Irfan Essa, computer science professor and director of the\u0026nbsp;\u003Ca href=\u0022https:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0022Georgia Tech continues to lead in AI education and research. These fellowships for Gaurav and Yuxi are evidence that we\u0027re continuing to move in the right direction.\u0022\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVerma and Wu are part of a spectrum of AI research spanning Georgia Tech. To unite this broad community and ensure it continues moving in the right direction, the Institute recently established\u0026nbsp;\u003Ca href=\u0022https:\/\/news.gatech.edu\/news\/2023\/06\/06\/ai-hub-georgia-tech-unite-campus-artificial-intelligence-rd-and-commercialization\u0022\u003EAI Hub at Georgia Tech\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0022AI has a deep history at Georgia Tech, and we continue to serve as leaders in many areas of AI research and education,\u0022 said Essa, interim co-director of AI Hub at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0022Bringing all areas of AI under one umbrella, AI Hub at Georgia Tech will provide structure and governance as the Institute continues to lead and innovate in the burgeoning discipline of AI.\u0022\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ETwo Georgia Tech Ph.D. students are being recognized for their innovative research with J.P. Morgan AI Research Fellowships.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Two Georgia Tech Ph.D. 
students are being recognized for their innovative research taking on real-world problems."}],"uid":"32045","created_gmt":"2023-08-01 18:59:19","changed_gmt":"2024-05-13 14:48:53","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-08-01T00:00:00-04:00","iso_date":"2023-08-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"671294":{"id":"671294","type":"image","title":"Georgia Tech Ph.D. students Gaurav Verma and Yuxi Wu ","body":null,"created":"1690916372","gmt_created":"2023-08-01 18:59:32","changed":"1690916372","gmt_changed":"2023-08-01 18:59:32","alt":"Georgia Tech Ph.D. students Gaurav Verma and Yuxi Wu ","file":{"fid":"254324","name":"Screen Shot 2023-08-01 at 10.29.55 AM.png","image_path":"\/sites\/default\/files\/2023\/08\/01\/Screen%20Shot%202023-08-01%20at%2010.29.55%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/08\/01\/Screen%20Shot%202023-08-01%20at%2010.29.55%20AM.png","mime":"image\/png","size":534967,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/08\/01\/Screen%20Shot%202023-08-01%20at%2010.29.55%20AM.png?itok=8IXcQ01w"}}},"media_ids":["671294"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"1188","name":"Research Horizons"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"132","name":"Institute Leadership"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"10199","name":"Daily Digest"}],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"}],"news_room_topics":[{"id":"71871","name":"Campus and Community"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBen Snedeker, 
Communications Manager II\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech College of Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"669338":{"#nid":"669338","#data":{"type":"news","title":"New Technology Promises More Efficient and Practical Virtual Reality Systems","body":[{"value":"\u003Cp\u003EGlitchy games and bulky headsets may soon be things of the past thanks to a\u0026nbsp;new\u0026nbsp;eye-tracking system\u0026nbsp;for\u0026nbsp;virtual reality\/augmented reality\u0026nbsp;(VR\/AR).\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEye tracking is an essential component of AR\/VR systems, but current systems have some limitations. These include a large size due to bulkier lens-based cameras and the high communication cost between the camera and the backend system.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech School of Computer Science Associate Professor Yingyan (Celine) Lin, Ph.D. student Haoran You, and postdoctoral student Yang (Katie) Zhao have\u0026nbsp;developed a new\u0026nbsp;eye-tracking\u0026nbsp;system\u0026nbsp;that works around these limitations by combining\u0026nbsp;a\u0026nbsp;recently developed lens-less\u0026nbsp;camera, algorithm, and acceleration processor designs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe current VR headsets are too heavy, gaming can lag, and using the controller is cumbersome. Combined, this prevents users from having a truly immersive experience. We mitigate all these problems,\u201d said You.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis new system,\u0026nbsp;\u003Cem\u003EEyeCoD: An Accelerated Eye Tracking System via FlatCam-based Algorithm \u0026amp; Accelerator Co-Design\u003C\/em\u003E, replaces the traditional camera lens with FlatCam, a lensless camera 5x \u2013 10x thinner and lighter.
Combined with FlatCam, the team\u2019s system enables eye tracking to function at a reduced size, with improved efficiency, and without sacrificing the accuracy of the tracking algorithm. The system could also enhance user privacy by not including a lens-based camera.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother feature of the\u0026nbsp;\u003Cem\u003EEyeCoD\u003C\/em\u003E\u0026nbsp;system\u0026nbsp;is that it\u0026nbsp;only puts the\u0026nbsp;portion\u0026nbsp;of the\u0026nbsp;screen\u0026nbsp;that\u0026nbsp;a\u0026nbsp;user\u2019s eyes focus on\u0026nbsp;in high resolution.\u0026nbsp;It does this by predicting where a user\u2019s eyes may land, then\u0026nbsp;instantaneously\u0026nbsp;rendering\u0026nbsp;these areas\u0026nbsp;in high\u0026nbsp;res. These\u0026nbsp;computational savings,\u0026nbsp;plus\u0026nbsp;a dedicated accelerator,\u0026nbsp;underpin\u0026nbsp;\u003Cem\u003EEyeCoD\u003C\/em\u003E\u2019s\u0026nbsp;ability to boost\u0026nbsp;processing speeds and efficiency.\u0026nbsp;\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe\u0026nbsp;team received the\u0026nbsp;\u003Ca href=\u0022https:\/\/licensing.research.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EOffice of Technology Licensing\u2019s\u003C\/a\u003E\u0026nbsp;Tech Ready Grant\u0026nbsp;for its efforts earlier this year. 
Tech Ready Grants\u0026nbsp;offer\u0026nbsp;$25,000 to help faculty\u0026nbsp;transition\u0026nbsp;projects from the lab to\u0026nbsp;the\u0026nbsp;marketplace.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team hopes to use the funds to integrate the current demos into a compact eye-tracking system for use in commercial VR\/AR headsets.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlong with winning the Tech Ready Grant, the\u0026nbsp;team presented\u0026nbsp;\u003Cem\u003EEyeCoD\u0026nbsp;\u003C\/em\u003Eat\u0026nbsp;the\u0026nbsp;International Symposium\u003Cstrong\u003E\u0026nbsp;\u003C\/strong\u003Eon Computer Architecture\u202f(ISCA) 2022.\u0026nbsp;IEEE\u0026nbsp;Micro\u0026nbsp;included the\u0026nbsp;work\u0026nbsp;in its\u0026nbsp;\u003Cem\u003ETop Picks from the Computer Architecture Conferences\u003C\/em\u003E\u0026nbsp;for 2023.\u0026nbsp;The annual publication\u0026nbsp;highlights \u201csignificant research papers in computer architecture based on novelty and potential for long-term impact.\u201d\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EEyeCoD\u0026nbsp;\u003C\/em\u003Eis a collaborative work.\u0026nbsp;Collaborators include\u0026nbsp;Rice University Professor\u0026nbsp;Ashok\u0026nbsp;Veeraraghavan, whose team provided\u0026nbsp;the technical support and design of the\u0026nbsp;FlatCam\u0026nbsp;camera in\u0026nbsp;\u003Cem\u003EEyeCoD\u003C\/em\u003E;\u0026nbsp;and\u0026nbsp;Ziyun\u0026nbsp;Li, of Meta, who provided\u0026nbsp;technical inputs to ensure that the\u0026nbsp;EyeCoD\u0026nbsp;system aligns with industry AR\/VR specifications.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA new eye-tracking system called EyeCoD, developed by researchers at Georgia Tech, uses a lensless camera to reduce the size and weight of VR\/AR headsets, improve efficiency, and enhance user privacy while selectively rendering high-resolution 
screen areas based on where the user is focusing at any given moment.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A new eye-tracking system developed at Georgia Tech uses a lensless camera to reduce the size and weight of VR\/AR headsets, improves efficiency, and enhances user privacy."}],"uid":"32045","created_gmt":"2023-09-01 12:51:20","changed_gmt":"2024-05-13 14:47:55","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-08-30T00:00:00-04:00","iso_date":"2023-08-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"671567":{"id":"671567","type":"image","title":"A closeup of glass panels on the College of Computing\u0027s Binary Bridge","body":null,"created":"1693572695","gmt_created":"2023-09-01 12:51:35","changed":"1693572695","gmt_changed":"2023-09-01 12:51:35","alt":"A closeup of glass panels on the College of Computing\u0027s Binary Bridge","file":{"fid":"254652","name":"news-default-image - New.png","image_path":"\/sites\/default\/files\/2023\/09\/01\/news-default-image%20-%20New.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/09\/01\/news-default-image%20-%20New.png","mime":"image\/png","size":549085,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/09\/01\/news-default-image%20-%20New.png?itok=xvPJxjiG"}}},"media_ids":["671567"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"135","name":"Research"}],"keywords":[{"id":"10199","name":"Daily Digest"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and 
Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EMorgan Usry, Communications Officer I\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Computer Science\u003C\/p\u003E\r\n\r\n\u003Cp\u003Emorgan.usry@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"670296":{"#nid":"670296","#data":{"type":"news","title":"Major Grant Funds New AI Ethics Network That Will Emphasize Atlanta Voices","body":[{"value":"\u003Cp\u003EAtlanta communities most vulnerable to bias and inequity in artificial intelligence (AI) are the focus of a new Atlanta-based ethics initiative being funded by a $1.3 million Mellon Foundation grant.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe\u0026nbsp;\u003Ca href=\u0022https:\/\/aiai.network\/\u0022\u003EAtlanta Interdisciplinary Artificial Intelligence (AIAI) Network\u003C\/a\u003E\u0026nbsp;brings together computing, humanities, and social justice researchers from Georgia Tech, Clark Atlanta University, Emory University, and community partner DataedX.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECarl DiSalvo, \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003EGeorgia Tech School of Interactive Computing\u003C\/a\u003E professor, is an AIAI co-principal investigator (co-PI). Andre Brock, an associate professor in the School of Literature, Media, and Communication serves on the network\u2019s steering committee.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDiSalvo said the idea for the AIAI Network had been in the works for years. However, the researchers now have the needed funding thanks to the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.mellon.org\/\u0022\u003EMellon Foundation\u003C\/a\u003E. 
The grant allows the network to hire its first graduate students for the 2023-2024 academic year.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe Mellon grant provides resources that we didn\u2019t have before,\u201d DiSalvo said. \u201cThere are students doing work on topics related to AI, computing, humanities, and social justice. They were difficult to fund, but now there\u2019s funding. This has a material impact on supporting graduate students and their research, and that impact is immediate.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Mellon award also provides seed money for the network to distribute grants to researchers in the Atlanta community. Brandeis Marshall, CEO of DataedX Group and co-PI, said the network wants to put Atlanta voices at the forefront of conversations about AI bias that aren\u2019t limited to the academic community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe want people within Atlanta to connect with it, understand it, and be a part of it,\u201d Marshall said. \u201cWe want small businesses and nonprofits to feel like they have a place within these conversations about tech. It\u2019s for the everyday person, not just the academics.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELauren Klein, PI and Emory associate professor of English, said the AIAI Network offers a humanistic lens on controversial AI issues. She said it was important that each PI or steering committee member be open to research contributions from the humanities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe proposed technical solutions are not coming from people who have expertise in these issues of systemic racism, sexism, and structural oppression,\u201d Klein said. \u201cThe people who have expertise with these issues and how they surface in AI are humanities scholars. 
We want to bring humanities researchers to the table with technical researchers as equal partners.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKlein said the Mellon Foundation recognized the AIAI Network\u2019s goals aligned with its recent commitment to funding projects centered on social justice.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt aligns with the work that the Mellon Foundation is trying to do,\u201d Klein said. \u201cThey\u2019ve made social justice the top-level concern of all projects they fund. They don\u2019t fund a lot of initiatives, so choosing to invest in us is meaningful.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the biggest challenges the network will face is steering the conversation away from the prominent AI \u201cdoomer\u201d narrative and toward existing AI bias. While the former is theorized, the latter continues to impact marginalized and minority communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt\u2019s not just what we are going to do today, but also how what we\u2019re doing today will impact what we are trying to influence for tomorrow,\u201d Marshall said. \u201cHow can we dampen the AI hype and AI doom narratives and promote AI reality?\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe AIAI Network will take a multifaceted approach to promote a more realistic understanding of AI.
Some of the tactics the group will use include:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EHumanities-focused research projects\u003C\/li\u003E\r\n\t\u003Cli\u003EPublic design and education workshops\u003C\/li\u003E\r\n\t\u003Cli\u003EGuest speaker series\u003C\/li\u003E\r\n\t\u003Cli\u003ECourses taught by principal investigators and steering committee members\u003C\/li\u003E\r\n\t\u003Cli\u003ESeed grants for Atlanta researchers doing like-minded work\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u201cWe\u2019re going to educate ourselves, those in the community, and those aspiring to be in this field on how we can be more solution based and solution oriented,\u201d Marshall said. \u201cThere are courses, research and research projects built on the framework of talking about AI bias in a productive way and not one focused on extreme stances around AI hype or AI doom.\u201d\u003C\/p\u003E\r\n\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cp\u003EWith support from philanthropic organizations like the Mellon Foundation, alumni, parents, friends, and corporations, Georgia Tech is securing the resources that will help achieve the most ambitious goals in the Institute\u0027s history as part of\u0026nbsp;\u003Ca href=\u0022https:\/\/transformingtomorrow.gatech.edu\/\u0022\u003E\u003Cem\u003ETransforming Tomorrow:\u0026nbsp;\u003C\/em\u003EThe Campaign for Georgia Tech\u003C\/a\u003E.\u003C\/p\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech is partnering with area universities to promote a more realistic understanding of AI for local communities. Funded by a $1.3 million Mellon Foundation grant, the Atlanta Interdisciplinary Artificial Intelligence (AIAI) Network brings together researchers from multiple disciplines\u0026nbsp;to address bias and inequity in 
AI.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech is partnering with area universities to promote a more realistic understanding of AI for local communities through research, education, and outreach."}],"uid":"32045","created_gmt":"2023-10-09 16:47:06","changed_gmt":"2024-05-13 14:44:44","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-10-09T00:00:00-04:00","iso_date":"2023-10-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"671982":{"id":"671982","type":"image","title":"CarlDiSalvo.jpeg","body":null,"created":"1696870157","gmt_created":"2023-10-09 16:49:17","changed":"1696870157","gmt_changed":"2023-10-09 16:49:17","alt":"Georgia Tech Professor of Interactive Computing Carl DiSalvo at his desk ","file":{"fid":"255157","name":"CarlDiSalvo.jpeg","image_path":"\/sites\/default\/files\/2023\/10\/09\/CarlDiSalvo.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/10\/09\/CarlDiSalvo.jpeg","mime":"image\/jpeg","size":33101,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/10\/09\/CarlDiSalvo.jpeg?itok=60kntpKe"}}},"media_ids":["671982"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"193234","name":"Campaign Stories"}],"keywords":[{"id":"10199","name":"Daily Digest"},{"id":"171760","name":"Mellon Foundation"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71901","name":"Society and Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, 
Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"674257":{"#nid":"674257","#data":{"type":"news","title":"New Strategic Design Approach Focuses on Turning AI Mistakes into User Benefits","body":[{"value":"\u003Cp\u003EMore and more often, automated lending systems powered by artificial intelligence (AI) reject qualified loan applicants without explanation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEven worse, they leave rejected applicants with no recourse.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPeople can have similar experiences when applying for jobs or petitioning their health insurance providers. While AI tools determine the fate of people in difficult situations daily, Upol Ehsan says more thought should be given to challenging these decisions or working around them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan, a Georgia Tech explainable AI (XAI) researcher, says many rejection cases are not the applicant\u2019s fault. Rather, it\u2019s more likely a \u201cseam\u201d in the design process \u2014 a mismatch between what designers thought the AI could do and what happens in reality.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan said \u201cseamless design\u201d is the standard practice of AI designers. While the goal is to create a process by which users get what they need without interruption or barriers, seamless design has a way of doing just the opposite.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENo amount of thought or design input will keep AI tools from making mistakes. 
When mistakes happen, those impacted by them want to know why they happened.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBecause seamless design often includes black-boxing \u2014 the act of concealing the AI\u2019s reasoning \u2014 answers are never provided.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut what if there were a way to challenge an AI\u2019s decisions and turn its mistakes into benefits for end users? Ehsan believes that can be done through \u201cseamful design.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn his latest paper,\u0026nbsp;\u003Cem\u003ESeamful Explainable AI: Operationalizing Seamful Design in XAI,\u0026nbsp;\u003C\/em\u003EEhsan proposes a strategic way of anticipating AI harms, learning their reasoning, and leveraging mistakes instead of concealing them.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch6\u003EGIVING USERS MORE OPTIONS\u003C\/h6\u003E\r\n\r\n\u003Cp\u003EIn his research, Ehsan worked with loan officers who used automated lending support systems. The seams, or flaws, he discovered in these tools\u2019 processes impacted applicants and lenders.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe expectation is that the lending system works for everyone,\u201d Ehsan said. \u201cThe reality is that it doesn\u2019t. You\u2019ve found the seam once you\u2019ve figured out the difference between expectation and reality. 
Then we ask, \u2018How can we show this to end users so they can leverage it?\u2019\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo give users options when AI negatively impacts them, Ehsan suggests three things for designers to consider:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EActionability: Does the information about the flaw help the user take informed actions on the AI\u2019s recommendation?\u003C\/li\u003E\r\n\t\u003Cli\u003EContestability: Does the information provide the resources necessary to justify saying no to the AI?\u003C\/li\u003E\r\n\t\u003Cli\u003EAppropriation: Does identifying these seams help the user to adapt and appropriate the AI\u2019s output in a way that is different from the provided design but helps the user make the right decision?\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EEhsan uses the example of someone who was rejected for a loan despite having a good credit history. The rejection may have been due to a seam, such as a flawed discriminating algorithm, in the AI that screens the applications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA post-deployment process is needed in cases like this to mitigate damage and empower affected end users. Loan applicants, for instance, should be allowed to contest the AI\u2019s decision based on known issues with an algorithm.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch6\u003EAGAINST THE GRAIN\u003C\/h6\u003E\r\n\r\n\u003Cp\u003EEhsan said his idea for seamful design is outside of the mainstream vernacular. However, his challenge to current accepted principles is gaining traction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe is now working with cybersecurity, healthcare, and sales companies that are adopting his process.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese companies may pioneer a new way of thinking in AI design. 
Ehsan believes this new mindset allows designers to work proactively instead of being stuck in a reactive state of damage control.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cYou want to stay a little ahead of the curve so you\u2019re not always caught off guard when things happen,\u201d Ehsan said. \u201cThe more proactive you can be and the more passes you can take at your design process, the safer and more responsible your systems will be.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan collaborated with researchers from Georgia Tech, the University of Maryland, and Microsoft. They will present their paper later this year at the 2024 Association for Computing Machinery\u2019s Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) in Costa Rica.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cSeamful design embraces the imperfect reality of our world and makes the most out of it,\u201d he said. \u201cIf it becomes mainstream, it can help us address the hype cycle AI suffers from now. We don\u2019t need to overhype AI\u2019s capacity or impose unachievable goals. That\u2019d be a gamechanger in calibrating people\u2019s trust in the system.\u201d\u0026nbsp;\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EMore and more often, automated lending systems powered by artificial intelligence (AI) reject qualified loan applicants without explanation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEven worse, they leave rejected applicants with no recourse.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPeople can have similar experiences when applying for jobs or petitioning their health insurance providers. 
While AI tools determine the fate of people in difficult situations daily, Upol Ehsan says more thought should be given to challenging these decisions or working around them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan, a Georgia Tech explainable AI (XAI) researcher, says many rejection cases are not the applicant\u2019s fault. Rather, it\u2019s more likely a \u201cseam\u201d in the design process \u2014 a mismatch between what designers thought the AI could do and what happens in reality.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Method Provides Users Options When AI Rejects or Discriminates Against Them."}],"uid":"36530","created_gmt":"2024-04-18 13:27:06","changed_gmt":"2024-05-13 14:15:00","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-05-07T00:00:00-04:00","iso_date":"2024-05-07T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"673748":{"id":"673748","type":"image","title":"AdobeStock_453025210 (1).jpeg","body":null,"created":"1713446832","gmt_created":"2024-04-18 13:27:12","changed":"1713446832","gmt_changed":"2024-04-18 13:27:12","alt":"Two people discuss a loan application","file":{"fid":"257181","name":"AdobeStock_453025210 (1).jpeg","image_path":"\/sites\/default\/files\/2024\/04\/18\/AdobeStock_453025210%20%281%29.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/04\/18\/AdobeStock_453025210%20%281%29.jpeg","mime":"image\/jpeg","size":161965,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/04\/18\/AdobeStock_453025210%20%281%29.jpeg?itok=v8RVvlkP"}}},"media_ids":["673748"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and 
Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"10199","name":"Daily Digest"},{"id":"181991","name":"Georgia Tech News Center"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer I\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"670497":{"#nid":"670497","#data":{"type":"news","title":"Research Reveals Small Business Can Struggle to Leverage Tech Benefiting Workers","body":[{"value":"\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EA new Georgia Tech study reveals that excluding front-line workers from the design process can increase employee turnover rates, leading to higher costs and reduced efficiency for businesses implementing new automated technologies.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EAlyssa Sheehan has seen firsthand how companies can struggle to leverage new technologies meant to improve systems and benefit workers. 
She collaborated with dozens of companies as the director of the Georgia Center of Innovation\u0027s aerospace team from 2022 to 2023.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThat experience inspired the Ph.D. candidate and 2022 Foley Scholar to explore the effects on workers when technology is implemented to automate traditional paper-based processes.\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003E\u003Cem\u003EMaking Meaning from the Digitalization of Blue-Collar Work\u003C\/em\u003E\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003Ewon a best paper award at the 2023 Conference on\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003E\u003Ca href=\u0022https:\/\/cscw.acm.org\/2023\/\u0022\u003EComputer Supported Cooperative Work and Social Computing\u003C\/a\u003E\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003E(CSCW) this week in Minneapolis.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u201cI\u2019m trying to cast meaningful work into a new light with automation and technology design,\u201d Sheehan said. \u201cThe intention is so focused on delivering efficiency and optimizing the process. 
Companies and technologists forget about user input from workers using these systems.\u201d\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EMicrosoft and other major tech companies have\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003E\u003Ca href=\u0022https:\/\/www.microsoft.com\/en-us\/research\/uploads\/prod\/2022\/04\/Microsoft-New-Future-of-Work-Report-2022.pdf\u0022\u003Eannounced commitments\u003C\/a\u003E\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003Eto use technology to foster a culture of meaningful work within the workplace. However, Sheehan said that small businesses often lack the resources and knowledge required to incorporate such beneficial technology. Others design the technology with only productivity in mind and without considering if it makes their employees\u2019 jobs more meaningful.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u201cThere\u2019s a lot of research that shows there\u2019s a technology gap, particularly for small businesses,\u201d Sheehan said. \u201cI\u2019m not always advocating for technology as a solution, but I look at what exists critically and ask, \u2018Is this technology doing what we want it to? 
If the goal is to support workers, how is it doing that?\u2019\u201d\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003ESheehan worked with a small Georgia-based manufacturing company to conduct an 18-month study. She designed and deployed off-the-shelf tools to automate the company\u2019s shipping and receiving processes that required time and paperwork.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EWith the support of researchers from Georgia Tech\u2019s\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003E\u003Ca href=\u0022https:\/\/research.gatech.edu\/ipat\u0022\u003EInstitute of People and Technology\u003C\/a\u003E\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003E(IPAT), she customized a wearable and mobile app. The workers used the app to check off critical tasks within the shipping process one by one. 
\u0026nbsp;\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThe results were mixed.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003ESheehan said many ground-floor shipping experts were frustrated by the frequency of having to repack orders because of customer complaints about improper shipping. The workers insisted they\u2019d done the job correctly. The mobile app allowed them to take pictures of each order after packaging for quality assurance.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThe workers appreciated the feature, but they also provided negative feedback. In some cases, the app required workers to perform tasks contrary to methods that suited them and made them feel productive. It also took away a sense of autonomy and pride in expertise from workers because it instructed them what to do step by step. 
Rather than making the job easier, the app left workers feeling that their superiors didn\u2019t trust them to do the job correctly.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u201cIt helped in certain areas like not having to take notes on paper anymore or use outdated equipment. However, they struggled to see how it would preserve meaning in their job in terms of working with their hands and doing various tasks at any given time.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u201cWe create universal systems and solutions for mobile apps that are often deployed without understanding the context of organizational practices. That\u2019s a problem. Now, the workers have to adapt their processes to make this tool work in practice.
They\u2019re being asked to give up how they do things,\u201d Sheehan said.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EShe added that automated technology systems need to go beyond convenience and productivity, and that these systems may cause more harm than good if they diminish workers\u2019 sense of meaning and value.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u201cBy leaving the worker perspective out of the design process, we limit the potential of these technologies,\u201d she said. \u201cProductivity still relies on people being engaged in the process. If we\u2019re going to create true productivity, we need to make sure those jobs are valuable and that people feel what they do matters. That leads to less turnover and higher job satisfaction rates.\u201d\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EResearch highlighting crucial role of front-line workers in designing automated technologies earns best paper award for School of Interactive Computing Ph.D.
student at premier social computing conference.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Research highlighting crucial role of front-line workers in designing automated technologies earns best paper award at premier social computing conference."}],"uid":"32045","created_gmt":"2023-10-18 16:55:44","changed_gmt":"2023-10-26 20:02:20","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-10-18T00:00:00-04:00","iso_date":"2023-10-18T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"672085":{"id":"672085","type":"image","title":"Input from warehouse workers and other front-line employees is essential to designing effective automated systems","body":null,"created":"1697648156","gmt_created":"2023-10-18 16:55:56","changed":"1697648156","gmt_changed":"2023-10-18 16:55:56","alt":"Input from warehouse workers and other front-line employees is essential to designing effective automated systems.","file":{"fid":"255269","name":"industry_manfacturing story.jpg","image_path":"\/sites\/default\/files\/2023\/10\/18\/industry_manfacturing%20story.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/10\/18\/industry_manfacturing%20story.jpg","mime":"image\/jpeg","size":91880,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/10\/18\/industry_manfacturing%20story.jpg?itok=E4Y7F0q8"}}},"media_ids":["672085"],"groups":[{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"8862","name":"Student Research"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"10199","name":"Daily Digest"},{"id":"7806","name":"computing for good"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"106361","name":"Business and Economic 
Development"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"670207":{"#nid":"670207","#data":{"type":"news","title":"New Robot Learns Object Arrangement Preferences Without User Input","body":[{"value":"\u003Cp\u003EKartik Ramachandruni knew he would need to find a unique approach to a populated research field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith a handful of students and researchers at Georgia Tech looking to make breakthroughs in home robotics and object rearrangement, Ramachandruni searched for what others had overlooked.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cTo an extent it was challenging, but it was also an opportunity to look at what people are already doing and to get more familiar with the literature,\u201d said Ramachandruni, a Ph.D. student in Robotics. \u201c(Associate) Professor (Sonia) Chernova helped me in deciding how to zone in on the problem and choose a unique perspective.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERamachandruni started exploring how a home robot might organize objects according to user preferences in a pantry or refrigerator without prior instructions required by existing frameworks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHis persistence paid off. 
The 2023\u003Ca href=\u0022https:\/\/ieee-iros.org\u0022\u003E IEEE International Conference on Intelligent Robots and Systems (IROS)\u003C\/a\u003E accepted Ramachandruni\u2019s paper on a novel framework for a context-aware object rearrangement robot.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cOur goal is to build assistive robots that can perform these organizational tasks,\u201d Ramachandruni said. \u201cWe want these assistive robots to model the user preferences for a better user experience. We don\u2019t want the robot to come into someone\u2019s home and be unaware of these preferences, rearrange their home in a different way, and cause the users to be distressed. At the same time, we don\u2019t want to burden the user with explaining to the robot exactly how they want the robot to organize their home.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERamachandruni\u2019s object rearrangement framework, Context-Aware Semantic Object Rearrangement (ConSOR), uses contextual clues from pre-arranged objects in its environment to mimic how a person might arrange objects in their kitchen.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIf our ConSOR robot rearranged your fridge, it would first observe where objects are already placed to understand how you prefer to organize your fridge,\u201d he said. \u201cThe robot then places new objects in a way that does not disrupt your organizational style.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe only prior knowledge the robot needs is how to recognize certain objects such as a milk carton or a box of cereal. Ramachandruni said he pretrained the model on language datasets that map out objects hierarchically.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe semantic knowledge database we use for training is a hierarchy of words similar to what you would see on a website such as Walmart, where objects are organized by shopping category,\u201d he said.
\u201cWe incorporate this commonsense knowledge about object categories to improve organizational performance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cEmbedding commonsense knowledge also means our robot can rearrange objects it hasn\u2019t been trained on. Maybe it\u2019s never seen a soft drink, but it generally knows what beverages are because it\u2019s trained on another object that belongs to the beverage category.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERamachandruni tested ConSOR against two model training baselines. One used a score-based approach that learns how specific users group objects in an environment. It then uses the scores to organize objects for users. The other baseline used the GPT-3 large language model prompted with minimal demonstrations and without fine-tuning to determine the placement of new objects. ConSOR outperformed both baselines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cGPT-3 was a baseline we were comparing against to see whether this huge body of common-sense knowledge can be used directly without any sort of framework,\u201d Ramachandruni said. \u201cThe appeal of LLMs is you don\u2019t need too much data; you just need a small data set to prompt it and give it an idea. We found the LLM did not have the correct inductive bias to correctly reason between different objects to perform this task.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERamachandruni said he anticipates there will be scenarios where user input is required. His future work on the project will include minimizing the effort required by the user in those scenarios to tell the robot their preferences.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThere are probably scenarios where it\u2019s just easier to ask the user,\u201d he said. \u201cLet\u2019s say the robot has multiple ideas of how to organize the home, and it\u2019s having trouble deciding between them. Sometimes it\u2019s just easier to ask the user to choose between the options.
That would be a human-robot interaction addition to this framework.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIROS is taking place this week in Detroit.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ENew research from Georgia Tech\u0027s School of Interactive Computing is empowering robots to use contextual clues to mimic how an individual might organize their pantry or refrigerator. The novel\u0026nbsp;framework, accepted to this week\u0027s\u0026nbsp;2023\u003Ca href=\u0022https:\/\/ieee-iros.org\/\u0022\u003E\u0026nbsp;IEEE International Conference on Intelligent Robots and Systems (IROS)\u003C\/a\u003E, allows home robots to organize objects in a user\u0027s environment based on contextual clues and user preferences, minimizing the need for explicit instructions.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Award-winning research from Georgia Tech is empowering robots to use contextual clues to mimic how an individual organizes their pantry."}],"uid":"32045","created_gmt":"2023-10-05 17:36:01","changed_gmt":"2023-10-06 13:27:25","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-10-05T00:00:00-04:00","iso_date":"2023-10-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"671966":{"id":"671966","type":"image","title":"Kartik Ramachandruni-roboticsPhD-linkedin-crop-oct23.jpg","body":null,"created":"1696598297","gmt_created":"2023-10-06 13:18:17","changed":"1696598297","gmt_changed":"2023-10-06 13:18:17","alt":"Georgia Tech robotics Ph.D.
student Kartik Ramachandruni poses with a couple of his robot buddies.","file":{"fid":"255134","name":"Kartik Ramachandruni-roboticsPhD-linkedin-crop-oct23.jpg","image_path":"\/sites\/default\/files\/2023\/10\/06\/Kartik%20Ramachandruni-roboticsPhD-linkedin-crop-oct23.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/10\/06\/Kartik%20Ramachandruni-roboticsPhD-linkedin-crop-oct23.jpg","mime":"image\/jpeg","size":70427,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/10\/06\/Kartik%20Ramachandruni-roboticsPhD-linkedin-crop-oct23.jpg?itok=EOm6O7UM"}},"671967":{"id":"671967","type":"image","title":"GT Computing Associate Professor Sonia Chernova_teaching-fall2023.jpg","body":null,"created":"1696598419","gmt_created":"2023-10-06 13:20:19","changed":"1696598419","gmt_changed":"2023-10-06 13:20:19","alt":"Georgia Tech School of Interactive Computing Associate Professor Sonia Chernova presents during a recent robotics seminar.","file":{"fid":"255135","name":"GT Computing Associate Professor Sonia Chernova_teaching-fall2023.jpg","image_path":"\/sites\/default\/files\/2023\/10\/06\/GT%20Computing%20Associate%20Professor%20Sonia%20Chernova_teaching-fall2023.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/10\/06\/GT%20Computing%20Associate%20Professor%20Sonia%20Chernova_teaching-fall2023.jpg","mime":"image\/jpeg","size":173036,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/10\/06\/GT%20Computing%20Associate%20Professor%20Sonia%20Chernova_teaching-fall2023.jpg?itok=x2610sau"}}},"media_ids":["671966","671967"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"10199","name":"Daily
Digest"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer I\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"669342":{"#nid":"669342","#data":{"type":"news","title":"Georgia Tech Best Place to Be With Robotics Boom on Horizon, Says New Faculty Member","body":[{"value":"\u003Cp\u003EAnimesh Garg sees a boom on the horizon for advancements in robot manipulation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u2019s an exciting time to be in the field of robotics, he said, and that\u2019s why the new assistant professor in the School of Interactive Computing wants to be at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe robotics faculty at Georgia Tech is particularly strong,\u201d he said. \u201cThe reorganization and refocus of the\u0026nbsp;\u003Ca href=\u0022https:\/\/research.gatech.edu\/robotics\u0022\u003EInstitute for Robotics and Intelligent Machines\u003C\/a\u003E\u0026nbsp;and the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/degree-programs\/phd-machine-learning\u0022\u003EMachine Learning Ph.D.\u003C\/a\u003E\u0026nbsp;program is something that is also very special among our peer group universities.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGarg described his research as unlocking \u201ccommon sense\u201d for robotics. 
In essence, he thinks carefully about mistakes that robots might make in the real world and how to preempt them before deployment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe said building common sense into robots could make the difference over the next few years in whether robotics researchers reach a new pinnacle of achievement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cCommon sense reasoning in real-world robotics is the challenge of the next decade,\u201d he said.\u003Cbr \/\u003E\r\n\u201cRobots have a lot of requirements for common sense. Simple things like don\u2019t pack glassware under heavy stuff. We should think about these problems holistically and not try to build robots in isolation.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGarg earned his master\u2019s degree in industrial engineering from Georgia Tech in 2011. He also has a master\u2019s in computer science and a doctorate in machine learning and robotics from the University of California, Berkeley.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGarg spent the past four years teaching robotics, reinforcement learning, robot manipulation, and computer vision at the University of Toronto. He is also a senior research scientist at Nvidia, working on machine learning for robot manipulation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat interested you about coming to Georgia Tech?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe primary draw was the research culture and the school\u2019s strength, particularly in robotics and machine learning. What is also attractive about Georgia Tech is that it\u2019s not just computer science and engineering that are strong.
There is no shortage of collaborators outside of the computer science neighborhood if I want to pursue projects in climate sciences, material sciences, or statistics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat will your research consist of?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy research will consist of three big-picture topics. First, foundation models for representation and reasoning. How should we talk about common sense and problem solving? The second is generative AI in the context of robots so we can know more about the world through the predictive sense, which allows for better planning. The third pillar would be reinforcement learning \u2014 the robot learning to do something within its own abilities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat inspired you to pursue this field of research?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGetting robots to do stuff on command is one of the longstanding science fiction challenges, right? We have used science fiction examples for many decades, but the progress has been slow. The confluence of machine learning and the ability to reason gives us the tools to solve this problem.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the next decade, we will see more progress in robot manipulation than in the last 40 to 50 years combined. A fundamental set of problems has been solved in the last five to seven years. In the next 10 to 12 years, we will see a boom in the percolation of this technology in everyday life.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat do you hope to accomplish here at Georgia Tech?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI want to train students to be leaders in this space for the next decade, whether they choose to be in academia or start new companies. The other thing I want to establish in the research ecosystem is a consortium of robotics researchers within Georgia Tech to work closely with industry.
This will enable tech transfer so what we create at Georgia Tech can be brought to fruition in industry and make a broader impact.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat are you looking forward to about teaching your students and how do you plan to work with them?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI\u2019ll be focusing on a Ph.D. course in reinforcement learning. There is no deep reinforcement learning course offered regularly to in-person students, so that is a need I hope to fill.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe other thing I\u2019ve been developing is a course on robot learning. The idea is that this will be a hands-on course that enables people with little background in robotics to get up to the level of a professional robotics engineer within six months.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EIn this brief profile and Q\u0026amp;A, robotics expert and new School of Interactive Computing faculty member Animesh Garg discusses his work developing \u0022common sense\u0022 for robots and how it will contribute to positioning Georgia Tech as a leading institution for research and education in robotics.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A robotics expert working to create \u0022common sense\u0022 for robots has joined the faculty of the School of Interactive Computing."}],"uid":"32045","created_gmt":"2023-09-01 13:42:43","changed_gmt":"2023-09-01 13:51:01","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-09-01T00:00:00-04:00","iso_date":"2023-09-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"671571":{"id":"671571","type":"image","title":"Animesh Garg is a new assistant professor in the School of Interactive Computing
who works in robotics and machine learning.","body":"\u003Cp\u003EAnimesh Garg is a new assistant professor at the School of Interactive Computing who works in robotics and machine learning. (Photo by Terence Rushin\/College of Computing.)\u003C\/p\u003E\r\n","created":"1693575915","gmt_created":"2023-09-01 13:45:15","changed":"1693575915","gmt_changed":"2023-09-01 13:45:15","alt":"Animesh Garg is a new assistant professor in the School of Interactive Computing who works in robotics and machine learning.","file":{"fid":"254656","name":"Animesh Garg_86A8799.jpeg","image_path":"\/sites\/default\/files\/2023\/09\/01\/Animesh%20Garg_86A8799_0.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/09\/01\/Animesh%20Garg_86A8799_0.jpeg","mime":"image\/jpeg","size":32197,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/09\/01\/Animesh%20Garg_86A8799_0.jpeg?itok=X3SrVOYY"}},"671572":{"id":"671572","type":"image","title":"New assistant professor of interactive computing Animesh Garg outside of the Klaus Advanced Computing Building.","body":"\u003Cp\u003ENew assistant professor of interactive computing Animesh Garg outside of the Klaus Advanced Computing Building. 
(Photo by Terence Rushin\/College of Computing.)\u003C\/p\u003E\r\n","created":"1693576046","gmt_created":"2023-09-01 13:47:26","changed":"1693576046","gmt_changed":"2023-09-01 13:47:26","alt":"New assistant professor of interactive computing Animesh Garg outside of the Klaus Advanced Computing Building.","file":{"fid":"254657","name":"Animesh Garg_86A8778.jpeg","image_path":"\/sites\/default\/files\/2023\/09\/01\/Animesh%20Garg_86A8778.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/09\/01\/Animesh%20Garg_86A8778.jpeg","mime":"image\/jpeg","size":98151,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/09\/01\/Animesh%20Garg_86A8778.jpeg?itok=tArXtZLa"}}},"media_ids":["671571","671572"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"152","name":"Robotics"},{"id":"129","name":"Institute and Campus"}],"keywords":[{"id":"10199","name":"Daily Digest"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer I\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"669031":{"#nid":"669031","#data":{"type":"news","title":"Novel Policy Allows Robots to Perform Interactive Tasks in Sequential Order","body":[{"value":"\u003Cp\u003EGeorgia Tech Ph.D. 
student Niranjan Kumar created the Cascaded Compositional Residual Learning (CCRL) framework, which enables a quadrupedal robot to perform increasingly complex tasks without relearning basic motions, much as humans build new skills on top of old ones. In one demonstration, the robot opened a heavy door using energy transfer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECCRL functions as a \u201clibrary\u201d that allows the robot to remember everything it has learned while performing simpler tasks. Each newly obtained skill is added to the library and leveraged for more complex skills. A turning motion, for instance, can be learned on top of walking while serving as the basis for navigation skills.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKumar said CCRL has broken new ground in interactive navigation research. Interactive navigation is one of several navigation solutions that allow robots to navigate in the real world. These solutions include point navigation, which trains a robot to reach a point on a map, and object navigation, which teaches it to reach a selected object.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInteractive navigation requires a robot to reach a goal location while interacting with obstacles on the way, and it has proven to be the most difficult for robots to learn.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe key to getting a robot to go from walking to pushing an object, Kumar said, lies in the joints and in the robot discovering the different types of motions it can make with them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESo far, Kumar\u2019s policy covers 10 skills that a robot can learn and deploy. The number of skills it can learn on one policy depends on the hardware the programmer is using.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt just takes longer to train as you keep adding more skills because now the policy also has to figure out how to incorporate all these skills in different situations,\u201d he said.
\u201cBut theoretically, you can keep adding more skills indefinitely as long as you have a powerful enough computer to run the policies.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKumar said he sees CCRL being useful for home assistant robots, which must be agile and limber to navigate a cluttered household. He also said it could serve as a guide dog for the visually impaired.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIf you have obstacles in front of someone who is visually impaired, the robot can just clear up the obstacles as the person is walking, open the door for them, and things like that,\u201d he said.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech Ph.D. student Niranjan Kumar created the Cascaded Compositional Residual Learning (CCRL) framework, which enables a quadrupedal robot to perform increasingly complex tasks without relearning basic motions, much as humans build new skills on top of old ones. In one demonstration, the robot opened a heavy door using energy transfer.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A Georgia Tech Ph.D.
student has created a new framework that enables a four-legged robot to perform increasingly complex tasks without relearning motions."}],"uid":"32045","created_gmt":"2023-08-18 12:41:38","changed_gmt":"2023-08-31 15:26:21","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-08-18T00:00:00-04:00","iso_date":"2023-08-18T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"671422":{"id":"671422","type":"image","title":"A four-legged robot at Georgia Tech opens door using sequential steps, but for the first time without having to relearn motions.","body":null,"created":"1692362511","gmt_created":"2023-08-18 12:41:51","changed":"1692362511","gmt_changed":"2023-08-18 12:41:51","alt":"A four-legged robot at Georgia Tech opens door using sequential steps, but for the first time without having to relearn motions.","file":{"fid":"254478","name":"March_16 interactive reach_crop.png","image_path":"\/sites\/default\/files\/2023\/08\/18\/March_16%20interactive%20reach_crop.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/08\/18\/March_16%20interactive%20reach_crop.png","mime":"image\/png","size":561198,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/08\/18\/March_16%20interactive%20reach_crop.png?itok=L7fNAXVV"}}},"media_ids":["671422"],"related_links":[{"url":"https:\/\/youtu.be\/vKk6NH6Gnug","title":"Four-legged robot kicks open door at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1214","name":"News Room"}],"categories":[{"id":"152","name":"Robotics"}],"keywords":[{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71881","name":"Science and 
Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer I\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["nathan.deen@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"668582":{"#nid":"668582","#data":{"type":"news","title":"Students Create Web App That Empowers Tenants Facing Eviction to Fight Back","body":[{"value":"\u003Cp\u003ETenants facing eviction in Atlanta may soon have a new app that can help them to understand their rights through the eviction process.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe web-based app is being developed by a group of Georgia Tech\u0026nbsp;\u003Ca href=\u0022https:\/\/mshci.gatech.edu\/\u0022\u003EMaster of Science in Human-Computer Interaction\u003C\/a\u003E\u0026nbsp;(MS-HCI) students. 
It can inform tenants of their rights, help them ensure landlords have properly followed the law, and help them better prepare for their court hearings.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe students have been developing the web app for the\u0026nbsp;\u003Ca href=\u0022https:\/\/avlf.org\/\u0022\u003EAtlanta Volunteer Lawyers Foundation\u003C\/a\u003E\u0026nbsp;(AVLF), which provides free legal services to residents in Fulton County, including advice for tenants facing evictions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJessie Chiu, one of the students who worked on the app before graduating in May, said the eviction problem throughout Atlanta is extensive.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn an\u0026nbsp;\u003Ca href=\u0022https:\/\/www.wabe.org\/fulton-countys-3-p-m-eviction-hearing\/\u0022\u003Earticle\u003C\/a\u003E\u0026nbsp;published by WABE, a local NPR affiliate, Fulton County Chief Magistrate Judge Cassandra Kirk said her office receives 40,000 eviction filings per year, or about 800 per week.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cAtlanta has one of the highest eviction rates in America, Fulton County especially,\u201d Chiu said. \u201cIt disproportionately affects people of marginalized communities. There\u2019s a huge power imbalance between landlords and tenants, and landlords will employ attorneys who specifically work on evicting tenants, while tenants have limited legal resources and knowledge.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe AVLF operates the\u0026nbsp;\u003Ca href=\u0022https:\/\/avlf.org\/get-help\/evictions\/\u0022\u003EHousing Court Assistance Center\u003C\/a\u003E\u0026nbsp;(HCAC), a walk-in advice clinic where volunteer attorneys and paralegals provide free legal advice to tenants facing eviction.
The clinic is located at the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.magistratefulton.org\/\u0022\u003EFulton County Magistrate Court\u003C\/a\u003E\u0026nbsp;in downtown Atlanta.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWorking inside the Culture and Technology (CAT) Lab directed by Associate Professor Betsy DiSalvo, Chiu and fellow student Xiao Luo led the group\u2019s research efforts this year just before graduating.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo gain firsthand insights into the legal process, they spent three months interviewing tenants and observing counseling sessions with volunteer HCAC attorneys. This helped them understand what tenants needed from the web app while ensuring the advice provided would be consistent with advice given by attorneys.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt\u2019s important to get in front of our end users,\u201d Chiu said. \u201cOnce we got to sit in on these sessions with tenants undergoing eviction, we gained valuable insights into the complexity of the process. We decided to expand the available resources, covering each stage of the defense process so it became a comprehensive, step-by-step guide.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERecognizing the limitations of the HCAC to address a city-wide problem, Luo voiced the need for a digital solution.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThere\u2019s limited access to the attorneys,\u201d Luo said. \u201cAt the clinic, there\u2019s always a long queue of people there. This could provide users with immediate access to resources whenever they need them.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the most challenging procedures in the eviction process is the seven-day response time given to tenants. Anyone served with an eviction notice must file an answer within that timeframe. 
Luo said the paperwork is long and complicated and can easily trip up tenants with limited legal knowledge.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFailure to submit the answer within seven days condemns tenants to a default judgment hearing, where judges often rule in favor of landlords.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurrently, the web app provides users with reminders to file their answers and check the court website for any scheduled hearings. Looking to the future, Luo said the goal is to incorporate real-time updates and notifications. The group will need the cooperation of the court system to do so.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELuo said her team is dedicated to optimizing the web app\u2019s functionality and simplifying the process, making it more efficient and less time-consuming for users.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe hope to develop a more flexible solution where users won\u2019t feel tied to a step-by-step process,\u201d Luo said. \u201cInstead, they can follow some simple prompts, describe their situations, and it will provide them with the solutions and advice they need.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChiu said she hopes the project is the first step toward revolutionizing the eviction defense landscape.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThere\u2019s still a lot of work to do, but it\u2019s exciting to see how tech can be used for leveling the playing field and giving people equal access to information and the tools they need to fight for their rights,\u201d Chiu said.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA group of graduate students in the School of Interactive Computing are developing a web-based app for the\u0026nbsp;\u003Ca href=\u0022https:\/\/avlf.org\/\u0022\u003EAtlanta Volunteer Lawyers Foundation\u003C\/a\u003E, which provides free legal services to residents in Fulton County, including advice for tenants facing 
evictions.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Human-computer interaction students have developed a new tool to inform tenants of their rights, help them to ensure landlords have properly followed the law, and help them to better prepare for their court hearings."}],"uid":"32045","created_gmt":"2023-07-25 19:14:34","changed_gmt":"2023-07-25 19:21:49","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-07-25T00:00:00-04:00","iso_date":"2023-07-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"671216":{"id":"671216","type":"image","title":"tenantevictionapp_0.jpeg","body":null,"created":"1690312487","gmt_created":"2023-07-25 19:14:47","changed":"1690312487","gmt_changed":"2023-07-25 19:14:47","alt":"Georgia Tech students Jessie Chiu and Xiao Luo","file":{"fid":"254234","name":"tenantevictionapp_0.jpeg","image_path":"\/sites\/default\/files\/2023\/07\/25\/tenantevictionapp_0.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/07\/25\/tenantevictionapp_0.jpeg","mime":"image\/jpeg","size":59025,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/07\/25\/tenantevictionapp_0.jpeg?itok=imuwq6uO"}}},"media_ids":["671216"],"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"42901","name":"Community"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"10199","name":"Daily Digest"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive 
Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"668495":{"#nid":"668495","#data":{"type":"news","title":"Stress Test Method Detects When Object Recognition Models are Using Shortcuts ","body":[{"value":"\u003Cp\u003EA new \u201cstress test\u201d method created by a Georgia Tech researcher allows programmers to more easily determine if trained visual recognition models are sensitive to input changes or rely too heavily on context clues to perform their tasks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EViraj Prabhu, a Ph.D. student in Georgia Tech\u2019s School of Interactive Computing, introduced the LANCE (Language-Guided Counterfactuals) method in a recent research paper that shows how deep object recognition models are prone to taking shortcuts through context clues when classifying images.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIdeally, models should understand exactly what they\u2019re prompted to search for, Prabhu said, but because of spurious correlation, they tend to use irrelevant information in images as they make predictions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrabhu used LANCE to stress test well-known models that have been trained on the image database ImageNet. Working with Assistant Professor Judy Hoffman and co-authors Sriram Yenamandra and Prithvijit Chattopadhyay, he discovered many instances in which the models were overly reliant on context in the images they classified.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn some examples, the models showed they were using weather in the background to classify images rather than recognizing the object of interest.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOn another stress test, Prabhu challenged the models to classify images with seatbelts. All the test images contained seatbelts inside cars. 
When Prabhu generated new images by changing the parameters to \u201cseatbelts on a bus,\u201d the performance and accuracy of the trained models dropped. This suggested the models thought seat belts were exclusive to cars.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWhen a model is getting something right, is it getting it right because it really understands it, or is it picking up on some context clues and relying on them?\u201d Prabhu said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThere is no reason why it should be relying on what kind of vehicle it is to know whether there is a seatbelt, but models often do this. It\u2019s more generally known as model bias or a spurious correlation problem.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe models displayed the same flaws when Prabhu used LANCE to test images for dog sleds. The models almost exclusively associated dog sleds with Huskies, leading them to focus their searches on the breed most associated with sleds.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cimg alt=\u0022Three students working together\u0022 height=\u0022567\u0022 src=\u0022https:\/\/www.cc.gatech.edu\/sites\/default\/files\/images\/general\/2023\/208A9981.jpg\u0022 width=\u0022850\u0022 \/\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFrom left to right, Sriram Yenamandra, Viraj Prabhu, and Prithvijit Chattopadhyay, discuss their LANCE method for detecting input changes that deep object recognition models are sensitive to. Photos by Kevin Beasley\/College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrabhu said the prompts given to the models were generated by finetuning LLaMA, a large-language model created by Meta AI, while using training data automatically generated by Open AI\u2019s ChatGPT. For an image of someone riding a bike, he generated a caption using an automated captioning system. 
Then, he used the finetuned LLaMA to make a structured change to the caption, only changing a single concept at a time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt would change \u2018person riding a bicycle\u2019 to \u2018person carrying a bicycle,\u2019 and then we pass it to the generative model and use it to generate a new image while changing nothing else,\u201d he said. \u201cUsing a recently introduced targeted editing technique from Google Research based on prompt-to-prompt tuning, we can now change only the relationship between the person and bicycle. Then we get an image of a person carrying a bicycle, with everything else being the same. Now we can use this as a counterfactual test image.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat allows Prabhu to compare the model\u2019s new prediction to the original. If the prediction has changed, it\u2019s likely the model is relying on spurious correlations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrabhu said the LANCE method can be applied at scale for any new data set.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESpurious correlation has been a known weak link for deep learning models, but Prabhu said the benefit of LANCE is that it allows programmers to probe their models for those weaknesses before deployment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETraditionally, these models are trained through goal-oriented methods in which the models receive points for displaying the correct image and lose points for getting them wrong. Prabhu said that\u2019s the most likely reason why the artificial intelligence in the models tries to find shortcuts, like using contextual clues, to achieve their goals.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe implications also expand beyond diagnosing object recognition models trained on ImageNet. 
LANCE can be applied to computer vision technology used in self-driving vehicles, which need to be as foolproof as possible before they\u2019re deployed on the road.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIn high-stakes applications like self-driving, people are using discriminative approaches \u2014 you have an object detection system that can detect cars and pedestrians and draw boxes around them,\u201d Prabhu said. \u201cUsing LANCE, we can probe these discriminative models using generative approaches and make them better. The hope is we can discover failures before they happen.\u201d\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ENew research from Georgia Tech\u0027s School of Interactive Computing illustrates how deep object recognition models can use irrelevant information in images as they make predictions.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"New research illustrates how deep object recognition models can use irrelevant information in images as they make predictions."}],"uid":"32045","created_gmt":"2023-07-17 18:36:17","changed_gmt":"2023-07-17 18:51:35","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-07-17T00:00:00-04:00","iso_date":"2023-07-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"187915","name":"go-researchnews"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Comms. 
Officer I\u003Cbr \/\u003E\r\nSchool of Interactive Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"668153":{"#nid":"668153","#data":{"type":"news","title":"New PEO Scholar Continues Quest to Build Assistive, Customizable Robots","body":[{"value":"\u003Cp\u003EFor Erin Botti, the field of human-robot interaction (HRI) provided the answer to what she wanted to do with her life.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHer father was an engineer, and her mother was a psychologist. She was interested in both.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cI realized there was this whole field of human-robot interaction, which combines those two fields,\u201d Botti said. \u201cI get to code the robot and write algorithms and I also get to run human-subject experiments and analyze how people feel about the robot.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt isn\u2019t the dynamics and intricacies of robotics that drives Botti as much as the human element that HRI explores.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWhen I tell people I work with robotics, they ask if I\u2019m working on Terminator,\u201d she said. \u201cI\u2019m trying to do the opposite \u2014 building robots that are helpful and customizable.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBotti, a fourth-year Ph.D. student under the advisement of Interactive Computing Assistant Professor Matthew Gombolay, recently received the P.E.O. Sisterhood\u2019s Scholar Award. The merit-based award is given to women pursuing doctoral-level degrees and comes with a $20,000 scholarship.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe P.E.O. 
Sisterhood is an educational organization founded in 1869 dedicated to the advancement of women in higher education with more than 6,000 local chapters in North America and about 250,000 active members.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt\u2019s nice to be honored,\u201d Botti said. \u201cI wasn\u2019t really expecting it. It can help me go to more conferences that I wouldn\u2019t necessarily be able to attend otherwise. A lot of my time here was during Covid, so we didn\u2019t get to travel much. It\u2019ll be nice to broaden my network and see other types of research.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe bulk of Botti\u2019s research has focused on training robots through human demonstration, also known as Learning from Demonstration (LfD).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBotti received a best paper award in 2022 from the International Conference on Human-Robot Interaction for co-authoring \u201cMIND MELD: Personalized Meta-Learning for Robot-Centric Imitation Learning.\u201d The paper explores robots designed to be taught by everyday people and how the robot can learn correctly if users lack the expertise to teach them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cPeople can be suboptimal when giving demonstrations because they may not understand the robot or there may be a correspondence problem,\u201d Botti said. \u201cMaybe your arm is different from the robot\u2019s arm, and when you perform the motion, it may not work as well. People can also take shortcuts that the robot should not follow. And people are heterogenous. The way I would show the robot to do something is different from the way someone else would.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe developed an algorithm that learns from suboptimal and heterogenous demonstrators. 
It uses a personalized embedding that describes how a person is suboptimal, and then we can use that embedding to learn how to correct their demonstrations and shift them to be better.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBotti also co-authored a paper that was accepted to the 2022 Conference on Robotic Learning (CoRL), which expanded upon her research in the HRI paper, exploring how robots can provide feedback to their human trainers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBotti is now researching how to develop an in-home robot assistant for older adults that can perform daily chores, such as loading a dishwasher, and adapt to user preferences.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cErin is on the cusp of putting robots in the hands of regular people in their homes to learn and perform assistive tasks and validating decades of research in robotics,\u201d Gombolay said. \u201cHer unique focus on robotic assistance for the elderly will have significant broader impacts on society. She is bright, inquisitive, savvy, and fearless, and this award will help her leverage those assets to change the world of robotics.\u201d\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EErin Botti,\u0026nbsp;a fourth-year human-robot interaction\u0026nbsp;Ph.D. student, recently received the P.E.O. Sisterhood\u2019s Scholar Award. The merit-based award is given to women pursuing doctoral-level degrees and comes with a $20,000 scholarship.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A fourth-year Ph.D. student studying human-robot interaction recently received the P.E.O. 
Sisterhood\u2019s Scholar Award."}],"uid":"32045","created_gmt":"2023-06-20 16:50:45","changed_gmt":"2023-07-12 18:07:16","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-06-20T00:00:00-04:00","iso_date":"2023-06-20T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"670998":{"id":"670998","type":"image","title":"Erin Botti","body":"\u003Cp\u003EErin Botti, a fourth-year Ph.D. student in robotics at Georgia Tech poses with four-legged robot (Photos by Terence Rushin\/Colege of Computing)\u003C\/p\u003E\r\n","created":"1687279855","gmt_created":"2023-06-20 16:50:55","changed":"1687279855","gmt_changed":"2023-06-20 16:50:55","alt":"Erin Botti, a fourth-year Ph.D. student in robotics at Georgia Tech poses with four-legged robot (Photos by Terence Rushin\/Colege of Computing)","file":{"fid":"253985","name":"Erin Hedlund_86A8765.jpeg","image_path":"\/sites\/default\/files\/2023\/06\/20\/Erin%20Hedlund_86A8765.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/06\/20\/Erin%20Hedlund_86A8765.jpeg","mime":"image\/jpeg","size":63161,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/06\/20\/Erin%20Hedlund_86A8765.jpeg?itok=yPesG2Pv"}}},"media_ids":["670998"],"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"152","name":"Robotics"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"667","name":"robotics"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive 
Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["Nathan.deen@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"668351":{"#nid":"668351","#data":{"type":"news","title":"New Chef Dataset Brings AI to Cooking","body":[{"value":"\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EArtificial intelligence (AI) can help people shop, plan, and write \u2014 but not cook. It turns out humans aren\u2019t the only ones who have a hard time following step-by-step recipes in the correct order, but new research from the Georgia Institute of Technology\u2019s College of Computing could change that.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EResearchers created a dataset called ChattyChef, which uses natural language processing models that can help a user cook a recipe. Using the open-source large language model GPT-J, ChattyChef\u2019s dataset of cooking dialogues follows recipes with the user. \u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThe researchers presented their AI in the paper, \u201c\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2305.17280\u0022\u003E\u003Cspan\u003E\u003Cspan\u003EImproved Instruction Ordering in Recipe-Grounded Conversation\u003C\/span\u003E\u003C\/span\u003E\u003C\/a\u003E,\u201d presented at the 61st annual meeting of the \u003Ca href=\u0022https:\/\/2023.aclweb.org\/\u0022\u003E\u003Cspan\u003E\u003Cspan\u003EAssociation for Computational Linguistics\u003C\/span\u003E\u003C\/span\u003E\u003C\/a\u003E. \u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EAlthough other researchers have theorized about the possibility of an AI chef, Georgia Tech\u2019s work pushes the field forward. 
\u201cWe are one of the first research teams to analyze the challenges of using large language models for building an AI chef,\u201d said Duong Le, a Ph.D. student in the School of Interactive Computing. \u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EMost attempts at using language models for cooking fail because GPT-J doesn\u2019t understand what the user wants to do next, or user intent, and has difficulty tracking how far the user is in the recipe \u2014 what the researchers call the \u201cstate of the conversation.\u201d It also can\u2019t easily answer clarification questions, such as those about ingredient amounts or cooking times.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EFor example, maybe someone is trying to cook hashbrowns. The AI tells them to melt butter in the pan and add the potatoes. The user then asks about the next step. A bad bot might jumble the order and tell them to serve the hashbrown even though they haven\u2019t finished cooking it yet. 
Or a user asks a follow-up question about how long to cook the hashbrown, and the AI won\u2019t be precise enough, instead giving a general time and not specifying the cooking time for each side.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EWith this in mind, the researchers ensured their model had two key features:\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EUser intent detection to determine the user\u2019s current intent within a fixed set of possibilities, such as \u201cAsk for next instruction\u201d or \u201cAsk for details about ingredients.\u201d\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EInstruction state tracking to identify which recipe step the user is on, which works with 80% accuracy. \u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThe combined information from these features supports the third innovation of ChattyChef \u2014 response generation. User intent helps generate the best response to answer a user\u2019s question. The instruction state selects the most relevant parts of the recipe rather than including the entire recipe, to avoid confusing the user or burdening them with extra steps as they are cooking.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThe ChattyChef dataset is built off WikiHow recipes with positive ratings and fewer than eight steps. The researchers crowdsourced people to role play how they might use ChattyChef to determine what instructions would be best to include in the dataset. 
\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThe researchers believe the innovations of ChattyChef could be used in many domains besides cooking, such as repair manuals or software documentation. \u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EDuong Minh Le, Ruohao Guo, Wei Xu, and Alan Ritter. 2023. Improved instruction ordering in recipe-grounded conversation. arXiv preprint arXiv:2305.17280.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThis research is supported in part by the National Science Foundation awards IIS-2112633 and IIS-2052498.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EResearchers created a dataset called ChattyChef, which uses natural language processing models that can help a user cook a recipe. 
Using the open-source large language model GPT-J, ChattyChef\u2019s dataset of cooking dialogues follows recipes with the user.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Researchers created a dataset called ChattyChef, which uses natural language processing models that can help a user cook a recipe."}],"uid":"34541","created_gmt":"2023-07-05 14:56:02","changed_gmt":"2023-07-05 14:57:30","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-07-05T00:00:00-04:00","iso_date":"2023-07-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"671097":{"id":"671097","type":"image","title":"GettyImages-1430305488.jpg","body":"\u003Cp\u003ECourtesy of Getty Images\u003C\/p\u003E\r\n","created":"1688568997","gmt_created":"2023-07-05 14:56:37","changed":"1688568997","gmt_changed":"2023-07-05 14:56:37","alt":"Woman chopping peppers in front of laptop","file":{"fid":"254101","name":"GettyImages-1430305488.jpg","image_path":"\/sites\/default\/files\/2023\/07\/05\/GettyImages-1430305488.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/07\/05\/GettyImages-1430305488.jpg","mime":"image\/jpeg","size":9990317,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/07\/05\/GettyImages-1430305488.jpg?itok=WPCIgGB6"}}},"media_ids":["671097"],"groups":[{"id":"1214","name":"News Room"},{"id":"1188","name":"Research Horizons"},{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"187915","name":"go-researchnews"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone, Senior Research 
Writer\/Editor\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:tess.malone@gatech.edu\u0022\u003Etess.malone@gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"667604":{"#nid":"667604","#data":{"type":"news","title":"The Algorithm and the Damage Done","body":[{"value":"\u003Cp\u003EAlgorithms might appear harmless, but some of them are far from it. They gather information and make calculations, but whether they do so in a neutral manner is a debated issue.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe harmful effects of an algorithm can range from labeling and categorizing someone into a box that inaccurately depicts who they really are, to altering one\u2019s future because of the way they answered a question on an exam or job application.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn some cases,\u0026nbsp;\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=tv92WWqUQyA\u0022\u003Ealgorithms can reinforce systems that are unjust or oppressive\u003C\/a\u003E, argues Georgia Tech researcher and School of Interactive Computing Ph.D. candidate\u0026nbsp;\u003Ca href=\u0022https:\/\/twitter.com\/UpolEhsan\/status\/1537112310505824256\u0022\u003EUpol Ehsan\u003C\/a\u003E\u0026nbsp;in his paper,\u0026nbsp;\u003Cem\u003EThe Algorithmic Imprint,\u003C\/em\u003E\u0026nbsp;which was presented at the\u0026nbsp;\u003Ca href=\u0022https:\/\/facctconference.org\/\u0022\u003E2022 Association for Computing Machinery\u2019s FAcct Conference\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2020, Ehsan saw a news report about protests occurring in the United Kingdom. Students across the U.K. spoke out against the results of the General Certificate of Education (GCE) A-level examinations, which had been graded by an algorithm for the first time. The A-levels are the final exams taken before university in the U.K. 
and have a major impact on whether students can attend their desired institutions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Office of Qualifications and Examinations Regulation (Ofqual), the GCE exam governing body in the U.K., said the COVID-19 pandemic had made it necessary to pivot from manual grading to using an algorithm. Protests demonstrated that students found this change to be unacceptable, arguing the algorithm was biased against people from poorer economic backgrounds.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOfqual removed the algorithm from its grading, but that didn\u2019t solve the problem. Ehsan and his colleagues argue the effects of Ofqual\u2019s algorithm lingered long after its removal. The situation is one example of how algorithms can leave hard imprints on the societies in which they are deployed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cMost students we interviewed for the paper had their grades improved,\u201d Ehsan said. \u201cBut they were still angry. That\u2019s when I started thinking, \u2018Why are people still angry even if their results aren\u2019t bad?\u2019\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd why was the U.K. the only country to receive any media coverage when the same exams are administered in more than 160 countries, including Ehsan\u2019s home country of Bangladesh?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThis is not a U.K. issue,\u201d he said. \u0022This is a global issue. If we don\u2019t share people\u2019s stories, their narratives get erased from the historical narrative. If we didn\u2019t bring this up, largely speaking, the Bangladesh narrative would\u2019ve never been captured as the catastrophe that it was.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cProtests were also going on elsewhere. It\u2019s just that they were never covered. These kids were protesting just as much as the U.K. 
kids.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe fact that many of the other countries that use the GCEs are members of the Commonwealth \u2014 meaning they were once occupied by the British Empire \u2014 wasn\u2019t lost on Ehsan.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe voices in the Global South weren\u2019t being heard after suffering the effects of an algorithm designed by the Global North.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cRight now, the current way we evaluate an algorithm\u2019s impact is from the birth to the death of the algorithm, from its deployment to its destruction,\u201d Ehsan said. \u201cWhen an algorithm is deployed, we do an impact assessment. When it is no longer deployed, we stop it, and that\u2019s where we think this is the end.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cAnd that is the fundamental flaw in our thinking. Even when the algorithm was taken out, its harms persisted.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe imprints can be even more damaging when they mimic or reinforce modern and historical systems of discrimination and oppression. Ehsan argues that was the case with the Commonwealth nations that also use the GCEs, where the exams were already considered unfair and biased before the algorithm was deployed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt feels like I\u2019m paying money to my ex-colonizer for a piece of certificate that tells the world I am no dumber than a local UK kid,\u201d said one student whom Ehsan interviewed during his research. \u201cSometimes it\u2019s hard to ignore that reality.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan compared the effects of colonialism to trying to erase pencil markings from a piece of paper. Even after the eraser has been used, the traces of the pencil markings are still visible.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cOne of the things you see in postcolonialism is that there are remnants of the British infrastructure that we still live with today,\u201d Ehsan said. 
\u201cJust because colonizers leave, does not mean colonialism has left. Just because an algorithm has left, doesn\u2019t mean its impact has left.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan said the goal of his paper is to bring awareness to the imprints that algorithms can leave so developers can consider the potential impacts of an algorithm before deploying it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cOne of the things I wanted to do was drive policy changes,\u201d he said. \u201cI didn\u2019t want this to be a research project just to have a research project. I had a moral reason behind it. I was driven by it.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan also said he would like to see developers, researchers, and practitioners design algorithms in a way in which their impacts can be controlled and mitigated, and if an algorithm harms a group of people, reparations should be considered to atone for the damage.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cAlgorithms leave imprints that are often immutable,\u201d he said. 
\u201cJust because they are made of software, it doesn\u2019t mean there\u2019s an undo button there.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe need people to understand that algorithms have consequences that outlive their own existence, and if that doesn\u2019t bring us into a more mindful, ethical way of thinking about deployments, I\u2019d be very sad.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Algorithmic Imprint was co-authored with Ranjit Singh and Jacob Metcalf from\u0026nbsp;\u003Ca href=\u0022https:\/\/datasociety.net\/\u0022\u003EData \u0026amp; Society Research Institute\u003C\/a\u003E\u0026nbsp;and Professor Mark Riedl from the School of Interactive Computing.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA researcher from the School of Interactive Computing finds that the imprints of a biased or flawed algorithm can be even more damaging when they mimic or reinforce modern and historical systems of discrimination and oppression.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A Georgia Tech researcher is working to shine a light on the potential harm algorithms can inflict, even after they are no longer in use."}],"uid":"32045","created_gmt":"2023-05-02 14:29:36","changed_gmt":"2023-05-02 14:31:56","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-05-02T00:00:00-04:00","iso_date":"2023-05-02T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"670711":{"id":"670711","type":"image","title":"Upol Ehsan (1).jpeg","body":null,"created":"1683037795","gmt_created":"2023-05-02 14:29:55","changed":"1683037795","gmt_changed":"2023-05-02 14:29:55","alt":"Georgia Tech Ph.D. 
Upol Ehsan presenting his work, The Algorithmic Imprint","file":{"fid":"253623","name":"Upol Ehsan (1).jpeg","image_path":"\/sites\/default\/files\/2023\/05\/02\/Upol%20Ehsan%20%281%29.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/05\/02\/Upol%20Ehsan%20%281%29.jpeg","mime":"image\/jpeg","size":48049,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/05\/02\/Upol%20Ehsan%20%281%29.jpeg?itok=lbbrV0a0"}}},"media_ids":["670711"],"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"8862","name":"Student Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen Communications Officer I School of Interactive Computing nathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["nathan.deen@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"667603":{"#nid":"667603","#data":{"type":"news","title":"Digital Mental Health Resources Not Meeting Perinatal Black Women\u0027s Needs","body":[{"value":"\u003Cp\u003EPregnant and postpartum Black women experience disproportionately higher rates of mental health challenges, and new research indicates that a one-size-fits-all approach to digital mental health tools and platforms is falling short for these women.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVanessa Oguamanam has researched the correlation of digital tools and how often Black women in perinatal stages use them to improve their mental health.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to the Anxiety and Depression Association of America, Black women are at\u0026nbsp;\u201chigher risk for experiencing perinatal 
and postnatal anxiety disorders such as depression, anxiety, obsessive compulsive disorder, and posttraumatic stress disorder.\u0022\u0026nbsp;The risk for perinatal mood and anxiety disorders (PMADs) is estimated to be double that of the general population.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/chi.gatech.edu\/\u0022\u003E[MICROSITE: Georgia Tech at CHI 2023]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe problem has worsened since the COVID-19 pandemic, says Oguamanam, a Ph.D. student in the School of Interactive Computing under the advisement of Associate Professor Andrea Parker, founder and director of the\u0026nbsp;\u003Ca href=\u0022https:\/\/sites.gatech.edu\/wellnesstechlab\u0022 target=\u0022_blank\u0022\u003EWellness Technology Research Lab\u003C\/a\u003E. Oguamanam has spent most of her doctoral career researching technology designed to benefit the health of Black women.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cI had friends who became new moms during the pandemic and just seeing the extra amount of stress that they were enduring in addition to balancing new childcare responsibilities led me to start thinking of potential ways I could address this mental health crisis with technology,\u201d she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cMental health is one of the leading complications during pregnancy and childbirth, and it\u2019s a contributing factor to some maternal deaths. The pandemic exacerbated all of that. We\u2019re seeing rates that are skyrocketing.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn a paper that was accepted at the 2023 Conference on Human Factors in Computing Systems (CHI), Oguamanam and Parker surveyed 101 pregnant and postpartum Black women. 
They found 34% reported moderate to severe anxiety, while 41% expressed having moderate to severe psychological distress, and 74% experienced a high level of postnatal depressive symptoms.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOguamanam and Parker also studied participant interaction with four main forms of technology \u2014 social media, apps, self-tracking devices, and video calls.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThere\u2019s been very little work investigating how we can design digital mental health tools to support Black pregnant and postpartum women\u2019s needs,\u201d Parker said. \u201cWe\u2019re trying to understand what their current use and satisfaction level is with existing platforms. We need this foundational understanding to drive future design efforts.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research indicates that income and education levels were significant variables among the women surveyed. Of the 101 participants surveyed, 49 identified as low income, and 43 identified as middle to upper income. Forty-three held less than a bachelor\u2019s degree, while 58 held a bachelor\u2019s degree or higher.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThose with higher incomes and education tended to use apps and self-tracking devices more frequently. The use of video calls varied among pregnancy status and the area of the U.S. where participants lived. Women who were pregnant and lived in the South used video calls most frequently.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESocial media was widely used among all demographics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the main takeaways from the study is that participant feedback shows the \u201cone-size-fits-all\u201d approach that digital mental health interventionists often take in their design methods can be insufficient for meeting the needs of pregnant and postpartum Black women. 
Oguamanam said the societal problems of systemic racism and barriers to healthcare that Black women experience aren\u2019t often considered in such efforts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cYears and years of experiencing racial and gendered discrimination have impacted the stress levels of Black women and their overall well-being,\u201d she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt\u2019s important to emphasize that when we\u2019re thinking about health disparities among racial groups, there can be a tendency to think it just boils down to differences in socioeconomic status,\u201d echoed Parker. \u201cBut many of these disparities persist when we compare higher income groups of black women to another racial ethnic group. These inequitable differences reflect a broader set of structural forces that create barriers to healthcare access and quality and increased exposure to mental health threats.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOguamanam and Parker also found that 97% of the women surveyed embraced the identity of the strong Black woman, a representation that has been explored at length by social science researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers agree the external factors of systemic racism and healthcare barriers tend to push Black women toward that identity.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt\u2019s a role that a number of Black women tend to identify with, either consciously or subconsciously,\u201d Oguamanam said. \u201cIt\u2019s the idea of presenting an image of strength and feeling like you have to take care of you, your family, and your community and that you\u2019re responsible for carrying the weight of the world on your shoulders.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOguamanam\u2019s and Parker\u2019s study indicates women with greater adoption of the strong Black woman persona tended to use self-tracking devices with greater frequency. 
That trend could be attributable to those devices offering a sense of autonomy, Parker said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe whole vision of self-tracking devices is that you can take care of yourself,\u201d Parker said. \u201cYou can monitor your own well-being and oversee collecting data and managing your own health. That type of platform might be more appealing to individuals who have a resistance toward being vulnerable.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese findings only scratch the surface, and Oguamanam and Parker hope to shift current methods and discussions surrounding digital mental health toward a more inclusive environment that includes the experience of pregnant and postpartum Black women.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cMore research is needed to investigate these hypotheses, and ultimately design and demonstrate the effectiveness of digital tools that support the wellbeing of pregnant and postpartum Black women,\u201d Parker said. \u201cSuch innovations can help us make necessary strides toward achieving maternal mental health equity.\u201d\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EFindings from new research from the School of Interactive Computing indicate that a one-size-fits-all approach to digital mental health tools and platforms is falling short for Pregnant and postpartum Black women.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Researchers look to create better support tools based on a study of how participants interact with social media, apps, self-tracking devices, and video calls."}],"uid":"32045","created_gmt":"2023-05-02 14:17:06","changed_gmt":"2023-05-02 14:20:53","author":"Ben 
Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-05-02T00:00:00-04:00","iso_date":"2023-05-02T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"670710":{"id":"670710","type":"image","title":"AA_pregnancy story.jpeg","body":null,"created":"1683037037","gmt_created":"2023-05-02 14:17:17","changed":"1683037037","gmt_changed":"2023-05-02 14:17:17","alt":"A candid stock photo of a black couple seated on a bench smiling together about the impending birth of their child","file":{"fid":"253622","name":"AA_pregnancy story.jpeg","image_path":"\/sites\/default\/files\/2023\/05\/02\/AA_pregnancy%20story.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/05\/02\/AA_pregnancy%20story.jpeg","mime":"image\/jpeg","size":78423,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/05\/02\/AA_pregnancy%20story.jpeg?itok=do6NzrVB"}}},"media_ids":["670710"],"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"8862","name":"Student Research"},{"id":"135","name":"Research"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen Communications Officer I School of Interactive Computing nathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["nathan.deen@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"667602":{"#nid":"667602","#data":{"type":"news","title":"Safe Spaces Facilitating Frank Discussions on \u0027Taboo\u0027 Women\u0027s Health Issues","body":[{"value":"\u003Cp\u003EIn many countries around the world, cultural and religious taboos create environments that silence women and gender minorities and restrict access to health 
information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut a team of graduate students within the School of Interactive Computing has explored how technology can help circumvent these barriers so that women can engage in freer communication on stigmatized health issues.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHannah Tam, Karthik Bhat, and Priyanka Mohindra conducted research to study how safe spaces could be curated to support 35 women of Indian origin in discussing subjects that are otherwise considered taboo.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENeha Kumar, an associate professor who teaches jointly with the School of Interactive Computing and the Sam Nunn School of International Affairs, served as advisor to the students. Kumar is the director of the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/www.tandem.gatech.edu\u0022\u003ETandem Lab\u003C\/a\u003E, which works to explore cultural taboos and investigate their impact on health and well-being among women and gender minorities internationally.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/chi.gatech.edu\/\u0022\u003E[MICROSITE: Georgia Tech at CHI 2023]\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuring the two-week study, the researchers provided the group with discussion prompts on stigmatized topics such as menstrual health, sexual wellbeing, fitness, body image, diet and exercise, and mental health.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cTaboos persist because Indian culture is still largely patriarchal,\u201d Bhat said. \u201cThe cultural norm is that you don\u2019t talk about these things with other people, you don\u2019t talk about them with other genders, you don\u2019t talk about these things outside the home. 
It becomes hard to seek care where it\u2019s necessary and community where it\u2019s necessary.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe restrictions of societal taboos escalate when adolescent girls begin their menstrual cycles, Mohindra said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cEverything about how they\u2019re treated changes,\u201d she said. \u201cThey\u2019re not able to go to temples because it\u2019s considered impure. They\u2019re not allowed to serve food or go into the kitchen. Nobody should know you\u2019re going through that, even though it\u2019s a natural process.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cAnything that has to do with intimacy or anything sexual and a woman is involved, that\u2019s looked down at no matter what age you are,\u201d she said. \u201cGirls are afraid to ask their mothers about these things.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETam said she and her colleagues used legitimate peripheral participation (LPP) as a framework to analyze the social learning that took place in the WhatsApp group they established for the study. The LPP framework creates social interactions among community members that enable them to share knowledge and experiences while allowing other members to observe and learn without actively participating.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe found that learning happened at all levels, which was applicable to both core members and peripheral members, or members who might not have been as active in the group,\u201d Tam said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe group of women participated with partial anonymity, which also provided a sense of security and comfort to most users. 
Using WhatsApp only required members to provide their phone numbers, which helped them conceal their identities to minimize the risk of facing judgment in their communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe group conducted follow-up interviews with 10 of the 35 participants and found that some had started connecting with other group members offline, which Tam said was a step forward from where they had started.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cPeople found relief in being able to talk openly,\u201d Tam said. \u201cOne member said she hadn\u2019t felt comfortable saying the word \u2018period\u2019 aloud \u2014 even among her close friends.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cAs members were unpacking their experiences with taboos outside the group, that led to a lot of members questioning traditional systems and social structures.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe goal of the project, Bhat said, was not about making a direct impact on the health of the women participating. Instead, the researchers aimed to find ways in which social media platforms could enable sharing of sensitive information and equip participants with the ability to navigate cultural barriers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cOur findings suggested that this space and our intervention gave them the avenue to learn how to engage on stigmatized health topics and then take these conversations out into the world,\u201d he said.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThere needs to be dedicated efforts to educating at various levels so that people who are in positions of power, such as governments and healthcare authorities, recognize and address the challenges in health communication that we saw, and work to address them. This is critical for women\u2019s health and wellbeing worldwide. 
But given that these problems exist and are not going away anytime soon, how can technology support us in addressing this gap now?\u201d\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ENew research from Georgia Tech\u0027s School of Interactive Computing investigates the use of curated safe spaces in India where women can talk openly about\u0026nbsp;menstrual health, sexual wellbeing, fitness, body image, diet and exercise, and mental health.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Peer-reviewed research explores cultural taboos and investigate their impact on health and well-being among women and gender minorities internationally."}],"uid":"32045","created_gmt":"2023-05-02 13:56:11","changed_gmt":"2023-05-02 14:00:17","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-05-02T00:00:00-04:00","iso_date":"2023-05-02T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"670709":{"id":"670709","type":"image","title":"indian women_taboo subjects.jpeg","body":null,"created":"1683035784","gmt_created":"2023-05-02 13:56:24","changed":"1683035784","gmt_changed":"2023-05-02 13:56:24","alt":"A young woman makes a shushing gesture","file":{"fid":"253621","name":"indian women_taboo subjects.jpeg","image_path":"\/sites\/default\/files\/2023\/05\/02\/indian%20women_taboo%20subjects.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/05\/02\/indian%20women_taboo%20subjects.jpeg","mime":"image\/jpeg","size":42935,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/05\/02\/indian%20women_taboo%20subjects.jpeg?itok=g_1Ptyr-"}}},"media_ids":["670709"],"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"8862","name":"Student 
Research"},{"id":"135","name":"Research"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer I\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["nathan.deen@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"667599":{"#nid":"667599","#data":{"type":"news","title":"Like Humans and Animals, AI Agents Find Their Way Through Memory","body":[{"value":"\u003Cp\u003EMemory may be just as important to artificial intelligence (AI) agents in creating \u2018mental maps\u2019 as it is to humans and animals.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA recent paper authored by Georgia Tech researchers makes a surprising discovery \u2014 blind AI agents use memory to create maps and navigate through their surrounding environment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EErik Wijmans, the lead author of the paper, said the idea for his research began by asking if AI agents might mimic human and animal behavior in how they navigate and adjust to their environments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cHumans and animals navigate with some type of spatial representation \u2014 what is commonly referred to as a cognitive map,\u201d Wijmans said. \u201cSo, we were wondering how AI agents navigate and if it\u2019s similar to that.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe first question we asked was, \u2018Is memory important to these agents?\u0027 It is. They tend to remember at least the past thousand interactions with their environment.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWijmans completed his Ph.D. 
in computer science in 2022 and is currently a research scientist at Apple.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWijmans created blind AI agents and trained them by dropping them into the floorplans of more than 500 houses with the goal of navigating from one area of the house to another area. The only sense the agents had to work with was egomotion \u2014 the ability to know how far they had moved.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe agent bumped its way around from room to room, backtracking as needed, before finding its destination. Wijmans then created a second probe agent that was injected with the memories of the first agent. The probe agent used the memory of the original agent to take shortcuts to quickly reach its objective.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt\u2019s surprising that they can do this without vision because they\u2019re in an unknown environment that they\u2019ve never seen before, so they have to figure out how to navigate in that environment and also figure out the structure of it,\u201d Wijmans said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThis is a result that shows that our hypothesis is true, or at the very least along the right direction. We took an agent and put it in a complex environment and trained it for a task that requires it to interact with that environment, and the result was mapping.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWijmans\u2019 paper,\u0026nbsp;\u003Cem\u003EEmergence of Maps in the Memories of Blind Navigation Agents\u003C\/em\u003E, is one of four outstanding paper award winners for the 2023 International Conference on Learning Representations, which is being held May 1-5 in Kigali, Rwanda. His research was also recognized by the Georgia Tech chapter of Sigma Xi (The Scientific Research Society) and received a 2023 GT Sigma Xi Best Ph.D. 
Thesis Award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWijmans is advised by School of Interactive Computing Distinguished Professor Irfan Essa and Associate Professor Dhruv Batra.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cErik makes fundamental contributions to multiple sub-areas of AI, including reinforcement learning, robotics, and embodied perception,\u201d Batra said. \u201cHis hypothesis is a bold one \u2014 that intelligence emerges via large-scale learning by an embodied agent accomplishing goals in a rich 3D environment.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn his paper, Wijmans describes mapping as an emergent phenomenon. Neural network models for navigation have performed well despite not containing any explicit mapping modules.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWijmans\u2019 AI agents showed a 95% success rate when they used memory to navigate, whereas memoryless agents failed entirely. This seems to suggest that agents create mental maps as a natural part of learning to navigate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe results were so initially surprising that my first gut instinct was that we had done something wrong in our experimental design,\u201d he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThis is a work with a very complex body of experiments that tie together a single narrative,\u201d he said. \u201cThis is a challenging thing to do. When you\u2019re trying to test whether something involves memory, you must come up with ideas of what to test for and how to test for that. You must make each experiment as precise as possible to not get false positives, and that involves considerable experimental design and effort.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWijmans said he made it as difficult as possible for the agent to reach its goal, removing vision, audio, olfactory, haptic, and magnetic sensing, and giving it no bias toward mapping. 
It had no supervision or any kind of outside help.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cSurprisingly, even under these deliberately harsh conditions, we find the emergence of map-like spatial representations in the agent\u2019s non-spatial unstructured memory. It not only successfully navigates to the goal but also exhibits intelligent behavior like taking shortcuts, following walls, and detecting collisions.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe discovery also suggests that AI, humans, and animals all share a natural characteristic of problem solving and navigation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe one link that we can make is the idea of convergent evolution, which is where you see the same mechanism evolve multiple times in species that have no common ancestor that shares that mechanism,\u201d Wijmans said. \u201cMammals build maps, insects build maps, and now AI agents build maps. So perhaps mapping is the natural solution to navigation.\u201d\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA recent paper authored by Georgia Tech researchers makes a surprising discovery \u2014 blind AI agents use memory to create maps and navigate through their surrounding environment.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A recent paper authored by Georgia Tech researchers makes a surprising discovery \u2014 blind AI agents use memory to create maps and navigate through their surrounding environment."}],"uid":"32045","created_gmt":"2023-05-02 13:23:26","changed_gmt":"2023-05-02 13:28:07","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-05-02T00:00:00-04:00","iso_date":"2023-05-02T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"670706":{"id":"670706","type":"image","title":"Erik Wijmans, Irfan 
Essa","body":null,"created":"1683033816","gmt_created":"2023-05-02 13:23:36","changed":"1683033816","gmt_changed":"2023-05-02 13:23:36","alt":"Georgia Tech Ph.D. student Erik Wijmans and Distinguished Professor Irfan Essa","file":{"fid":"253618","name":"Erik Wijmans, Irfan Essa_86A9563.jpeg","image_path":"\/sites\/default\/files\/2023\/05\/02\/Erik%20Wijmans%2C%20Irfan%20Essa_86A9563.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/05\/02\/Erik%20Wijmans%2C%20Irfan%20Essa_86A9563.jpeg","mime":"image\/jpeg","size":38004,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/05\/02\/Erik%20Wijmans%2C%20Irfan%20Essa_86A9563.jpeg?itok=GUWuL3qy"}}},"media_ids":["670706"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"8862","name":"Student Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"},{"id":"135","name":"Research"}],"keywords":[{"id":"187812","name":"artificial intelligence (AI)"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003Cbr \/\u003E\r\nCommunications Officer I\u003Cbr \/\u003E\r\nSchool of Interactive Computing\u003Cbr \/\u003E\r\nnathan.deen@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["nathan.deen@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"667347":{"#nid":"667347","#data":{"type":"news","title":"Examining the Boundaries of Using AI \u0027Sensing\u0027 to Understand Office Workers\u2019 Performance and Wellbeing","body":[{"value":"\u003Cp\u003E\u003Cem\u003ENew research findings show that social acceptability and select sharing of AI 
results in the workplace are key to future implementation\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommercial monitoring tools are being introduced in offices alongside newer modes of work \u2013 screen meetings, remote collaboration, digital-first workflows \u2013 as a way for employers to better understand the performance of their workforces.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers at Georgia Tech and Northeastern University conducted a study with information workers to learn about their perspectives on being monitored and their information being collected with passive-sensing enabled artificial intelligence (PSAI), where computing devices can unobtrusively detect and collect user behaviors. That information could then be used to train machine learning models that infer the performance and wellbeing of workers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe wanted to take a closer look at how workers perceive passive-sensing AI in order to make this technology work for the workers, as opposed to making them work for the technology,\u201d said\u0026nbsp;\u003Cstrong\u003EVedant Das Swain\u003C\/strong\u003E, lead researcher and a Ph.D. candidate in computer science at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe says there is an organizational need \u2013 for both employer and employee alike \u2013 to get better insights.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cOne of the underlying subtexts of the research is that there are these asymmetries at work because the employee doesn\u2019t have as much power as the employer. And if these technologies keep progressing as they are, this gap is going to widen because the employer will just keep getting more and more worker information.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers found that some technologies \u2013 fitness trackers and web cams, for example \u2013 used for personal activities may not translate well to work life if they are implemented without considering new norms of work. 
Technologies can now \u201cbreach physical boundaries,\u201d as Das Swain puts it, and using a web cam for work while at home might involve extra setup to close doors and blur backgrounds on the screen. Workers also want careful consideration of the context in which devices can gain information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWork devices monitoring worker activity is appropriate in many cases but work-related apps on personal devices might be a tougher sell.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research results fall in two primary categories:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EAppropriateness\u003C\/strong\u003E\u0026nbsp;\u2013 Understanding socially acceptable data to collect with passive-sensing AI and acceptable circumstances to infer worker performance and wellbeing.\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EDistribution\u003C\/strong\u003E\u0026nbsp;\u2013 Determining\u0026nbsp;what to share about worker data \u2013 and when\u0026nbsp;\u2013\u0026nbsp;with other stakeholders and the methods used.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003ERegarding the appropriateness aspect, Das Swain says that people in general don\u2019t want to feel dehumanized by algorithms. His team\u2019s work takes that idea further by learning about the mental models different workers use to determine what\u2019s appropriate for using PSAI.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cDifferent workers have different ideas of what\u2019s insightful,\u201d he said. \u201cFor example, if I don\u2019t talk to my supervisor about my personal life, why should this machine be sensing that type of information? The alternative viewpoint is that I already know what I\u2019m doing at work, so give me more data. 
I could use sleep and commute data to infer how those activities might affect my work.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDas Swain says there is no one-size-fits-all solution.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cAnd it\u2019s not just about privacy, it\u2019s about utility,\u201d he said. \u201cPeople find utility in different things. Some want more precise information in a work context, and some might want the holistic view of the data, in both cases to find insights for themselves.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe second category of results \u2013 distribution \u2013 is no less tricky. Worker information is ostensibly personal in nature, but collaborative and performance measures at work necessitate the sharing of this information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers found that participants strongly felt that if a machine predicted something related to performance or wellbeing, then they should have enough time to make changes and provide context, such as if a worker is on paternity leave and must alter project deadlines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cOnly at a later point, if at all, can the data be escalated to someone else to help as the situation requires,\u201d said Das Swain. 
\u201cThat was very clear in the study.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne red flag, so to speak, for Das Swain as a researcher is that these technologies don\u2019t afford users any control to understand newer types of personal data that are being collected and stored at work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith algorithmic uncertainty now at the forefront of many conversations, Das Swain views these results from the Georgia Tech and Northeastern group as tangible guideposts for regulators and companies making decisions around public and commercial deployment of AI sensing tech for information workers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe published results will be presented at the ACM CHI Conference on Human Factors in Computing Systems, taking place April 23-28 in Hamburg, Germany. The academic paper,\u0026nbsp;\u003Ca href=\u0022https:\/\/programs.sigchi.org\/chi\/2023\/program\/content\/95708\u0022\u003E\u003Cem\u003EAlgorithmic Power or Punishment: Information Worker Perspectives on Passive Sensing Enabled AI Phenotyping of Performance and Wellbeing\u003C\/em\u003E\u003C\/a\u003E, is co-authored by Das Swain,\u0026nbsp;\u003Cstrong\u003ELan Gao\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003EWilliam Wood\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003ESrikruthi C. Matli\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E, and\u0026nbsp;\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E. 
The work is funded in part by Cisco.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EResearchers at Georgia Tech and Northeastern University conducted a study with information workers to learn about their perspectives on being monitored and their information being collected with passive-sensing enabled artificial intelligence (PSAI), where computing devices can unobtrusively detect and collect user behaviors.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"New research findings show that social acceptability and select sharing of AI results in the workplace are key to future implementation."}],"uid":"32045","created_gmt":"2023-04-14 14:23:22","changed_gmt":"2023-04-14 14:26:21","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-04-14T00:00:00-04:00","iso_date":"2023-04-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"670546":{"id":"670546","type":"image","title":"pic_web_cc_vedant das swain2.png","body":"\u003Cp\u003ESchool of Interactive Computing Ph.D. candidate Vedant Das Swain, lead researcher of a study dubbed \u0022Algorithmic Power or Punishment\u0022 that identifies current boundaries of using AI \u0022sensing\u0022 tools in office spaces.\u0026nbsp;\u003Cem\u003E(Photos by Kevin Beasley\/College of Computing)\u003C\/em\u003E\u003C\/p\u003E\r\n","created":"1681482219","gmt_created":"2023-04-14 14:23:39","changed":"1681482219","gmt_changed":"2023-04-14 14:23:39","alt":"Vedant Das Swain, Ph.D. 
candidate in computer science at Georgia Tech.","file":{"fid":"253427","name":"pic_web_cc_vedant das swain2.png","image_path":"\/sites\/default\/files\/2023\/04\/14\/pic_web_cc_vedant%20das%20swain2.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/04\/14\/pic_web_cc_vedant%20das%20swain2.png","mime":"image\/png","size":472783,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/04\/14\/pic_web_cc_vedant%20das%20swain2.png?itok=sMp3aoNo"}}},"media_ids":["670546"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJosh Preston\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Ca href=\u0022jpreston@cc.gatech.edu\u0022\u003Ejpreston@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"667346":{"#nid":"667346","#data":{"type":"news","title":"Misinformation Detection Models are Vulnerable to ChatGPT and Other LLMs","body":[{"value":"\u003Cp\u003EExisting machine learning (ML) models used to detect online misinformation are less effective when matched against content created by ChatGPT or other large language models (LLMs), according to new research from Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurrent ML models designed for and trained on human-written content have significant performance discrepancies in detecting paired human-generated misinformation and misinformation generated by artificial intelligence (AI) systems, said Jiawei Zhou, a Ph.D. 
student in Georgia Tech\u2019s School of Interactive Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZhou\u2019s paper detailing the findings is set to receive a best paper honorable mention award at the 2023\u0026nbsp;ACM CHI Conference on Human Factors in Computing Systems. Advised by Associate Professor Munmun De Choudhury, Zhou\u2019s research demonstrates that LLMs can manipulate tone and linguistics to allow AI-generated misinformation to slip through the cracks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe found the AI-generated misinformation carried more emotions and cognitive processing expressions than its human-created counterparts,\u201d Zhou said. \u201cIt also tended to enhance details, communicate uncertainties, draw conclusions, and simulate personal tones.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe\u2019re one of the very first to look at this risk. As more people started to use ChatGPT, they\u2019ve noticed this problem, but we were one of the first to provide evidence of this risk. And there are more efforts needed to raise public awareness about this potential and call for more research efforts to combat this risk.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZhou started exploring GPT-3 in 2022 because she wanted to know how one of the early predecessors to ChatGPT would handle prompts that included misinformation about the Covid-19 pandemic. She asked GPT-3 to explain how the Covid-19 vaccines could cause cancer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe results were very concerning because it is so persuasive,\u201d Zhou said. \u201cI had been studying informatics and misinformation for a time, and it was still persuasive, even to me. The output would say, \u2018It can cause cancer because there is this researcher at this institute, and their research is based on medical records and diverse demographics. 
The research supports this possibility.\u2019 The writing of it is so scientific.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZhou and her collaborators accumulated a dataset of human-created misinformation, including more than 6,700 news reports and 5,600 social media posts. From that set, Zhou and her team extracted the most representative topics and documents of human-generated misinformation. They used those to create narrative prompts, which they fed to GPT and recorded the output.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBoth the GPT-generated output and the original human-created dataset were used to test an existing misinformation detection model called COVID-Twitter-BERT (CT-BERT).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZhou said while the human- and AI-generated datasets were intentionally paired, a statistical test showed there are significant differences in detection model performance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECT-BERT experienced a decline in performance in detecting AI-generated misinformation. Out of 500 prompts based on AI-generated misinformation, it failed to recognize 27 as false or misleading, compared to missing only two from the human-generated prompts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe core reason is they are linguistically different,\u201d Zhou said. \u201cOur error analysis reveals that AI misinformation tends to be more complex, and it tends to mix factual statements. It uses one fact to explain another, though the two things might not be related. The tone and sentiment are also different. And there are less keywords that detection tools normally look for.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZhou\u2019s experiments showed that GPT could use information to create a news story using objective, straightforward language and use that same information to create a sympathetic social media post. 
That points to its capability of changing tone and tailoring messages.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIf someone wants to promote propaganda, they can use it to customize a narrative toward a specific community,\u201d Zhou said. \u201cThat makes the risk even greater. It shows that it has some flexibility to alter its tone for different purposes. For news, it can sound logical and reliable. For social media, it conveys information quickly and clearly.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs LLMs continue to rapidly grow and expand, so do the risks of misinformation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChatGPT operates on OpenAI\u2019s GPT-3.5 and GPT-4 models, the latter of which was released on March 14. Since ChatGPT was released, Zhou has given it the same prompts she gave to GPT-3. The results have improved with some corrections, but ChatGPT has the advantage of having more available information about Covid-19, she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZhou said steps should be taken immediately to evaluate how misinformation detection tools can adapt to ever-improving LLMs. She described the situation as an \u201cAI arms race,\u201d and the tools that are currently used to combat misinformation are well behind.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThey are improving the generative capabilities of LLMs,\u201d she said. \u201cThey\u2019re more human-like, more fluent, and less and less distinguishable from human creations. 
We need to think about ways we can distinguish them and how we can improve our misinformation detection abilities to catch up.\u201d\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ENew research indicates that current machine learning models trained on human-produced content can struggle to detect falsehoods generated by AI-powered chatbots.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Because falsehoods generated by ChatGPT are so convincing, even trained researchers struggle to identify misinformation."}],"uid":"32045","created_gmt":"2023-04-14 14:15:06","changed_gmt":"2023-04-14 14:20:36","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-04-14T00:00:00-04:00","iso_date":"2023-04-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"670544":{"id":"670544","type":"image","title":"Jiawei Zhou, a Ph.D. student in Georgia Tech\u2019s School of Interactive Computing.jpeg","body":null,"created":"1681481714","gmt_created":"2023-04-14 14:15:14","changed":"1681481714","gmt_changed":"2023-04-14 14:15:14","alt":"Jiawei Zhou, a Ph.D. student in Georgia Tech\u2019s School of Interactive Computing.","file":{"fid":"253425","name":"Jiawei Zhou, a Ph.D. 
student in Georgia Tech\u2019s School of Interactive Computing.jpeg","image_path":"\/sites\/default\/files\/2023\/04\/14\/Jiawei%20Zhou%2C%20a%20Ph.D.%20student%20in%20Georgia%20Tech%E2%80%99s%20School%20of%20Interactive%20Computing.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/04\/14\/Jiawei%20Zhou%2C%20a%20Ph.D.%20student%20in%20Georgia%20Tech%E2%80%99s%20School%20of%20Interactive%20Computing.jpeg","mime":"image\/jpeg","size":47321,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/04\/14\/Jiawei%20Zhou%2C%20a%20Ph.D.%20student%20in%20Georgia%20Tech%E2%80%99s%20School%20of%20Interactive%20Computing.jpeg?itok=Gde0-sOs"}},"670545":{"id":"670545","type":"image","title":"Jiawei Zhou-munmun.jpeg","body":"\u003Cp\u003ESchool of Interactive Computing Ph.D. student Jiawei Zhou, left, and associate professor Munmun De Choudhury, demonstrate in their latest paper that misinformation detection models are vulnerable to content generated by large language models. (Photos by Kevin Beasley\/College of Computing)\u003C\/p\u003E\r\n","created":"1681481807","gmt_created":"2023-04-14 14:16:47","changed":"1681481807","gmt_changed":"2023-04-14 14:16:47","alt":"School of Interactive Computing Ph.D. 
student Jiawei Zhou, left, and associate professor Munmun De Choudhury, demonstrate in their latest paper that misinformation detection models are vulnerable to content generated by large language models.","file":{"fid":"253426","name":"Jiawei Zhou-munmun.jpeg","image_path":"\/sites\/default\/files\/2023\/04\/14\/Jiawei%20Zhou-munmun.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/04\/14\/Jiawei%20Zhou-munmun.jpeg","mime":"image\/jpeg","size":259753,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/04\/14\/Jiawei%20Zhou-munmun.jpeg?itok=rj7xma5O"}}},"media_ids":["670544","670545"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"192524","name":"ChatGPT"},{"id":"190591","name":"misinformation"},{"id":"89321","name":"Munmun De Choudhury"},{"id":"1027","name":"chi"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003Cbr \/\u003E\r\nSchool of Interactive Computing\u003Cbr \/\u003E\r\nCommunications Officer\u003Cbr \/\u003E\r\n\u003Ca href=\u0022nathan.deen@cc.gatech.edu\u0022\u003Enathan.deen@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"667322":{"#nid":"667322","#data":{"type":"news","title":"Tennis Robot Could Pave Way for Advancement in Fast-Movement Robotics","body":[{"value":"\u003Cp\u003EMatthew Gombolay sees a future for human-scale robots in sports and athletic training.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe imagines robots that can test 
the skills of professional athletes as well as novices looking to learn a new sport. His latest invention may serve as the touchstone to getting there.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGombolay grew up playing a variety of sports, but none appealed to him more than tennis. He said he had been playing with the idea of constructing a tennis robot that could go beyond training against a stationary ball feeder to help a player improve his skills.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/youtu.be\/plUktbRLfQw\u0022\u003E[VIDEO: See Georgia Tech\u0027s Tennis Robot In Action]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhat if he could have a tennis partner that could play with or against him any time he wanted and could help him improve the weakest areas of his game or complement him in a game of doubles?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cRight now, you use ball machines to hit balls of various spin speeds and locations that can emulate what a match might look like,\u201d Gombolay said. \u201cBut that\u2019s very different than accounting for an opponent moving across the courts who is going to be hitting from different positions and with different capabilities in shot selection.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut what would be the best way to create a robot that could simulate a human opponent? He found the answer in one of the sport\u2019s offshoots \u2014 wheelchair tennis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA rapidly growing sport, wheelchair tennis now has its own professional league and is played at all four Grand Slam tournaments as well as the Summer Paralympics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe bought a wheelchair that\u2019s designed for wheelchair tennis because I thought bi-pedal locomotion was a little beyond us at this point,\u201d Gombolay said. 
\u201cIf we had an Atlas robot stomping around on the court, we would damage the court.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAn associate professor of robotics in the School of Interactive Computing, Gombolay has spent the past two years building his passion project. It\u2019s called\u0026nbsp;\u003Ca href=\u0022https:\/\/core-robotics-lab.github.io\/Wheelchair-Tennis-Robot\/\u0022\u003EESTHER\u003C\/a\u003E\u0026nbsp;\u2014 a wheelchair tennis robot that has a tennis racket connected to a single arm. It can cover both sides of the court and could potentially change how robotics can enhance athletic training and performance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWhat really excites me is that it could be a partner for me one day,\u201d Gombolay said. \u201cIt can also be my opponent. It can help me train. I could have it pretend to be the one guy I always lose to because he can exploit this weakness in my game.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cTraining against an opponent is psychologically more stressful. Getting closer to simulating real match conditions can help you improve performance.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EESTHER stands for Experimental Sport Tennis Wheelchair Robot, and its name is a homage to renowned wheelchair tennis player Esther Vergeer. Vergeer held the world No. 1 ranking in women\u2019s wheelchair tennis from 1999 until her retirement in 2013, winning 48 major titles and seven Paralympic gold medals. She was 695-25 in singles matches over her career.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EESTHER is nowhere close to the skill level of its namesake, but building a human-scale robot that can hit a return is a novel achievement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWorking with more than 20 students, Gombolay authored a paper on building ESTHER, which was accepted for publication in the Institute of Electrical and Electronics Engineers Robotics \u0026amp; Automation Society\u2019s Robotics and Automation Letters (IEEE RA-L). 
The team reached a breakthrough late last year when they successfully and consistently programmed ESTHER to locate the tennis ball coming toward it, and to hit a return.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIt took us about two years to get to that point because nobody\u0027s done this before,\u201d Gombolay said. \u201cWe built this up from the ground up. Developing that capability was truly exciting.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EESTHER is powered by two DC motors connected to a gearbox, giving it the quick burst it needs to roll from one side of the court to the other.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cGetting to the ball is not the problem,\u201d Gombolay said. \u201cThe problem is knowing where to go. You have to know so far in advance to figure out where the robot should go. There are some problems for us in pathfinding \u2014 both deciding where to intercept the ball and what path to get there and being able to follow that trajectory with the robot.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGombolay and his team arranged a network of high-resolution cameras around a tennis court and used computer vision algorithms to help ESTHER recognize an incoming tennis ball. The team uses an orange tennis ball because a yellow or green ball can easily get lost among the colors of the court and other surrounding objects.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWhen we use cameras from different angles, we can triangulate where the ball is in space,\u201d said Nathaniel Belles, a master\u2019s student in robotics who works on Gombolay\u2019s team. \u201cAnd once we have enough samples, over time, we can use that position of the ball and see what the trajectory of the ball is, where the arc is going, and where it\u2019s going to end up.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile Gombolay and his team have a long-term vision for ESTHER, it\u2019s a project full of baby steps. 
The team is now working toward making ESTHER capable of hitting a back-and-forth rally.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOnce it can do that, the next phase would be teaching it how to strategize its shot selection.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWe\u2019re working on important modifications like control methods to control where a robot will go,\u201d Gombolay said. \u201cWe\u2019re using reinforcement learning methods so that the robot can learn by itself to get better at where it should go and how it should hit the ball. It should start getting more aggressive and thinking about how it wants to hit the ball and where it wants to hit the ball to win the game.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf it can reach that point, the goal of revolutionizing the way athletes prepare and train could be within reach.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cFrom a training perspective, a system like this could one day help you to play or practice against your opponent, the robot playing as your opponent, adopting the same hitting strategies or tactics without you ever having to actually go and play that opponent,\u201d Gombolay said. \u201cAfter you\u2019ve honed your game against your opponent, then you can go play and hopefully win.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZulfiqar Zaidi, one of the lead students on the project, said the benefits of ESTHER\u2019s technology could expand beyond tennis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cWhile tennis is a great starting point, a system that can play tennis well can have applications in other fields that similarly require fast dynamic movements, accurate perception, and the ability to safely move around humans,\u201d Zaidi said. 
\u201cThis technology could be useful in manufacturing, construction, or any other field that requires a robot to interact with humans while performing fast and precise movements.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe ability to plan and strategize well ahead of time is critical for successful performance in both tennis and other fields. A system that can do this effectively for tennis can also be applied to other scenarios where it is necessary to consider the effect of the current action on the future state of the system.\u201d\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EMatthew Gombolay, associate professor of robotics in the School of Interactive Computing, has created\u0026nbsp;\u003Ca href=\u0022https:\/\/core-robotics-lab.github.io\/Wheelchair-Tennis-Robot\/\u0022\u003EESTHER\u003C\/a\u003E, a wheelchair tennis robot that has a tennis racket connected to a single arm. It can cover both sides of the court and could potentially change how robotics can enhance athletic training and performance.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A new robot developed at Georgia Tech can autonomously track a tennis ball and move around the court to hit and return the ball across the net."}],"uid":"32045","created_gmt":"2023-04-13 18:02:17","changed_gmt":"2023-04-13 20:09:17","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-04-13T00:00:00-04:00","iso_date":"2023-04-13T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"670532":{"id":"670532","type":"image","title":"Georgia Tech Tennis Robot","body":"\u003Cp\u003EA tennis-playing robot being developed at Georgia Tech returns a ball back across the net. 
Photo by Kevin Beasley\/College of Computing\u003C\/p\u003E\r\n","created":"1681416135","gmt_created":"2023-04-13 20:02:15","changed":"1681416135","gmt_changed":"2023-04-13 20:02:15","alt":"Tennis robot being developed at Georgia Tech on court returning ball across the net","file":{"fid":"253412","name":"R5 B Roll  v20.jpg","image_path":"\/sites\/default\/files\/2023\/04\/13\/R5%20B%20Roll%20%20v20.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/04\/13\/R5%20B%20Roll%20%20v20.jpg","mime":"image\/jpeg","size":832924,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/04\/13\/R5%20B%20Roll%20%20v20.jpg?itok=m-8UZEbr"}}},"media_ids":["670532"],"related_links":[{"url":"https:\/\/youtu.be\/plUktbRLfQw","title":"Tennis Robot developed at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"152","name":"Robotics"}],"keywords":[{"id":"667","name":"robotics"},{"id":"192521","name":"tennis robot"},{"id":"175375","name":"matthew gombolay"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003Cbr \/\u003E\r\nSchool of Interactive Computing\u003Cbr \/\u003E\r\nCommunications Officer\u003Cbr \/\u003E\r\n\u003Ca href=\u0022nathan.deen@cc.gatech.edu\u0022\u003Enathan.deen@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"666799":{"#nid":"666799","#data":{"type":"news","title":"App Offers People with Diabetic Foot Sores a Better Chance of Avoiding Amputation","body":[{"value":"\u003Cp\u003EWhen it comes to developing computer solutions to social issues, School of Interactive Computing Associate Professor Rosa 
Arriaga looks to tell the stories of groups who are overlooked and underserved.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA new app Arriaga has in development amplifies the voices of patients with diabetes and diabetic foot ulcers \u2014 a severe complication for more than one third of people living with diabetes that often goes unaddressed until it\u2019s too late.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf left untreated, a diabetic foot ulcer can become infected and lead to amputation. Arriaga\u2019s app may be the tool that prevents the situation from ever coming to that. The app detects the presence of ulcers and tracks whether the conditions of the ulcers worsen.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo create the app, Arriaga partnered with researchers and doctors from the Emory School of Medicine and Grady Memorial Hospital. Their work caught the attention and support of the American Diabetes Association, which awarded them a grant in January.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EArriaga and her collaborators were also among the first recipients of the A.I. Humanity Seed Grant Program, a collaborative effort between Georgia Tech and Emory University to expand partnerships and leverage artificial intelligence to improve society and quality of life.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cI had never come across diabetic foot ulcers,\u201d Arriaga said. \u201cThe way I do research, I try to think of what the gold standard of care is in each domain, and then I think about how computing can help. This is that sweet spot where we can address a big problem and make an impact with relatively simple computing.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EArriaga calls it the Diabetic Ulcer Computational Sensing System, which can be accessed through a mobile phone. It requires patients to use their phones to perform a foot health exam \u2014 documenting the condition of their skin, their sensations, their circulation, and their walking gait. 
Patients can also have a caregiver do these things for them if they are physically unable.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWho is at risk?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe looming problem of diabetic ulcers did not escape the attention of two assistant professors at the Emory School of Medicine: Maya Fayfman, who works in the Division of Endocrinology, and Marcos Schechter of the Division of Infectious Diseases. Working alongside Dr. Gabriel Santamarina, director of podiatry at Grady, they noticed the increase in amputations at Grady and how many of those patients came from underserved communities.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cDespite recent advances in diabetes treatments and improvements in other complications, diabetic foot ulcers and amputations are at best stable and by some measures on the rise,\u201d Fayfman said. \u201cThey disproportionately impact people who are of the lowest means and suffer most from loss of mobility and inability to work.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMany patients with diabetic foot complications don\u2019t seek treatment because they don\u2019t have the means to go to the doctor for checkups. The new app tackles this problem head on and allows patients to examine their affected foot and send results to their doctor remotely. The doctor can then determine whether the ulcer poses a threat of amputation. 
Arriaga said she hopes to add an algorithm to the app that can detect when the condition of an ulcer has worsened.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cimg alt=\u0022Rosa Arriaga\u0022 height=\u0022567\u0022 src=\u0022https:\/\/www.cc.gatech.edu\/sites\/default\/files\/images\/general\/2023\/RosaADAGrant2.jpg\u0022 width=\u0022850\u0022 \/\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETop photo (from left to right): Maya Fayfman and Marcos Schechter from the Emory School of Medicine, Georgia Tech Interactive Computing Associate Professor Rosa Arriaga, and Gabriel Santamarina from Grady Memorial Hospital have collaborated to design software that will help improve patient care and experience for people with diabetic foot ulcers. Bottom photo (from left to right): Fayfman, Arriaga, and Schechter stand outside Grady Memorial Hospital in downtown Atlanta. Photos by Terence Rushin\/College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHow ulcers go unnoticed\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn addition to neglecting or not being able to access routine care, patients with diabetes often develop neuropathy, which can cause loss of feeling in the feet. If a patient suffers a wound on the bottom of their foot, neuropathy may cause them not to feel it, and the wound can expand and become infected.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cPeople with healthy feet who develop an injury in their foot will adjust how they walk so they don\u2019t apply as much pressure,\u201d Fayfman said. \u201cThat pain is helpful in promoting healing. In patients with diabetes, that loss of sensation makes it so they\u2019re not recognizing the pain, and their wounds get worse before being recognized.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchechter said increased awareness could go a long way toward not only preventing amputations but saving lives. 
Diabetic patients are already immuno-compromised, and amputation leads to immobility and a decreased ability to fight illnesses, increasing the risk of the problem becoming fatal. Published research shows that the mortality rate for people who lose a limb due to diabetes is 70 percent within five years.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cA lot of people, when they come to us, the ulcers are very severe, and the limb can\u2019t be saved,\u201d he said. \u201cIf people could monitor this with pictures at home, and they could share these pictures with providers, or the app could predict whether this ulcer needs care right now, maybe that can prevent people from coming in with advanced stages of the disease.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EA hub for communication\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile the ADA is helping to fund the development of the app, the team believes the grant is an overdue step toward raising awareness about the seriousness of diabetic ulcers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers conducted a focus group of patients with ulcers and amputations and learned that many had never heard of diabetic foot ulcers before being diagnosed with one.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThat to me is the saddest part,\u201d Arriaga said. \u201cSome people are saying they were never told about it. On the one hand, that\u2019s not comprehensive care, and on the other hand, the doctor must get to the biggest thing \u2014 let\u2019s make sure we work on your sugar if you can only work on one thing. It all goes back to a fragmented health system.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the app\u2019s goals is to bridge those gaps within the system by providing the opportunity to improve communication between patients and the doctors who treat them. 
Reducing redundancies in the system will lead to more affordable care while keeping patients and providers informed of the progression of ulcers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cI think an ecological approach to diabetes care is essential for optimal health and wellness,\u201d she said. \u201cIt is not enough to task one group of stakeholders with managing diabetes. Technology can connect the various stakeholders (i.e., patients, their caregivers, clinicians) so that diabetes foot care can be streamlined. It can also improve knowledge about foot care and alleviate the negative outcomes associated with foot ulcers.\u201d\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ESchool of Interactive Computing Associate Professor Rosa Arriaga and her inter-institutional peers are using funding from the American Diabetes Association to develop a \u003Cem\u003EDiabetic Ulcer Computational Sensing System\u003C\/em\u003E. Accessed via an interactive smartphone app, the system enables patients to perform self-exams, documenting current conditions, tracking any changes, and alerting caregivers. 
\u0026nbsp;\u0026nbsp;\u0026nbsp; \u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A research partnership between Georgia Tech, Emory, and Grady Health System is leveraging the power of AI to help avoid serious health complications."}],"uid":"32045","created_gmt":"2023-03-24 14:00:04","changed_gmt":"2023-04-06 16:19:18","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-03-14T00:00:00-04:00","iso_date":"2023-03-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"670467":{"id":"670467","type":"image","title":"_RosaADAGrant.jpeg","body":null,"created":"1680797802","gmt_created":"2023-04-06 16:16:42","changed":"1680797802","gmt_changed":"2023-04-06 16:16:42","alt":"School of Interactive Computing Associate Professor Rosa Arrriga poses with collaborators from Emory University School of Medicine","file":{"fid":"253328","name":"_RosaADAGrant.jpeg","image_path":"\/sites\/default\/files\/2023\/04\/06\/_RosaADAGrant.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/04\/06\/_RosaADAGrant.jpeg","mime":"image\/jpeg","size":58058,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/04\/06\/_RosaADAGrant.jpeg?itok=s8cvPnSH"}}},"media_ids":["670467"],"related_files":{"253117":{"fid":null,"name":"Georgia Tech\u0027s Rosa Arriaga with research collaborators","file_path":"\/sites\/default\/files\/2023\/03\/24\/%20RosaADAGrant.jpeg","file_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/03\/24\/%20RosaADAGrant.jpeg","mime":"image\/jpeg","size":58058,"description":null}},"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"138","name":"Biotechnology, Health, Bioengineering, Genetics"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[{"id":"2835","name":"ai"},{"id":"189406","name":"software 
design"},{"id":"192395","name":"Emory University; diabetic foot care"},{"id":"11178","name":"Rosa Arriaga"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer I\u003Cbr \/\u003E\r\n\u003Ca href=\u0022nathan.deen@cc.gatech.edu\u0022\u003Enathan.deen@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"664618":{"#nid":"664618","#data":{"type":"news","title":"Manufacturing, Finance Among Industries to Benefit from What\u0027s Next in AI for 2023","body":[{"value":"\u003Cp\u003EArtificial intelligence is already making headlines in the new year with the box office success of the movie\u0026nbsp;\u003Cem\u003EM3GAN\u003C\/em\u003E. Along with a TikTok dance craze and lots of laughs, the over-the-top horror movie\/dark comedy about an AI-powered robot that runs amok is also inspiring discussion about the growing presence and impact of artificial intelligence in everyday life.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFrom the movie\u0026nbsp;house to the warehouse\u0026nbsp;to your house, AI seems like it\u0026#39;s everywhere. That\u0026#39;s because with a steady stream of new research and innovative applications reaching into nearly every industry and business sector, it\u0026nbsp;is everywhere.\u0026nbsp;Nevertheless, AI still holds enormous potential as the field continues to evolve.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo get a sense of what this evolution could look like in 2023, we turned to a small group of \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/phd\u0022\u003EPh.D. 
students from the College of Computing\u003C\/a\u003E community that are currently pushing foundational and applied AI research forward in a broad spectrum of disciplines and fields.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe students shared their opinions on where AI might be headed in the new year, what some of the big tech stories could be, and why ethics in AI are so critically important.\u003C\/p\u003E\r\n\r\n\u003Ch5\u003EWhere will artificial intelligence and machine learning have the most impact in 2023?\u003C\/h5\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Artificial intelligence and machine learning\u0026nbsp;will continue to have a huge impact on manufacturing and warehouses with labor shortages and worker turnover continuing to be a concern as more manufacturing and operations jobs are brought back to the United States from overseas. Additionally, AI\/ML will continue to help ensure that manufacturing and warehouse facilities are operating as efficiently as possible from energy and material savings to worker safety and parts quality.\u0026quot; \u0026ndash;\u0026nbsp;\u003Cem\u003E\u003Ca href=\u0022https:\/\/www.researchgate.net\/profile\/Zoe-Klesmith\u0022\u003EZoe Klesmith Alexander\u003C\/a\u003E, computational science and engineering Ph.D. student\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Right now, deep learning is on a trajectory to transform\u0026nbsp;the creation space. Artwork and images, videos, data representation and storytelling, co-authoring, and summarizing documents... It\u0026#39;s gotten really good.\u0026quot; \u0026ndash;\u0026nbsp;\u003Cem\u003E\u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/benhoov\/\u0022\u003EBen Hoover\u003C\/a\u003E, machine learning Ph.D. student\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;I think machine learning and AI will keep playing a huge role\u0026nbsp;in how the world and society will be shaped over the next decade in many ways. 
It will make many other fields more efficient through ML and AI tools we are developing. In 2023, I think ML and AI will have the most impact on social media platforms, helping reduce hate speech, rumor spread, etc.\u0026quot; \u0026ndash;\u0026nbsp;\u003Cem\u003E\u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/agam-shah\/\u0022\u003EAgam A. Shah\u003C\/a\u003E, machine learning Ph.D. student\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;One of the big impacts this year\u0026nbsp;may be driverless cars\u0026nbsp;being in your neighborhood. Otherwise, it will be a slow steady drip of GPT3 and other OpenAI models suffusing all applications, making programmers much faster, making journalists faster, making academic articles and lit reviews much faster. We\u0026#39;re at a 4th grader level, and I hope by the end of this year we\u0026#39;ll be at the 6th grader level. Also, indoor turn-by-turn navigation will be everywhere in 2023 as well.\u0026quot; \u0026ndash;\u0026nbsp;\u003Cem\u003E\u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/brandonkeithbiggs\/\u0022\u003EBrandon Biggs\u003C\/a\u003E, human-centered computing Ph.D. student\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Ch5\u003EWhat will be some of the big tech stories in 2023?\u003C\/h5\u003E\r\n\r\n\u003Cp\u003E\u0026quot;ChatGPT and the GitHub Copilot lawsuit\u0026nbsp;will keep making it into the news and cause more controversies. In general, AI ethics will become more important and get more focus as the technology keeps advancing.\u0026quot; \u0026ndash; \u003Ca href=\u0022https:\/\/fab1ano.github.io\/\u0022\u003EFabian Fleischer\u003C\/a\u003E, cybersecurity and privacy Ph.D. student\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Driverless car fleets will be coming\u0026nbsp;to a city near you.\u0026nbsp;A new battery technology will allow phones to keep their charge for a week. 
Meta realizes virtual reality (VR) head-mounted displays are for a limited market and uses headphones and phones to provide VR experiences.\u0026quot; \u0026ndash; Brandon Biggs\u003C\/p\u003E\r\n\r\n\u003Ch5\u003EWhat\u0026rsquo;s an issue or industry that you think could benefit from a computing solution?\u003C\/h5\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Our reinterpretation of modern deep learning\u0026nbsp;as energy-based associative memories\u0026nbsp;has the potential to transform any industry that relies on foundation models -- giant architectures that require models that are \u0026quot;self-supervised\u0026quot; (learn on their own from data).\u0026quot; \u0026ndash; Ben Hoover\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Inclusion in everything.\u0026nbsp;Over 90 percent of websites on the internet have elements that are inaccessible to 25 percent of the world\u0026#39;s population who have disabilities. Inclusive design will be the most important area where technology can be redesigned and created to have multiple sensory modalities and be properly programmed.\u0026quot; \u0026ndash; Brandon Biggs\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Currently, financial markets are far from efficient\u0026nbsp;because they do not fully incorporate information available in large unstructured text data. With the latest development in natural language processing techniques, we can better understand the economy and therefore price financial markets better.\u0026quot; \u0026ndash; Agam A. Shah\u003C\/p\u003E\r\n\r\n\u003Ch5\u003EThere\u0026rsquo;s been increasing recognition of the vital role ethics should play in artificial intelligence. How do you see this issue evolving in the next year?\u003C\/h5\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Specifically in my research, I think explainable AI (XAI) is very important, especially if non-experts in ML will be using black-box ML solutions in a factory. 
It will be important for humans to trust and understand the models, especially if the models are being used to monitor quality on a safety-critical part.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Additionally, using XAI for human interaction with robots that utilize deep learning to make decisions will be increasingly important as technologies like collaborative robots (cobots) are integrated into factories. I think in my area of research that it is always important to use automation to aid humans in jobs that are safe for humans to do and not to replace them.\u0026quot; \u0026ndash; Zoe Klesmith Alexander\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Big data is pretty much at its peak. Deep data, where your Alexa knows everything about you, or your phone knows everything about you, and rather than saying \u0026#39;other people who watched this show liked this show,\u0026#39; it\u0026#39;s going to say, \u0026#39;I know you liked these shows, I think you\u0026#39;ll like this show because of these reasons, one of which is because other people who liked all these other shows liked this show.\u0026#39; The ethical element will be how much of this data should these models use, and are people going to build a personal dataset that they can share with other apps, or is each app going to need to build their own dataset? The ethical question is who owns this data.\u0026quot; \u0026ndash; Brandon Biggs\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;I think ethics will become more and more important going forward. We are making huge breakthroughs in machine learning and artificial intelligence, but the systems we are creating are producing racist, sexist, and stereotypical results. For example, a recent system, Galactica, developed by Facebook (Meta) is powerful. It can produce research articles by simply providing it with the title. It comes with some serious ethical concerns; in some cases, it produces racist, sexist text. 
So, as we will keep developing better models and making success in parallel, we need to always keep in mind the ethical implications of these models.\u0026quot; \u0026ndash; Agam A. Shah\u003C\/p\u003E\r\n\r\n\u003Ch5\u003EWhat research are you working on that you think people should know about or will have impact in 2023?\u003C\/h5\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Part of my research focuses on data-driven modeling of additive manufacturing processes\u0026nbsp;to better control dimensional quality of the final part. Another part of my research focuses on detecting anomalies in real-time using computer vision and machine learning for both warehouses and manufacturing processes.\u0026quot; \u0026ndash; Zoe Klesmith Alexander\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Right now, deep learning is built on feed-forward mathematical operations\u0026nbsp;that have little resemblance to the brain. I am working on a physics inspired approach to deep learning built around recurrent networks and energy functions. These architectures have the same mathematical foundation as the famous, biologically plausible Hopfield Network.\u0026quot; \u0026ndash; Ben Hoover\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;I am currently working on two projects which, in my opinion, will have an impact in 2023. In one project, we are measuring the exposure of public firms to ongoing inflation. We are also understanding how inflation affects different firms differently based on the pricing power of the firm. As inflation is the highest in the last 40 years, our study is highly relevant now and in the coming years till we get inflation back in control.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;The second work is related to the first work in some ways. As inflation is rising, to control the inflation Federal Reserve Bank is tightening its monetary policy. 
In our second work, we are measuring the stance of monetary policy (measuring hawkish vs dovish stance) of the Fed using state-of-the-art NLP models to see its impact in various financial markets (Treasury market, Stock market, Crypto market, etc.)\u0026quot; \u0026ndash; Agam A. Shah\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A group of Ph.D. students from the GT Computing community share their opinions on what\u0027s next for artificial intelligence in the new year."}],"uid":"32045","created_gmt":"2023-01-10 19:56:15","changed_gmt":"2023-01-11 13:19:57","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-01-10T00:00:00-05:00","iso_date":"2023-01-10T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"664620":{"id":"664620","type":"image","title":"ATL Skyline Reflected in Binary Bridge","body":null,"created":"1673381152","gmt_created":"2023-01-10 20:05:52","changed":"1673381152","gmt_changed":"2023-01-10 20:05:52","alt":"ATL skyline reflected in Binary Bridge","file":{"fid":"251459","name":"ATL Skyline Reflection-Binary Bridge.jpeg","image_path":"\/sites\/default\/files\/images\/ATL%20Skyline%20Reflection-Binary%20Bridge.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ATL%20Skyline%20Reflection-Binary%20Bridge.jpeg","mime":"image\/jpeg","size":50853,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ATL%20Skyline%20Reflection-Binary%20Bridge.jpeg?itok=V4IM1dF8"}}},"media_ids":["664620"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1188","name":"Research 
Horizons"}],"categories":[],"keywords":[{"id":"191885","name":"M3GAN"},{"id":"2835","name":"ai"},{"id":"46361","name":"GT computing"},{"id":"191886","name":"What\u0027s Next for 2023"},{"id":"122801","name":"ML"},{"id":"2556","name":"artificial intelligence"},{"id":"9167","name":"machine learning"},{"id":"180344","name":"nlp"},{"id":"23981","name":"natural language processing"},{"id":"109581","name":"deep learning"},{"id":"176999","name":"neural networks"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39461","name":"Manufacturing, Trade, and Logistics"},{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBen Snedeker, Comms. Mgr. II\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=What\u0027s%20Next%20in%20AI%20for%202023\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"663401":{"#nid":"663401","#data":{"type":"news","title":"Computing Approach May Save At-Risk Carnival Costume Making Tradition","body":[{"value":"\u003Cp\u003ECostumes in the annual Trinidad and Tobago Carnival often inspire awe because of their extravagance, flamboyancy, and \u0026mdash; for some dancing sculptures \u0026mdash; their size.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESome costumes are so large and expansive, it makes the person wearing them appear as if they are carrying an unbelievably heavy load on their shoulders. 
Built on techniques in the traditional craft of wire-bending, these costumes and dancing sculptures are dynamic and performative and decorated with painted textiles, feathers, and beads.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWire-bending has been a traditional method of constructing costumes for the Trinidad and Tobago Carnival since the 1930s, but Vernelle A.A. Noel, a joint professor with the School of Interactive Computing in the College of Computing and School of Architecture in the College of Design, has been conducting research on this at-risk practice.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThree masters of the craft have passed away since Noel began her research in 2012. Most recently, Albert Bailey, one of the masters who assisted Noel with her research, passed away in September. Those who are still alive are getting older.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The craft is dying because of aging practitioners, the absence of a system to pass this knowledge on, and more,\u0026rdquo; Noel said. \u0026ldquo;There is currently no system of pedagogy for it to be passed on, so this was the first problem I addressed in my research. How do we document and make explicit this tacit knowledge in wire benders so that it can be shared and taught to others?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This material practice is a language. 
For the continued telling of histories and cultures, these languages, which are ways of understanding and describing the world, should not disappear.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENoel believes it\u0026rsquo;s possible to revive the craft through computational approaches.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENoel will be hosting her first physical exhibition,\u0026nbsp;\u003Ca href=\u0022https:\/\/calendar.gatech.edu\/event\/2022\/11\/16\/exhibition-design-and-making-trinidad-carnival\u0022\u003E\u003Cem\u003EDesign and Making in the Trinidad Carnival: Histories, Re-imaginations, and Speculations of Computational Design Futures\u003C\/em\u003E\u003C\/a\u003E, from Nov. 17 to Feb. 28, 2023, at the Price Gilbert Memorial Library at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe exhibition is funded by the Graham Foundation for Advanced Studies in the Fine Arts and showcases wire-bending through traditional and technological forms of making.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Specifically, I\u0026rsquo;m looking at wire-bending and how we can rethink craft, computation, and computers,\u0026rdquo; Noel said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENoel, who was born in Trinidad and Tobago, began her research in wire-bending during her graduate studies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I grew up looking at Carnival,\u0026rdquo; she said. \u0026ldquo;I always knew it was deep and rich, but during my graduate studies, I started to look at the scholarly side of Carnival and realized there was a gap in scholarship in terms of design,\u0026rdquo; she said. \u0026ldquo;I wanted to understand, unpack, and reveal what was there.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;At first, I focused more on the design and fabrication side of things, but it became much more than that. I\u0026rsquo;m a constant observer of how cultures change. 
The question started with an aesthetic change that I noticed that was different from the aesthetics of the past. I noticed that the aesthetics of Carnival were trending toward bikinis, beads, and feathers, and I wanted to know why. My hypothesis was that it was a design problem among the people, processes, knowledge, tools, methods, economies, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWire-bending has traditionally been a male-dominated practice, but Noel believes integrating computer technologies in the craft might make it more accessible to children, women, and those with physical limitations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs she designed and produced her first exhibition, Noel said she had to think about what she wanted spectators to walk away with after viewing it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Craft practices can help us rethink technology, and technology can help us rethink craft practices,\u0026rdquo; she said. \u0026ldquo;The work also gives voice to the contributions of cultures and people who are often excluded from discourses in computation and technology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;m looking forward to hearing how the work is received. I want people to feel the joy I felt curating, designing, and making it. I want them to be curious, to think across worlds and disciplines. 
I want them to acknowledge and appreciate this history.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A Tech professor is exploring new ways of passing along traditional skills from her home country."}],"uid":"32045","created_gmt":"2022-11-22 16:09:46","changed_gmt":"2022-11-22 16:09:46","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-11-22T00:00:00-05:00","iso_date":"2022-11-22T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"663399":{"id":"663399","type":"image","title":"Trinidad Carnival ","body":null,"created":"1669132965","gmt_created":"2022-11-22 16:02:45","changed":"1669132965","gmt_changed":"2022-11-22 16:02:45","alt":"Large, purple feathered Trinidad carnival costume","file":{"fid":"251115","name":"trinidad_carnivale costume 2.jpg","image_path":"\/sites\/default\/files\/images\/trinidad_carnivale%20costume%202.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/trinidad_carnivale%20costume%202.jpg","mime":"image\/jpeg","size":414158,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/trinidad_carnivale%20costume%202.jpg?itok=7WN50ROu"}}},"media_ids":["663399"],"groups":[{"id":"66442","name":"MS HCI"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"10750","name":"carnival"},{"id":"191681","name":"Vernelle Noel"},{"id":"191682","name":"wire bending"},{"id":"208","name":"computing"},{"id":"823","name":"design"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Comms. 
Officer\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:nathan.deen@cc.gatech.edu?subject=Carnival\u0022\u003Enathan.deen@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["nathan.deen@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"663398":{"#nid":"663398","#data":{"type":"news","title":"Research to Help Neurodiverse People, Others Leads the Way at Social Computing Conference","body":[{"value":"\u003Cp\u003EFaculty and students from the School of Interactive Computing are working to benefit the lives of underserved groups and communities such as autistic employees and visually impaired social media users.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThose are just two examples of the Georgia Tech-led research presented at the 25th Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), which wraps up today. In all, 19 papers authored or co-authored by Georgia Tech faculty and students were presented at the virtual conference, which focuses on technologies impacting groups, organizations, communities, and networks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJennifer Kim, an assistant professor in the School of Interactive Computing, is researching how technology can make the work environment more inclusive of neurodiverse people, while Stanley Cantrell, a Ph.D. student advised by Interactive Computing and School of Psychology professor Bruce Walker, is exploring how Facebook can be more accessible to the visually impaired.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKim\u0026rsquo;s paper,\u0026nbsp;\u003Cem\u003EThe Workplace Playbook VR\u003C\/em\u003E, is based on a study she conducted in South Korea with researchers from Seoul National University and Hanyang University. 
The study looks at how virtual reality can foster a more inclusive environment for neurodiverse employees.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKim and her team designed a virtual reality program that provides data to families of neurodiverse workers to give them an idea of what they may be struggling with in the workplace.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Families of autistic individuals can\u0026rsquo;t go to the workplace with them, so they really didn\u0026rsquo;t know what struggles they were going through, but by seeing how they do with the virtual reality and the available data, this made an opportunity for parents and therapists to understand this individual and have more empathetic communication with them,\u0026rdquo; Kim said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut the study took an unexpected turn when the researchers began showing the VR program to the coworkers of the neurodiverse employees.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;What we didn\u0026rsquo;t expect was this case of being able to use this virtual reality for neurotypical coworkers,\u0026rdquo; Kim said. \u0026ldquo;(The neurodiverse individuals) liked that sharing this data can open up a conversation about how they are different from their neurotypical co-workers and how those neurotypical coworkers should change their behaviors to better interact with neurodiverse people.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKim said that adding the goal of modifying the behavior of coworkers helped the paper stand apart from previous research projects.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We shouldn\u0026rsquo;t just focus on neurodiverse people,\u0026rdquo; she said. 
\u0026ldquo;There are a lot of technologies for neurodiverse people, but there isn\u0026rsquo;t much research on how we can change behaviors of neurotypical people to better interact and understand the perspective of neurodiverse people.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKim said the companies studied in her research have already reported a noticeable difference in how much more comfortable neurodiverse employees feel in their work environments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Managers have told us it\u0026rsquo;s going to be really helpful for new neurotypical employees to better understand what neurodiverse employees like and what are their behavioral characteristics so they can understand by their first day of work how to communicate with them and what to expect,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003ERedefining accessibility\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EStanley Cantrell defines accessibility more broadly than it is usually understood. In his view, accessibility features in social media sites like Facebook must go beyond simply allowing users who are visually impaired to functionally use the website. These users deserve an equitable experience that rivals that of their sighted counterparts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We know there are about 285 million people worldwide living with some form of visual impairment,\u0026rdquo; Cantrell said. \u0026ldquo;They want to do the same things that sighted individuals do on Facebook, but the technology doesn\u0026rsquo;t facilitate rich engagement for individuals living with disabilities like visual impairment.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn his paper,\u0026nbsp;\u003Cem\u003ESonification of Emotion in Social Media: Affect and Accessibility in Facebook Reactions\u003C\/em\u003E, Cantrell explores making Facebook Reactions more emotionally engaging for visually impaired users. 
Working with Walker, who is the director of the\u0026nbsp;\u003Ca href=\u0022http:\/\/sonify.psych.gatech.edu\/\u0022\u003ESonification Lab\u003C\/a\u003E at Georgia Tech, Cantrell and his collaborators produced 48 different sounds that can be associated with Facebook Reactions, such as Like, Love, Sad, and Angry.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECantrell has always had an interest in universal design, also known as inclusive design, but it blossomed during his internship with Facebook. He said although Facebook currently meets basic accessibility standards, he wanted to reimagine the experience for visually impaired users.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Facebook\u0026rsquo;s screen reader accommodations check the box of making this accessible, but does it make it rich and engaging and delightful? What ways can we use sound to transform this visual information, but also in a way that\u0026rsquo;s engaging and doesn\u0026rsquo;t disrupt the experience,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECantrell recruited 75 participants for his study, including 11 visually impaired subjects, to evaluate each of the 48 sonifications that he and his collaborators designed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Before we could begin designing sonifications, we had to first understand how sighted people interpret each Facebook Reaction,\u0026rdquo; he said. \u0026ldquo;Sometimes Reactions can have different meanings based on the context.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We did some legwork prior to the study to see the different ways that Facebook Reactions could be used. 
For example, we found that the Haha Reaction can be used to laugh at something funny, or it can be used to bully someone.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELooking beyond the scope of his paper, Cantrell said he hopes to make sound-enabled emojis a feature for text messaging, and he hopes it will be something that both sighted and visually impaired users can enjoy.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We knew we didn\u0026rsquo;t want this to be just for blind people,\u0026rdquo; he said. \u0026ldquo;We wanted this to be an accessibility feature that could be useful to anyone.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Tech researchers had 19 accepted papers at this month\u0027s virtual Conference on Computer-Supported Cooperative Work and Social Computing."}],"uid":"32045","created_gmt":"2022-11-22 16:00:00","changed_gmt":"2022-11-22 16:00:00","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-11-22T00:00:00-05:00","iso_date":"2022-11-22T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"663397":{"id":"663397","type":"image","title":"Workplace VR - CSCW","body":null,"created":"1669132403","gmt_created":"2022-11-22 15:53:23","changed":"1669132403","gmt_changed":"2022-11-22 15:53:23","alt":"Graphic from the Workplace Playbook VR, research from School of Interactive Computing\u0027s Jennifer Kim","file":{"fid":"251114","name":"workplaceVR.png","image_path":"\/sites\/default\/files\/images\/workplaceVR.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/workplaceVR.png","mime":"image\/png","size":442616,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/workplaceVR.png?itok=XFexNQYM"}}},"media_ids":["663397"],"groups":[{"id":"66442","name":"MS HCI"},{"id":"50876","name":"School of 
Interactive Computing"}],"categories":[{"id":"135","name":"Research"}],"keywords":[{"id":"167731","name":"social computing"},{"id":"177254","name":"GTComputing"},{"id":"170772","name":"Sonification"},{"id":"180676","name":"cscw"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Comms. Officer\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:nathan.deen@cc.gatech.edu?subject=CSCW\u0022\u003Enathan.deen@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["nathan.deen@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"662312":{"#nid":"662312","#data":{"type":"news","title":"Research Paves Way for Home Robot that Can Tidy a House on Its Own","body":[{"value":"\u003Cp\u003EStruggling with keeping your home clean and organized? You may soon have an extra set of hands to help around the house.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EImagine a home robot that can keep a house tidy without being given any commands from its owner. 
Well, the next step in home robotics is here \u0026mdash; at least virtually.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA group of doctoral and master\u0026rsquo;s students from Georgia Tech\u0026#39;s School of Interactive Computing, in collaboration with researchers from the University of Toronto, believe they have created a benchmark for a home robot that can keep an entire house tidy.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn their paper,\u0026nbsp;\u003Cem\u003EHousekeep: Tidying Virtual Households Using Commonsense Reasoning\u003C\/em\u003E, Georgia Tech doctoral candidates \u003Cstrong\u003EHarsh\u003C\/strong\u003E \u003Cstrong\u003EAgrawal\u003C\/strong\u003E and \u003Cstrong\u003EAndrew\u003C\/strong\u003E \u003Cstrong\u003ESzot\u003C\/strong\u003E, master\u0026rsquo;s students \u003Cstrong\u003EArun\u003C\/strong\u003E \u003Cstrong\u003ERamachandran\u003C\/strong\u003E and \u003Cstrong\u003ESriram\u003C\/strong\u003E \u003Cstrong\u003EYenamandra\u003C\/strong\u003E, and \u003Cstrong\u003EYash\u003C\/strong\u003E \u003Cstrong\u003EKant\u003C\/strong\u003E, a former research visitor at Georgia Tech who is now a doctoral candidate at Toronto, set out to prove that an embodied artificial intelligence (AI) could conduct simple housekeeping tasks without explicit instructions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing advanced natural language processing and machine learning techniques, the students have successfully simulated the robot exploring a virtual household, identifying misplaced items, and putting them in their correct place.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKant said most robots in embodied AI are given specific instructions for different functions, but the students wanted to be sure the robot could achieve task completion without instructions in simulation before moving on to real-world testing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the actual world, things are difficult,\u0026rdquo; Kant said. 
\u0026ldquo;Training robots in the real world \u0026mdash; they move around slowly; they will bump into things and people. So, we do it in simulation because you can run things at a faster speed, and you can have multiple virtual robots running.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDhruv\u003C\/strong\u003E \u003Cstrong\u003EBatra\u003C\/strong\u003E, an associate professor in the School of Interactive Computing and a research scientist with Meta AI, and \u003Cstrong\u003EIgor\u003C\/strong\u003E \u003Cstrong\u003EGilitschenski\u003C\/strong\u003E, an assistant professor of mathematical and computational sciences at Toronto, served as advisors on the paper, which was accepted to the 2022 European Conference on Computer Vision, Oct. 23-27 in Tel Aviv, Israel.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Ca href=\u0022https:\/\/sites.gatech.edu\/ml-eccv-2022\/\u0022\u003E[FULL COVERAGE: Georgia Tech at ECCV 2022]\u003C\/a\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EIn the virtual simulation, the robot spawned in a random section of the house and immediately began looking for misplaced objects. It correctly identified a misplaced lunchbox in a kid\u0026rsquo;s bedroom and moved it to the kitchen. It also located some toys left in the bathroom and moved them to the kid\u0026rsquo;s bedroom.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAgrawal said the goal of the project from the beginning was to have the robot mimic commonsense reasoning that any human would have in tidying a house. Through surveys, the team collected rearrangement preferences for 1,799 objects in 585 placements in 105 rooms.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We collected human preferences data,\u0026rdquo; Agrawal said. 
\u0026ldquo;We asked people where they like to keep certain objects, and we wanted robots to have a similar notion of cleanliness in a tidy home.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;You don\u0026rsquo;t provide instructions when you ask the kids to clean up the house. It\u0026rsquo;s commonsense. You know certain things go in certain places. You know Lego blocks don\u0026rsquo;t belong in the bathroom. We thought it\u0026rsquo;d be cool if it could clean up the house without specifying instructions. As humans, we can do a bunch of these tasks without being given specific instructions.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECreating the simulation posed several challenges. These included getting the robot to reason about the correct placement of new objects, getting the robot to adapt to new environments, and getting it to work through choices when there are multiple correct locations where a misplaced object could go.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESzot said what attracted him to the project was the idea of creating a robot that didn\u0026rsquo;t need to be told where to put something, whereas in his previous work, that\u0026rsquo;s exactly what he had to do.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you wanted it to do something like clean up the house, you would have to tell it, \u0026lsquo;Hey, robot, move that object to there,\u0026rsquo;\u0026rdquo; Szot said. \u0026ldquo;It\u0026rsquo;s very tedious to specify that. We took the first step of saying let\u0026rsquo;s give the robot some commonsense reasoning. It might not be specific to a person; it might just be capturing more generally what people think, but it captures a lot of important situations. 
It\u0026rsquo;s able to handle most of those situations in which people agree the object belongs there or the object doesn\u0026rsquo;t belong there.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing text from the internet, the team informed the AI that drives the robot by fine-tuning a large language model based on human preferences.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The way we approached solving this problem is we took this external source of knowledge from text on the internet and these language tasks, and so from natural language processing we took that information and used it to give our robot some idea of this common sense,\u0026rdquo; Szot said. \u0026ldquo;It wasn\u0026rsquo;t purely from the house it learned how to do these things. From articles or texts online, it was able to distill this commonsense reasoning ability and then apply it.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKant said using language models allows the AI to distinguish between objects and whether those objects should go together. He added that he thinks that the language model used to train the AI can be fine-tuned by extracting content from web articles related to housekeeping.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Language models have shown very promising results in trying to extract semantics, like whether two things \u0026mdash; say an apple and fruit basket \u0026mdash; go together in a household,\u0026rdquo; Kant said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team is just at the tip of the iceberg, and the virtual simulation serves only as a proof of concept. 
It\u0026rsquo;s a long-term project that will continue to explore new possibilities, which include creating a robot that can tidy a household according to specific user preferences.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut the successful use of NLP methods to inform a novel AI could break new ground in the creation of systems in which organization is the focus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s a benchmark for the rest of the community to use,\u0026rdquo; Szot said. \u0026ldquo;Hopefully this is something for people to gather behind to focus on this very realistic task setting of cleaning the house. We showed that you can create these embodied agents that can use this external knowledge and learn commonsense and use it in embodied robotic settings.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I think the data that we collected is pretty significant in the sense that we now have a few hundred annotations for where each object should go in houses and where they\u0026rsquo;re likely to be found in untidy houses, and I think that information can guide a lot of systems,\u0026rdquo; Agrawal added. 
\u0026ldquo;I feel like we are starting to now see people saying all these annotations can be used for building their own systems and benchmarks.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A group of doctoral and master\u2019s students from Georgia Tech\u0027s School of Interactive Computing believe they have created a benchmark for a home robot that can keep an entire house tidy."}],"uid":"32045","created_gmt":"2022-10-19 15:09:13","changed_gmt":"2022-10-19 20:02:41","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-10-19T00:00:00-04:00","iso_date":"2022-10-19T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"662342":{"id":"662342","type":"image","title":"Housekeep","body":null,"created":"1666204859","gmt_created":"2022-10-19 18:40:59","changed":"1666204859","gmt_changed":"2022-10-19 18:40:59","alt":"Housekeep is a benchmark to evaluate commonsense reasoning in the home for embodied AI. 
I","file":{"fid":"250840","name":"housekeeping-algorithm.jpeg","image_path":"\/sites\/default\/files\/images\/housekeeping-algorithm.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/housekeeping-algorithm.jpeg","mime":"image\/jpeg","size":88451,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/housekeeping-algorithm.jpeg?itok=N8VKYqsO"}},"662343":{"id":"662343","type":"image","title":"Housekeep research team collage","body":null,"created":"1666204942","gmt_created":"2022-10-19 18:42:22","changed":"1666204942","gmt_changed":"2022-10-19 18:42:22","alt":"Housekeep research team collage","file":{"fid":"250841","name":"authors_housekeeping-bot-copy_v2.2.jpg","image_path":"\/sites\/default\/files\/images\/authors_housekeeping-bot-copy_v2.2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/authors_housekeeping-bot-copy_v2.2.jpg","mime":"image\/jpeg","size":227143,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/authors_housekeeping-bot-copy_v2.2.jpg?itok=6pxpNCGZ"}}},"media_ids":["662342","662343"],"related_links":[{"url":"https:\/\/sites.gatech.edu\/ml-eccv-2022\/","title":"Georgia Tech at ECCV 2022"}],"groups":[{"id":"576481","name":"ML@GT"},{"id":"66442","name":"MS HCI"},{"id":"50876","name":"School of Interactive Computing"},{"id":"434391","name":"ECE M.S. Thesis Defenses"},{"id":"434381","name":"ECE Ph.D. Dissertation Defenses"},{"id":"434371","name":"ECE Ph.D. 
Proposal Oral Exams"},{"id":"1188","name":"Research Horizons"}],"categories":[],"keywords":[{"id":"1356","name":"robot"},{"id":"191487","name":"eccv"},{"id":"191488","name":"tidy"},{"id":"2483","name":"interactive computing"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"661234":{"#nid":"661234","#data":{"type":"news","title":"Robotics Professor Seeks to Revolutionize Heart Surgery Through NIH Grant","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EMatthew\u003C\/strong\u003E \u003Cstrong\u003EGombolay\u003C\/strong\u003E has always had a heart for the healthcare industry.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen he was 20 years old, Gombolay was diagnosed with supraventricular tachycardia (SVT) and had to have heart surgery to avoid serious health complications that he could have faced anytime. SVT causes an unusually fast or rapid heartbeat that affects the heart\u0026rsquo;s upper chambers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGombolay said he first knew something was wrong when he developed a rapid heartbeat when he was 14, but the condition was misdiagnosed as a symptom of puberty.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe knows the surgery saved his life.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;They gave me my life back,\u0026rdquo; said Gombolay, assistant professor and Director of the \u003Ca href=\u0022https:\/\/core-robotics.gatech.edu\/\u0022\u003ECORE Robotics lab\u003C\/a\u003E at the School of Interactive Computing. 
\u0026ldquo;I\u0026rsquo;m immensely grateful for the field of cardiology.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENow he\u0026rsquo;s looking to revolutionize the way open-heart surgery is performed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGombolay received the prestigious National Institutes of Health R01 grant, which will fund a three-year study of how robotics can improve open-heart surgery and minimize its risks. Gombolay has partnered with\u0026nbsp;\u003Cstrong\u003ERoger\u003C\/strong\u003E\u0026nbsp;\u003Cstrong\u003EDias\u003C\/strong\u003E, assistant professor of emergency medicine at Harvard Medical School, and\u0026nbsp;\u003Cstrong\u003EMarco\u003C\/strong\u003E\u0026nbsp;\u003Cstrong\u003EZenati\u003C\/strong\u003E, professor of surgery at Harvard Medical School and chief of cardiothoracic surgery for the U.S. Department of Veterans Affairs, to conduct the study.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDias is also the Director of Research \u0026amp; Innovation at the STRATUS Center for Medical Simulation at the Brigham and Women\u0026rsquo;s Hospital in Boston, which specializes in the research of human performance across high-risk clinical settings. The STRATUS lab is also partnering with NASA to develop training solutions on how to perform medical procedures in space.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDias said the use of robotics and data collection to stymie human error in medicine immediately stood out to him as something he wanted the STRATUS lab to be involved in.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The reality is, human errors happen all the time,\u0026rdquo; Dias said. \u0026ldquo;Some studies estimate that human error is one of the leading causes of death in the United States. Some of those errors are unavoidable, but a considerable number of human errors in the operating room are avoidable. 
The support system we are creating is really going in that direction of trying to make surgery safer, by helping surgical teams during complex decision making.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDias wanted to start with one of the most complex procedures \u0026mdash; heart surgery. And one of the most challenging aspects of heart surgery is the management of a heart-lung bypass machine by a perfusionist.\u0026nbsp;During most heart surgeries, the surgeon operates on a heart that isn\u0026rsquo;t beating and has no blood flow, while a perfusionist uses the heart-lung bypass machine to temporarily serve as the heart and the lungs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPerfusionists oversee a complex and cognitively demanding procedure. Like any human, they are subject to fatigue, stress, and distractions, all of which could compromise patient safety. Right now, it\u0026rsquo;s a job in which knowledge of best practices comes only through hands-on experience.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Cardiac surgery may have between seven to 11 to 12 different people in the room and each one with a different function, different role,\u0026rdquo; Dias said. \u0026ldquo;The complexity of cardiac surgery really brought our attention to this type of research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The perfusionist has a very important role in controlling and managing the heart-lung machine. That\u0026rsquo;s why we selected the perfusionist \u0026mdash; to understand their performance but also to support their performance.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGombolay designed a perfusionist-monitoring robot that can help track which specific moments of the procedures cause the most stress to the perfusionist as well as identify any distractions that may be affecting performance. 
The goal, Gombolay said, is not to replace perfusionists, but to support them by making as much relevant information as possible available to them and to create a gold standard across the medical industry.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If a robot learns to do better than the status quo, the robot could learn to provide helpful explanations to give the surgical team insights into its decision making,\u0026rdquo; Gombolay said. \u0026ldquo;What can the machine teach us about the right metrics and how can that help predict outcomes? Maybe the machine can teach us what matters in order to improve the standardization of care.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERithy\u003C\/strong\u003E\u0026nbsp;\u003Cstrong\u003ESrey\u003C\/strong\u003E, the Chief Perfusionist for the U.S. Department of Veterans Affairs who works with Zenati, will be one of the main perfusionists studied by Gombolay\u0026rsquo;s machine. In his 21 years of experience as a perfusionist, Srey has never had a fatal incident, but the stress and anxiety from the possibility of something going wrong is always there.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There\u0026rsquo;s always a scare factor,\u0026rdquo; Srey said. \u0026ldquo;Your adrenaline will kick in; your heart is basically at the bottom of your stomach. Your fear for that patient\u0026rsquo;s life; that\u0026rsquo;s what keeps you focused.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA lot of that anxiety comes from the unknown variables involved with each procedure. Patients may have some similarities, but each one is unique, Srey said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Some patients have higher red cell counts than others and some are sicker than others,\u0026rdquo; he said. \u0026ldquo;The way we flow is according to what the patient\u0026rsquo;s blood volume is and what the patient needs. 
There\u0026rsquo;s an average consideration to what the flow should be, but then you have someone who\u0026rsquo;s diabetic or someone with a kidney issue. How are you going to protect them?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile a machine-learning system will directly benefit younger perfusionists in their training and early careers, Srey said more experienced perfusionists will welcome it with open arms.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Too much experience under our belts, we get lackadaisical,\u0026rdquo; Srey said. \u0026ldquo;You become too relaxed and too jaded in the system to the point a computer system could help you. We don\u0026rsquo;t want to slack off.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDias began using Gombolay\u0026rsquo;s machine to gather data on live procedures on Sept. 1. He said by the end of the project he will have studied more than 100 procedures, which is about 400 hours of collected data. At the beginning of the second year, Dias will begin sending that data to Gombolay, who will mine it to create and test algorithms that can predict a perfusionist\u0026rsquo;s decisions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy the third year, Gombolay and his team will have created an interface prototype that can be used in simulated procedures to help inform people training to become perfusionists. If those trials are successful, the machine could be ready to enter the medical industry and become a standard tool used in live heart procedures.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe biggest challenge along the way will be building trust with potential patients, who may be skeptical about an AI\u0026rsquo;s role in their medical treatment. Dias said one of the biggest reasons he wanted to partner with Gombolay is his ability to solve that problem.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is one of his areas of expertise \u0026mdash; to work on the trustworthiness of AI systems,\u0026rdquo; Dias said. 
\u0026ldquo;That\u0026rsquo;s something we plan to address in this R01. One is the trustworthiness of AI systems, and the other is the explanation of AI systems. They\u0026rsquo;re not going to trust because they want to know why. That\u0026rsquo;s what we call an Artificial Intelligence \u0026lsquo;black box\u0026rsquo; problem. Sometimes your algorithm is 100 percent accurate, but you cannot explain why the algorithm reached that decision.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGombolay said he\u0026rsquo;s up for the challenge. He recognizes the significance of being selected for an R01, and it\u0026rsquo;s not something he\u0026rsquo;s going to take for granted.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to the National Institutes of Health website, an R01 grant is \u0026ldquo;an award made to support a discrete, specified, circumscribed project to be performed by the named investigator(s) in an area representing the investigator\u0026rsquo;s specific interest and competencies, based on the mission of the NIH.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn general, the NIH grants R01s to support projects it knows will have a beneficial outcome for its mission.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I never imagined applying for one, let alone getting one,\u0026rdquo; Gombolay said. \u0026ldquo;When I got invited to do this, I was like, sure, sounds fun. The fact that we got one is just disbelief. It\u0026rsquo;s so competitive. 
Now we just have to deliver.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech and Harvard researchers are collaborating on a three-year study of how robotics can improve open-heart surgery and minimize its risks."}],"uid":"32045","created_gmt":"2022-09-16 14:07:03","changed_gmt":"2022-09-16 14:15:21","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-09-16T00:00:00-04:00","iso_date":"2022-09-16T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"661235":{"id":"661235","type":"image","title":"Matthew Gombolay robotics researcher","body":null,"created":"1663337338","gmt_created":"2022-09-16 14:08:58","changed":"1663337338","gmt_changed":"2022-09-16 14:08:58","alt":"Georgia Tech roboticist Matthew Gombolay","file":{"fid":"250493","name":"GombolayRO1.jpg","image_path":"\/sites\/default\/files\/images\/GombolayRO1.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/GombolayRO1.jpg","mime":"image\/jpeg","size":50894,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/GombolayRO1.jpg?itok=kwMTaOkQ"}}},"media_ids":["661235"],"groups":[{"id":"50876","name":"School of Interactive Computing"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"135","name":"Research"},{"id":"138","name":"Biotechnology, Health, Bioengineering, Genetics"}],"keywords":[{"id":"175375","name":"matthew 
gombolay"},{"id":"2076","name":"NIH"},{"id":"2923","name":"harvard"},{"id":"2583","name":"heart"},{"id":"2552","name":"robotic"},{"id":"169511","name":"surgery"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:nathan.deen@cc.gatech.edu?subject=Robotic%20surgery\u0022\u003Enathan.deen@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["nathan.deen@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"660999":{"#nid":"660999","#data":{"type":"news","title":"Stewarding the Land with Technology: Q\u0026A with New Associate Professor Josiah Hester","body":[{"value":"\u003Cp\u003EJosiah Hester believes battery-free devices are the future of computing, and the quicker we get there, the better.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHester is the new Catherine M. and James E. Allchin Junior Faculty Chair at the College of Computing. He\u0026rsquo;s also a new associate professor in the School of Interactive Computing and the School of Computer Science. Hester spent five years as an assistant professor at Northwestern University, where he directed the Ka Moamoa Ubiquitous and Mobile Computing Lab.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHester focuses on developing sustainable, battery-free technology, including health wearables and interactive devices. 
Through his lab, Hester developed a \u0026ldquo;FaceBit\u0026rdquo; smart face mask that can monitor someone\u0026rsquo;s heartbeat and is powered by a person\u0026rsquo;s breathing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe also co-developed a battery-free handheld gaming device nearly identical to the original Game Boy, except that it\u0026rsquo;s powered by small solar panels and energy produced by mashing the buttons on the console. Hester will be moving his Ka Moamoa Lab to Georgia Tech from Northwestern.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn February 2022, Hester received a Faculty Early Career Development Program Award from the National Science Foundation. In 2021, he was named to the Brilliant 10 by Popular Science, and he received the Most Promising Engineer or Scientist Award from the American Indian Science and Engineering Society, which recognizes significant contributions from the indigenous peoples of North America and the Pacific Islands in STEM disciplines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat interests you about working at Georgia Tech?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI\u0026rsquo;m really looking forward to collaborating broadly with Georgia Tech faculty and students around sustainability and health, areas in which Tech has global leadership and strong institutional support.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat will your research consist of?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI lead a research lab exploring energy-efficient computing in the context of global-scale applications. I work toward a sustainable future for computing informed by my Native Hawaiian (Kanaka maoli) heritage. We mainly try to figure out how to make ubiquitous computing and sensing devices like wearables, smart devices, and sensor networks run forever with a lower impact on the planet and the humans using or tending to these devices. 
I call these sustainable computational things. A core problem we tackle is designing computers that harvest energy from the sun, motion, and other sources instead of relying on a battery, which is toxic, short-lived, and unsustainable, so that these devices can be useful for decades.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERecently, we have been investing a lot of time into applying these techniques to large-scale sensing for sustainability and conservation with a $5 million grant from the National Science Foundation. By partnering with indigenous knowledge holders, conservation organizations, and academics across political science, ecology, and environmental sciences, we can develop holistic approaches to managing precious natural resources for generations.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat inspired you to pursue this field of research?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy work sits in the space where computing becomes physical. This means that we can use computing and intelligent systems to start addressing actual problems in the physical world. I wanted my work to matter to communities and people living now, so we find research problems in real-world constraints. As a Native Hawaiian, I was raised to believe that we had an unbreakable bond to steward the land (Aloha \u02bb\u0100ina). Since the beginning, computing has been focusing on performance at the expense of energy and power. We cannot continue in this manner. I was inspired to figure out if we could do better, sustainable, battery-free, long-term. What happens if we design for those features instead of performance alone? Thankfully, many others have the same inspiration, and we really see change focused on longer-term computing. 
It is an exciting time for the field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat do you hope to accomplish in your research?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWe are trying to show that an alternative, more sustainable, and more equitable version of computing is possible. Right now, wearables, the internet of things, and edge computing favor only the few that can afford them. I interpret sustainability broadly, where devices should be low cost, easy to program and use, low burden, and last forever \u0026mdash; or at least longer than your average cell phone. It\u0026#39;s a big goal, and we have a lot of work to do, but I\u0026#39;m excited that so many at Georgia Tech are already partnering with us to make it happen.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat are you looking forward to about teaching your students and how do you plan to work with them?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI am looking forward to partnering with my students on cutting-edge research and learning from each of them how they interpret sustainability, access, and the burden of computing. 
I hope I can equip them to tackle new challenges around health and sustainability in their communities.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The School of Interactive Computing welcomes new faculty member."}],"uid":"32045","created_gmt":"2022-09-09 01:36:35","changed_gmt":"2022-09-09 01:36:35","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-09-08T00:00:00-04:00","iso_date":"2022-09-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"660998":{"id":"660998","type":"image","title":"School of Interactive Computing\u0027s Josiah Hester","body":null,"created":"1662687366","gmt_created":"2022-09-09 01:36:06","changed":"1662687366","gmt_changed":"2022-09-09 01:36:06","alt":"GT Computing\u0027s Josiah Hester","file":{"fid":"250418","name":"Hester3.jpg","image_path":"\/sites\/default\/files\/images\/Hester3.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Hester3.jpg","mime":"image\/jpeg","size":89648,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Hester3.jpg?itok=XymhWUFY"}}},"media_ids":["660998"],"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:ndeen6@gatech.edu?subject=New%20Faculty%20Member\u0022\u003Endeen6@gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"660997":{"#nid":"660997","#data":{"type":"news","title":"Georgia Tech Taking Ubicomp Back to its 
Academic Roots","body":[{"value":"\u003Cp\u003ESince its inception in 1999, Ubicomp has grown into the premier conference in the field of ubiquitous and wearable computing. In recent years, before the Covid-19 pandemic forced the conference to become virtual, Ubicomp was held in destination locations such as Maui, Hawaii and Osaka, Japan. In 2020, the conference was slated to be held in Cancun, Mexico, but the pandemic forced organizers to pivot.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter two years of meeting virtually, organizers were ready for Ubicomp to be in-person once again, but the venue in Cancun wasn\u0026rsquo;t available for 2022. They also weren\u0026rsquo;t sure if the pandemic would force them to shut down a live conference, so booking another venue proved to be a financial risk.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe solution: bring Ubicomp back to its roots. In the conference\u0026rsquo;s early days, it was small enough to be hosted by universities. This year, it\u0026rsquo;s being hosted by two of them at separate locations \u0026mdash; Georgia Tech in Atlanta and Cambridge in the United Kingdom.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Ubicomp started with around 150 people, I think, and eventually grew and grew,\u0026rdquo; said School of Interactive Computing professor and Ubicomp local arrangement chair Thad Starner. \u0026ldquo;It\u0026rsquo;s an 800-person conference, so that suddenly means you must plan for hundreds of thousands of dollars of resources, and you\u0026rsquo;re taking out major venues like hotel ballrooms, that sort of thing. Now it\u0026rsquo;s gone back to our roots of doing it with academic resources and trying to make it a more intimate event.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUbicomp will be held on the campuses of Georgia Tech and Cambridge simultaneously, with online options available, from Sept. 11 to Sept. 15. 
More than 175 papers will be presented by more than 800 computer scientists, with each presentation available online through streaming, cross-presented from the Atlanta venue to the Cambridge venue.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnyone attending in Atlanta can see a presentation being given in Cambridge in real time and vice versa. The live presentations in Atlanta will be held on the first floor of the Technology Square Research Building in Midtown.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We leveraged our resources here at TSRB because our auditorium and banquet hall has just been renovated, so we\u0026rsquo;ll be the first conference there,\u0026rdquo; Starner said. \u0026ldquo;That made the cost of the conference go down dramatically.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPlanning the event has had its share of challenges \u0026mdash; the biggest one being the coordination of simultaneous presentations happening at two different locations across the world, but Starner said he\u0026rsquo;s found the new design to be convenient for participants in ways the conference hadn\u0026rsquo;t been in previous years.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It makes the arrangements more manageable; it makes the travel more cost effective, and also the sheer number of papers being published, it makes the tidal wave of stuff coming in a lot more manageable as well,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStarner added that the organizing committee will pay attention to this year\u0026rsquo;s event, and if the dual locations seem to work well, it\u0026rsquo;s a feature that may not go away anytime soon. 
Traditionally, the locations are selected on a rotation of choosing cities from Asia, Europe and North America.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This idea that maybe we\u0026rsquo;re going to have three sites, one in Asia, one in Europe and then one in North America, and then people go to whichever one they care about, it may be a new model,\u0026rdquo; he said. \u0026ldquo;We don\u0026rsquo;t know. It\u0026rsquo;s the first time somebody has tried this. This is a potential model for the future and is something that\u0026rsquo;s being talked about right now.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThomas Ploetz, associate professor at the School of Interactive Computing, is representing Georgia Tech as the general chair for this year\u0026rsquo;s Ubicomp. He has authored or co-authored eight papers that will be presented this year and is one of six IC faculty members who have had papers accepted. The others are professor emeritus Gregory Abowd, assistant professor Sonia Chernova, interim school chair Betsy DiSalvo, associate professor Josiah Hester, and distinguished professor Irfan Essa. The papers are also co-authored by 12 Georgia Tech PhD or graduate students. Tech faculty and students contributed to 13 papers altogether.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPloetz said any papers presented at Ubicomp were accepted because they were published in a journal called The Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) during the previous year. Many researchers in the field refer to Ubicomp and IMWUT synonymously, he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It is fantastic to have such a strong presence of GT researchers \u0026mdash; first and foremost our students \u0026mdash; at our annual flagship conference,\u0026rdquo; Ploetz said. 
\u0026ldquo;It underlines the strength of Ubicomp\/IMWUT research at our university.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStarner said it\u0026rsquo;s an exciting time in the ubiquitous computing field, and the ability to meet in person for the conference couldn\u0026rsquo;t have come at a better time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There are so many cool people coming, so I\u0026rsquo;m hoping what people are going to take away from it is a whole lot more collaborations, a whole lot more energy, a whole lot more excitement,\u0026rdquo; Starner said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Right now, all the wearable stuff, this stuff used to be esoteric. Nobody knew about any of this stuff. Now it\u0026rsquo;s part of our daily lives, and there\u0026rsquo;s so much capability now that we didn\u0026rsquo;t have 20 years ago. After the last few years, what we\u0026rsquo;re going to see is a lot more collaboration and the vision of the future for the field.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGEORGIA TECH RESEARCH AT UBICOMP 2022\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 12, 9:40 a.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAssessing the State of Self-Supervised Human Activity Recognition using Wearables\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHarish Haresamudram, Irfan Essa, Thomas Ploetz\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 12, 10 a.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EBootstrapping Human Activity Recognition Systems for Smart Homes From Scratch\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShruthi K. 
Hiremath, Yasutaka Nishimura, Sonia Chernova, Thomas Ploetz\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 12, 12 p.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EBattery-free MakeCode: Accessible Programming for Intermittent Computing\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChristopher Kraemer, Amy Guo, Saad Ahmed, Josiah Hester\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 13, 8:20 a.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EClustering of Human Activities from Wearables by Adopting Nearest Neighbors\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbrar Ahmed, Harish Haresamudram, Thomas Ploetz\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 13, 8:40 a.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EUbi-SleepNet: Advanced Multimodal Fusion Techniques for Three-stage Sleep Classification using Ubiquitous Sensing\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBing Zhai, Yu Guan, Michael Catt, Thomas Ploetz\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 13, 8:40 a.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EA Personalized Approach for Developing a Snacking Detection System Using Earbuds in a Semi-Naturalistic Setting\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMehrab Bin Morshed, Harish Haresamudram, Dheeraj Bandaru, Gregory Abowd, Thomas Ploetz\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 13, 9 a.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EReinforcement Learning Based Online Active Learning for Human Activity Recognition\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYulai Cui, Shruthi K. 
Hiremath, Thomas Ploetz\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 13, 9 a.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFaceBit: Smart Face Masks Platform\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlexander Curtiss, Blaine Rothrock, Abu Bakar, Nivedita Arora, Jason Huang, Zachary Englhardt, Aaron-Patrick Empedrado, Chixiang Wang, Saad Ahmed, Yang Zhang, Nabil Alshurafa, Josiah Hester\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 13, 10:40 a.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EColloSSL: Collaborative Self-Supervised Learning for Human Activity Recognition\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYash Jain, Chi Ian Tang, Chulhong Min, Fahim Kawsar, Akhil Mathur\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 13, 12:20 p.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMiniKers: Interaction-Powered Smart Environment Automation\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EXiaoying Yang, Jacob Sayono, Jess Xu, Jiahao \u0026ldquo;Nick\u0026rdquo; Li, Josiah Hester, Yang Zhang\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 13, 12:40 p.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESmart Webcam Cover: Exploring the Design of an Intelligent Webcam Cover to Improve Usability and Trust\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYoungwook Do, Jung Wook Park, Yuxi Wu, Avinandan Basu, Dingtian Zhang, Gregory D. 
Abowd, Sauvik Das\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeptember 14, 10 a.m.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EReading the Room \u0026ndash; Automated, Momentary Assessment of Student Engagement in the Classroom: Are we There Yet?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBetsy DiSalvo, Dheeraj Bandaru, Qiaosi Wang, Hong Li, Thomas Ploetz\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Institute is hosting the premier conference in the field of ubiquitous and wearable computing."}],"uid":"32045","created_gmt":"2022-09-09 01:28:43","changed_gmt":"2022-09-09 01:28:43","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-09-08T00:00:00-04:00","iso_date":"2022-09-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"660996":{"id":"660996","type":"image","title":"Tech Hosts Ubicomp 2022","body":null,"created":"1662686847","gmt_created":"2022-09-09 01:27:27","changed":"1662686847","gmt_changed":"2022-09-09 01:27:27","alt":"composite graphic for Georgia Tech hosting Ubicomp 2022","file":{"fid":"250417","name":"ubicomp22_gt authors promo_web.png","image_path":"\/sites\/default\/files\/images\/ubicomp22_gt%20authors%20promo_web.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ubicomp22_gt%20authors%20promo_web.png","mime":"image\/png","size":410064,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ubicomp22_gt%20authors%20promo_web.png?itok=ob7iFV90"}}},"media_ids":["660996"],"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"10353","name":"wearable computing"},{"id":"9766","name":"ubiquitous 
computing"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer I\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:ndeen6@gatech.edu?subject=Ubicomp\u0022\u003Endeen6@gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"660704":{"#nid":"660704","#data":{"type":"news","title":"New Faculty Q\u0026A: Christopher MacLellan","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EChristopher\u003C\/strong\u003E \u003Cstrong\u003EMacLellan\u003C\/strong\u003E explores how artificial intelligence (AI) can benefit human performance and learning in the classroom, team-oriented environments, and people\u0026rsquo;s daily lives.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMacLellan arrived this fall at the School of Interactive Computing as an assistant professor. In his new role, he will research and teach in the areas of cognitive systems, AI, human-computer interaction, and educational technology. This fall, he is teaching a knowledge-based AI course.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMacLellan worked for two years at Drexel University before coming to Georgia Tech. Before joining the faculty at Drexel, MacLellan spent three years working as a research scientist at Soar Technology Inc., where he developed novel AI and machine learning technologies to support users in making better decisions and learning more effectively.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMacLellan received his Ph.D. and master\u0026rsquo;s degree from the Human-Computer Interaction Institute at Carnegie Mellon University. 
He also spent two years as a graduate student in computer science (CS) at Arizona State University and received his bachelor\u0026rsquo;s in CS from the University of Wyoming.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat interests you about working at Georgia Tech?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI think that Georgia Tech is at the center of a lot of the AI research that is going on in the country right now. Tech stands out as one of the few universities that has multiple National Science Foundation funded institutes that focus specifically on AI. There is also a strong human-centric component to the AI research that is being done here, which is key in my work. I learned a long time ago that if you want to make rapid progress in an area, you go to the center of where that kind of work is being done.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat will your research at Georgia Tech consist of?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy work focuses on trying to better understand how people teach and learn and then building computational systems that can teach and learn like they do. In a virtuous cycle, I aim to better understand the unique capabilities that humans exhibit, build AI systems that can exhibit these capabilities, and use these systems to improve the human condition and, in turn, to further improve our understanding of humans.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat inspired you to pursue this field of research?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI have always been fascinated by people\u0026rsquo;s ability to reason and learn. We can do amazing things. I cannot just take your mind apart to understand how it works. However, I can build computer models that exhibit similar behaviors as you, and we can run experiments with these models to gain insights into the mechanisms underlying your abilities. 
Unlocking uniquely human capabilities has the potential to revolutionize how people make use of AI technologies in their everyday lives.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat do you hope to accomplish in your research?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUltimately, I aim to enable people who do not know anything about AI to be able to adapt these systems to their unique needs by teaching them new behaviors like how they would teach another human, through natural teaching interactions. This work should empower people to more effectively use AI technologies to improve their lives. I am particularly passionate about applying this concept to support teachers in creating and using AI-powered educational technologies to improve learning outcomes for students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat are you looking forward to about teaching your students, and how do you plan on working with them?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis fall, I\u0026rsquo;ll be teaching knowledge-based AI. I am a strong believer that students should be taught about a broad range of AI paradigms and approaches. In this class, I am very excited to explore how knowledge, in addition to data, can be leveraged within AI systems. Additionally, I am very excited to explore how different AI algorithms and methods can be composed to create systems that can exhibit intelligent behavior. 
I look forward to charging a new generation of AI researchers with these ideas.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A brief Q\u0026A with a new assistant professor at Georgia Tech."}],"uid":"32045","created_gmt":"2022-08-30 18:40:09","changed_gmt":"2022-08-30 18:41:18","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-08-30T00:00:00-04:00","iso_date":"2022-08-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"660705":{"id":"660705","type":"image","title":"Christopher MacLellan","body":null,"created":"1661884840","gmt_created":"2022-08-30 18:40:40","changed":"1661884840","gmt_changed":"2022-08-30 18:40:40","alt":"Christopher MacLellan","file":{"fid":"250333","name":"IC Faculty. Shoot August 25th 2022-08.jpg","image_path":"\/sites\/default\/files\/images\/IC%20Faculty.%20Shoot%20August%2025th%202022-08.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IC%20Faculty.%20Shoot%20August%2025th%202022-08.jpg","mime":"image\/jpeg","size":104414,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IC%20Faculty.%20Shoot%20August%2025th%202022-08.jpg?itok=PCzjXC35"}}},"media_ids":["660705"],"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Comms. 
Officer\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:ndeen6@gatech.edu\u0022\u003Endeen6@gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"660508":{"#nid":"660508","#data":{"type":"news","title":"Art Exhibition Has College Connections","body":[{"value":"\u003Cp\u003EA new art exhibition curated by a College of Computing staff member opened recently and features the work of a College faculty member.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe exhibit\u0026nbsp;\u003Cem\u003E\u003Ca href=\u0022https:\/\/art.c21u.gatech.edu\/\u0022 title=\u0022https:\/\/art.c21u.gatech.edu\/\u0022\u003EExtension of Self: what it means to be human in a digital world\u003C\/a\u003E\u003C\/em\u003E\u0026nbsp;opened Aug. 15 in the Georgia Tech Library. It examines how scientists and artists can collaborate to improve access to science and technology for underserved communities.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurated by\u0026nbsp;\u003Cstrong\u003EBirney\u003C\/strong\u003E\u0026nbsp;\u003Cstrong\u003ERobert\u003C\/strong\u003E, College events planner, the exhibition is the culmination of a $40,000 Georgia Tech\/Microsoft Accessibility Research Seed Grant that\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/staff-member-using-art-and-microsoft-grant-improve-stem-accessibility\u0022 title=\u0022https:\/\/www.cc.gatech.edu\/news\/staff-member-using-art-and-microsoft-grant-improve-stem-accessibility\u0022\u003ERobert received through the Center for 21st Century Universities\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe exhibition features six exhibits, including one from a small team led by\u0026nbsp;\u003Cstrong\u003EAshutosh\u0026nbsp;\u003C\/strong\u003E\u003Cstrong\u003EDhekne\u003C\/strong\u003E, School of Computer Science assistant professor. 
The team created an interactive art installation called\u0026nbsp;\u003Cem\u003ETechMyMoves\u0026nbsp;\u003C\/em\u003Efor\u003Cem\u003E\u0026nbsp;\u003C\/em\u003Eits\u003Cem\u003E\u0026nbsp;\u003C\/em\u003Esubmission.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;\u003Cem\u003ETechMyMoves\u003C\/em\u003E\u0026nbsp;is an exploration in mapping the human presence into an interactive art. It reflects a person\u0026rsquo;s movements through dynamically changing art that becomes more enthusiastic, energetic, and vibrant with increased activity in the indoor space,\u0026rdquo; according to the exhibition website.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe installation uses ultra-wideband (UWB) wireless technology that detects and responds to movement within indoor spaces. Movements are captured and converted to digital media using Python and Processing 4 programs. The resulting art is then instantaneously displayed on a large LED screen.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDhekne, who\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/dhekne-receives-nsf-career-award-create-greener-more-advanced-indoor-navigation-and-security\u0022 title=\u0022https:\/\/www.cc.gatech.edu\/news\/dhekne-receives-nsf-career-award-create-greener-more-advanced-indoor-navigation-and-security\u0022\u003Eearned a 2022 National Science Foundation CAREER Award\u003C\/a\u003E\u0026nbsp;for his work in wireless localization and sensing, says he came up with the idea for\u0026nbsp;\u003Cem\u003ETechMyMoves\u003C\/em\u003E\u0026nbsp;while \u0026ldquo;daydreaming of an expressive indoor space.\u0026rdquo; To bring his daydream to reality, Dhekne worked with recent alumnus\u0026nbsp;\u003Cstrong\u003EYunzhi\u003C\/strong\u003E\u0026nbsp;\u003Cstrong\u003ELi\u003C\/strong\u003E\u0026nbsp;(CS MS 21) and Human-Centered Computing Ph.D. 
student\u0026nbsp;\u003Cstrong\u003ETingyu\u003C\/strong\u003E\u0026nbsp;\u003Cstrong\u003ECheng\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EExtension of Self\u003C\/em\u003E\u0026nbsp;runs through Oct. 14 and is the first of two exhibitions planned by Robert as part of the initial proposal for the seed grant program.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A new art exhibition curated by a College of Computing staff member is opening next week and features the work of a College faculty member."}],"uid":"32045","created_gmt":"2022-08-24 19:21:57","changed_gmt":"2022-08-25 17:25:01","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-08-24T00:00:00-04:00","iso_date":"2022-08-24T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"660509":{"id":"660509","type":"image","title":"Ultra-wideband-radio-scatter-plot-2022","body":null,"created":"1661369209","gmt_created":"2022-08-24 19:26:49","changed":"1661369209","gmt_changed":"2022-08-24 19:26:49","alt":"Ultra-wideband-radio-scatter-plot-2022","file":{"fid":"250282","name":"1850.png","image_path":"\/sites\/default\/files\/images\/1850.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/1850.png","mime":"image\/png","size":253044,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/1850.png?itok=4HgIG8TD"}}},"media_ids":["660509"],"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"42891","name":"Georgia Tech Arts"}],"keywords":[{"id":"126","name":"exhibit"},{"id":"191177","name":"Birney Robert"},{"id":"191141","name":"Extension of Self"},{"id":"1205","name":"Library"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Communications Mgr. II\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=Exhibit\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"657942":{"#nid":"657942","#data":{"type":"news","title":"New Framework for Cooperative Bots Mimics High-Functioning Human Teams, Decreases Risks from Unreliable Bots","body":[{"value":"\u003Cp\u003EA Georgia Institute of Technology\u0026nbsp;research group in the School of Interactive Computing has developed a robotics system that exceeds existing standards for collaborative bots that work independently to achieve a shared goal.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe system intelligently increases the information shared among the bots and \u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=rK_itCF9hPc\u0022\u003Eallows for improved cooperation\u003C\/a\u003E. The aim is to model high-functioning human teams. It also creates resiliency against bad or unreliable team bots that may hinder the overall programmed goal.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Intuitively, the idea behind our new framework \u0026mdash; InfoPG \u0026mdash;\u0026nbsp;is that a robot agent goes back-and-forth on what it thinks it \u003Cem\u003Eshould\u003C\/em\u003E do with their teammates, and then the teammates will update on what they think is \u003Cem\u003Ebest\u003C\/em\u003E to do,\u0026rdquo; said \u003Cstrong\u003EEsmaeil Seraj\u003C\/strong\u003E, Ph.D. student in the \u003Ca href=\u0022https:\/\/core-robotics.gatech.edu\/\u0022\u003ECORE Robotics Lab\u003C\/a\u003E and researcher on the project. 
\u0026ldquo;They do this until the decision is deeply rationalized and reasoned about.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work focuses on artificial agents on a decentralized team \u0026mdash; in simulations or the real world \u0026mdash;\u0026nbsp;working in concert toward a specific task. Applications could include surgery, search and rescue, and disaster response, among others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInfoPG facilitates communication between the artificial agents on an iterative basis and allows for actions and decisions that mimic human teams working at optimal levels.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This research is in fact inspired by how high-performing human teams act,\u0026rdquo; said Seraj.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Humans normally use k-level thinking \u0026mdash; such as, \u0026lsquo;what I think you will do, what I think you think I will do, and so on\u0026rsquo; \u0026mdash; to rationalize their actions in a team,\u0026rdquo; he said. \u0026ldquo;The basic thought is that the more you know about your teammate\u0026#39;s strategy, the easier it is for you to take the best action possible.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing this approach, the researchers designed InfoPG to make one bot\u0026rsquo;s decisions conditional on its teammates. They ran simulations using simple games like Pong, and complex games like StarCraft II.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the latter \u0026mdash; where the goal is for one team of agents to defeat another \u0026mdash; the InfoPG architecture showed very advanced strategies. Seraj said agents in one case learned to form a triangle formation, sacrificing the front agent while the two other agents eliminated the enemy. 
Without InfoPG in play, an agent abandoned its team to save itself.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new method also limits the disruption a bad bot on the team might cause.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Coordinating actions with such a fraudulent agent in a collaborative multi-agent setting can be detrimental,\u0026rdquo; said \u003Cstrong\u003EMatthew Gombolay\u003C\/strong\u003E, assistant professor in the School of Interactive Computing and director of the CORE Robotics Lab. \u0026ldquo;We need to ensure the integrity of robot teams in real-world applications where bots might be tasked to save lives or help people and organizations extend their capabilities.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResults of the work show InfoPG\u0026rsquo;s performance exceeds various baselines in learning cooperative policies for multi-agent reinforcement learning. The researchers plan to move the system from simulation into real robots, such as controlling a swarm of drones to help surveil and fight wildfires.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research is published in the 2022 Proceedings of the International Conference on Learning Representations. The paper, \u003Cem\u003EIterated Reasoning with Mutual Information in Cooperative and Byzantine Decentralized Teaming\u003C\/em\u003E, is co-authored by computer science major \u003Cstrong\u003ESachin G. Konan\u003C\/strong\u003E, Seraj, and Gombolay.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work was sponsored by the Office of Naval Research under grant N00014-19-1-2076 and the Naval Research Lab (NRL) under the grant N00173-20-1-G009. 
The researchers\u0026rsquo; views and statements are based on their findings and do not necessarily reflect those of the funding agencies.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA Georgia Tech research group in the School of Interactive Computing has developed a robotics system that exceeds existing standards for collaborative bots that work independently to achieve a shared goal.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"A Georgia Tech research group in the School of Interactive Computing has developed a robotics system that exceeds existing standards for collaborative bots that work independently to achieve a shared goal. "}],"uid":"27592","created_gmt":"2022-05-04 13:53:03","changed_gmt":"2022-05-05 18:12:26","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-05-04T00:00:00-04:00","iso_date":"2022-05-04T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"657945":{"id":"657945","type":"image","title":"Improving collaboration in decentralized teams of bots","body":null,"created":"1651672727","gmt_created":"2022-05-04 13:58:47","changed":"1651672727","gmt_changed":"2022-05-04 13:58:47","alt":"","file":{"fid":"249393","name":"promo_graphic_ICLR22_collab robots.jpg","image_path":"\/sites\/default\/files\/images\/promo_graphic_ICLR22_collab%20robots.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/promo_graphic_ICLR22_collab%20robots.jpg","mime":"image\/jpeg","size":589661,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/promo_graphic_ICLR22_collab%20robots.jpg?itok=IZixdgDo"}},"657946":{"id":"657946","type":"image","title":"Multiwalker bots coordinating to carry object","body":null,"created":"1651672836","gmt_created":"2022-05-04 14:00:36","changed":"1651672836","gmt_changed":"2022-05-04 
14:00:36","alt":"","file":{"fid":"249394","name":"robot cooperation in decentralized team.png","image_path":"\/sites\/default\/files\/images\/robot%20cooperation%20in%20decentralized%20team.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/robot%20cooperation%20in%20decentralized%20team.png","mime":"image\/png","size":2326733,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/robot%20cooperation%20in%20decentralized%20team.png?itok=VL1FfTYk"}}},"media_ids":["657945","657946"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1188","name":"Research Horizons"}],"categories":[],"keywords":[{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston7@gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\nCollege of Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston7@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"657433":{"#nid":"657433","#data":{"type":"news","title":"Iditarod Sled Dogs Test New Device That Could Reduce Injuries for Canine Athletes","body":[{"value":"\u003Cp\u003EWhether pulling a sled across the frozen tundra for hundreds of miles or guiding a visually impaired runner on a cross-country marathon, canine athletes are as prone to injury as their human counterparts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo help reduce injuries and improve performance for canine athletes, student researchers at Georgia Tech have\u0026nbsp;developed a\u0026nbsp;wearable activity and gait detection device \u0026ndash; known as WAG\u0026#39;d \u0026ndash; as part of an 
animal-centered computing course led by College of Computing Associate Professor\u0026nbsp;\u003Cstrong\u003EMelody\u003C\/strong\u003E\u0026nbsp;\u003Cstrong\u003EJackson\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFollowing the class last fall, development continued on the project when the interdisciplinary team went to Alaska in March to connect with an Iditarod musher and his team of sled dogs to conduct field research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELearn more about the students behind this innovative project,\u0026nbsp;which was reviewed and approved by Georgia Tech\u0026#39;s Institutional Review Board (IRB), and how it came together in this fast-paced\u0026nbsp;\u003Ca href=\u0022https:\/\/youtu.be\/3aoZI5PoTYc\u0022\u003Evideo profile\u003C\/a\u003E\u0026nbsp;created by GT Computing videographer \u003Cstrong\u003EKevin\u003C\/strong\u003E \u003Cstrong\u003EBeasley\u003C\/strong\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Students in an animal-centered computing course at Georgia Tech worked with an Iditarod musher and his sled dogs to conduct field research."}],"uid":"32045","created_gmt":"2022-04-19 14:13:50","changed_gmt":"2022-04-19 15:20:49","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-04-19T00:00:00-04:00","iso_date":"2022-04-19T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"657434":{"id":"657434","type":"image","title":"GT Computing Student Team Working with Iditarod Sled Dog Team for Field Research","body":null,"created":"1650379129","gmt_created":"2022-04-19 14:38:49","changed":"1650379129","gmt_changed":"2022-04-19 14:38:49","alt":"GT Computing Student Team Working with Iditarod Sled Dog Team for Field 
Research","file":{"fid":"249175","name":"WAGd-team-pic-2022.jpg","image_path":"\/sites\/default\/files\/images\/WAGd-team-pic-2022.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/WAGd-team-pic-2022.jpg","mime":"image\/jpeg","size":383380,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/WAGd-team-pic-2022.jpg?itok=lXYgwmYt"}}},"media_ids":["657434"],"related_links":[{"url":"https:\/\/youtu.be\/3aoZI5PoTYc","title":"VIDEO: Iditarod Sled Dogs Test New Device That Could Reduce Injuries for Canine Athletes"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"66442","name":"MS HCI"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1188","name":"Research Horizons"}],"categories":[],"keywords":[{"id":"22621","name":"Human-Centered Computing"},{"id":"190397","name":"canine athletes"},{"id":"96031","name":"Melody Jackson"},{"id":"190398","name":"Iditarod"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Communications Mgr. 
II\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=Canine%20athletes\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"654649":{"#nid":"654649","#data":{"type":"news","title":"Major Philanthropic Grant Will Create New Center to Advance Open-Source Software","body":[{"value":"\u003Cp\u003EThe Georgia Tech College of Computing has received an $11 million grant from Schmidt Futures to create one of the four software engineering centers within the newly launched Virtual Institute for Scientific Software (VISS). The new center will hire half-a-dozen software engineers to write scalable, reliable, and portable open-source software for scientific research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Scientific research involves increasingly complex software, technologies, and platforms,\u0026rdquo; said\u0026nbsp;\u003Cstrong\u003EAlessandro Orso\u003C\/strong\u003E, the software engineer and professor of computer science who is heading up the project. \u0026ldquo;Also, platforms constantly evolve, and the complexity and amount of data involved is ever-growing.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe result is that these software systems are often developed as prototypes that are difficult to understand, maintain, and use, which limits their efficacy and ultimately hinders scientific progress.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESoftware engineers are trained to address these kinds of issues and know how to build high-quality software, but their time is too expensive for a typical research project\u0026rsquo;s budget. 
In typical grants, software is often treated as a byproduct of research, meaning that limited funding is allocated for it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat\u0026rsquo;s where\u0026nbsp;\u003Ca href=\u0022https:\/\/www.schmidtfutures.com\/\u0022\u003ESchmidt Futures\u003C\/a\u003E\u0026nbsp;comes in. Schmidt Futures is\u0026nbsp;a philanthropic initiative\u0026nbsp;founded by\u0026nbsp;\u003Cstrong\u003EEric\u003C\/strong\u003E\u0026nbsp;and\u0026nbsp;\u003Cstrong\u003EWendy\u003C\/strong\u003E\u0026nbsp;\u003Cstrong\u003ESchmidt\u003C\/strong\u003E\u0026nbsp;that bets early on exceptional people\u0026nbsp;making the world better.\u0026nbsp;They are investing $40 million in VISS over five years at four universities: Georgia Tech, University of Washington, Johns Hopkins University, and University of Cambridge.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Schmidt Futures\u0026rsquo; Virtual Institute for Scientific Software is a core part of our efforts to mobilize exceptional talent to solve specific hard problems in science and society,\u0026rdquo; said Executive Vice President\u0026nbsp;\u003Cstrong\u003EElizabeth Young-McNally\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAt Georgia Tech, the funds will hire a software engineering lead, as well as three senior and two junior software engineers. A faculty director and an advisory board will help guide the group\u0026rsquo;s work, which will include collaborations with Georgia Tech scientists.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;We are very proud to host one of the four inaugural Schmidt Futures Virtual Institute of Scientific Software centers,\u0026rdquo; said\u0026nbsp;\u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E, Dean and John P. Imlay Jr. 
Chair of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Georgia Tech\u0026rsquo;s center will advance and support scientific research by applying modern software engineering practices, cutting-edge technologies, and modern tools to the development of scientific software. The center will also engage with students and researchers to train the next generation of software engineering leaders.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Using a new philanthropic grant, Georgia Tech will hire software engineers to write scalable, reliable, and portable open-source software for scientific research."}],"uid":"32045","created_gmt":"2022-01-21 14:33:09","changed_gmt":"2022-01-24 16:06:04","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2022-01-21T00:00:00-05:00","iso_date":"2022-01-21T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"654650":{"id":"654650","type":"image","title":"Software engineering ideas","body":null,"created":"1642775687","gmt_created":"2022-01-21 14:34:47","changed":"1642775687","gmt_changed":"2022-01-21 14:34:47","alt":"Clear light bulb in foreground with blue screen binary code as background","file":{"fid":"248265","name":"fellowship_banner_hg.jpg","image_path":"\/sites\/default\/files\/images\/fellowship_banner_hg_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/fellowship_banner_hg_0.jpg","mime":"image\/jpeg","size":42805,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/fellowship_banner_hg_0.jpg?itok=xyDlXyGs"}}},"media_ids":["654650"],"groups":[{"id":"37041","name":"Computational Science and Engineering"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive 
Computing"},{"id":"1214","name":"News Room"},{"id":"1188","name":"Research Horizons"}],"categories":[],"keywords":[{"id":"109","name":"Georgia Tech"},{"id":"654","name":"College of Computing"},{"id":"170965","name":"software engineering"},{"id":"189775","name":"Schmidt Futures"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAnn Claycombe, Director of Communications\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:claycombe@cc.gatech.edu?subject=Philanthropic%20grant\u0022\u003Eclaycombe@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["claycombe@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"650286":{"#nid":"650286","#data":{"type":"news","title":"New GT-Microsoft Accessibility Research Seed Grant Program Announces Winning Proposals for 2021 Funding","body":[{"value":"\u003Cdiv\u003E\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s Center for 21st\u0026nbsp;Century Universities (C21U) announced four winning proposals for a new\u0026nbsp;accessibility-focused\u0026nbsp;seed grant\u0026nbsp;research\u0026nbsp;program funded by Microsoft.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\u003C\/div\u003E\r\n\r\n\u003Cdiv\u003E\r\n\u003Cp\u003EThe\u0026nbsp;GT-Microsoft Accessibility\u0026nbsp;Research\u0026nbsp;Seed\u0026nbsp;Grant\u0026nbsp;Program\u0026nbsp;offered\u0026nbsp;up to\u0026nbsp;$45,000\u0026nbsp;in funding per winning proposal and\u0026nbsp;was\u0026nbsp;open to proposals from all Georgia Tech faculty,\u0026nbsp;staff, and students.\u0026nbsp;The program\u0026nbsp;seeks\u0026nbsp;accessibility-focused research and projects in digital accessibility\u0026nbsp;\/\u0026nbsp;assistive technology, diverse student backgrounds, and campus 
life.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\u003C\/div\u003E\r\n\r\n\u003Cdiv\u003E\r\n\u003Cp\u003E\u0026ldquo;C21U is thrilled to be supported by Microsoft in offering seed grants to innovative research\u0026nbsp;and project\u0026nbsp;teams\u0026nbsp;in our community,\u0026rdquo; said C21U Assistant Director of Research in Education Innovation\u0026nbsp;\u003Cstrong\u003EJeonghyun\u0026nbsp;Lee\u003C\/strong\u003E.\u0026nbsp;\u0026ldquo;Accessibility research is a broad\u0026nbsp;and important\u0026nbsp;theme,\u0026nbsp;and our hope was that the Georgia Tech community would surprise us with creative,\u0026nbsp;aspirational proposals. We were not disappointed.\u0026rdquo;\u0026nbsp;\u003C\/p\u003E\r\n\u003C\/div\u003E\r\n\r\n\u003Cdiv\u003E\r\n\u003Cp\u003EThe winning proposals reflect a wide range of transformative concepts\u0026nbsp;including accessible art exhibits, computer science and music education\u0026nbsp;for visually impaired students, digital access\u0026nbsp;and equity as impacted by\u0026nbsp;the\u0026nbsp;COVID-19\u0026nbsp;pandemic, and technology-mediated mentoring for\u0026nbsp;research\u0026nbsp;students with disabilities. 
These projects are\u0026nbsp;led by faculty and staff from the College of Computing, the College of Design,\u0026nbsp;the Center for Inclusive Design and Innovation,\u0026nbsp;and\u0026nbsp;Georgia Tech Professional Education.\u0026nbsp;The contributing teams involved in each proposal encompass an\u0026nbsp;expansive\u0026nbsp;group of campus units and\u0026nbsp;reflect the collaborative nature of research within the Georgia Tech community.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E2021 Funded Research Projects\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cem\u003EDesigning a Computer Science + Music Learning Environment for Visually Impaired Students\u003C\/em\u003E\u0026nbsp;led by\u0026nbsp;PI\u0026nbsp;\u003Cstrong\u003EStephen Garrett\u0026nbsp;\u003C\/strong\u003E(School of Music, College of Design),\u0026nbsp;\u003Cstrong\u003EJason Freeman\u003C\/strong\u003E\u0026nbsp;(School of Music, College of Design), and\u0026nbsp;\u003Cstrong\u003EBrian\u0026nbsp;Magerko\u003C\/strong\u003E\u0026nbsp;(School of Literature, Media, and Communication, Ivan Allen College of Liberal Arts and School of Interactive Computing, College of Computing)\u0026nbsp;\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cem\u003ETechnology-mediated Mentoring Platforms to Support Research Experiences for Students with Disabilities\u0026nbsp;\u003C\/em\u003Eled by PI\u0026nbsp;\u003Cstrong\u003EMaureen Linden\u003C\/strong\u003E\u0026nbsp;(Center for Inclusive Design and Innovation, College of Design) and\u0026nbsp;\u003Cstrong\u003ENathan Moon\u003C\/strong\u003E\u0026nbsp;(Center for Advanced Communications Policy)\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cem\u003EAccessible exhibits at the intersection of art, science and technology\u003C\/em\u003E\u0026nbsp;led by PI\u0026nbsp;\u003Cstrong\u003EBirney Robert\u0026nbsp;\u003C\/strong\u003E(College of 
Computing)\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cem\u003EAccessquity: Digital Accessibility, Equity, and Inclusion in a Post-COVID World\u003C\/em\u003E\u0026nbsp;led by PI\u0026nbsp;\u003Cstrong\u003EYakut Gazi\u0026nbsp;\u003C\/strong\u003E(Georgia Tech Professional Education),\u0026nbsp;\u003Cstrong\u003EChaohua\u0026nbsp;Ou\u003C\/strong\u003E\u0026nbsp;(Center for Teaching and Learning),\u0026nbsp;\u003Cstrong\u003EMatt Lisle\u0026nbsp;\u003C\/strong\u003E(Center for 21st\u0026nbsp;Century Universities), and\u0026nbsp;\u003Cstrong\u003EWarren\u0026nbsp;Goetzel\u0026nbsp;\u003C\/strong\u003E(Office of Information Technology)\u0026nbsp;\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cdiv\u003E\r\n\u003Cp\u003EOver the course of the next year, each of these projects will\u0026nbsp;work to create\u0026nbsp;more accessible\u0026nbsp;technology, events, and support structures\u0026nbsp;for\u0026nbsp;current\u0026nbsp;and future members of the\u0026nbsp;Georgia Tech community.\u0026nbsp;C21U will host a series of seminars in 2022 to highlight the work of each project team and celebrate their contributions.\u0026nbsp;\u003C\/p\u003E\r\n\u003C\/div\u003E\r\n\r\n\u003Cdiv\u003E\r\n\u003Cp\u003E\u0026ldquo;Georgia Tech\u0026rsquo;s Strategic Plan asks our campus, locally and globally, to work together to create an inclusive environment that cultivates the well-being of all members of our community,\u0026rdquo; said C21U Interim Executive Director\u0026nbsp;\u003Cstrong\u003ESteve Harmon\u003C\/strong\u003E.\u0026nbsp;\u0026ldquo;Accessibility is a critical piece of this\u0026nbsp;work,\u0026nbsp;and we feel confident that these research projects will\u0026nbsp;expand access and remove barriers to success for current and future Yellow 
Jackets.\u0026rdquo;\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech\u2019s Center for 21st Century Universities (C21U) announced four winning proposals for a new accessibility-focused seed grant research program funded by Microsoft."}],"uid":"27998","created_gmt":"2021-08-31 14:58:45","changed_gmt":"2021-08-31 15:06:53","author":"Brittany Aiello","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-08-31T00:00:00-04:00","iso_date":"2021-08-31T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"66244","name":"C21U"},{"id":"47223","name":"College of Computing"},{"id":"131901","name":"Provost"},{"id":"50876","name":"School of Interactive Computing"},{"id":"603290","name":"The Digital Learning Team"}],"categories":[],"keywords":[{"id":"13481","name":"C21U"},{"id":"188767","name":"GT-Microsoft Accessibility\u00a0Research\u00a0Seed\u00a0Grant\u00a0Program"},{"id":"167679","name":"Seed Grant"},{"id":"360","name":"accessibility"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBrittany Aiello\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Program Manager, C21U\u003C\/p\u003E\r\n\r\n\u003Cp\u003Ebrittany@c21u.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["brittany@c21u.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"649775":{"#nid":"649775","#data":{"type":"news","title":"New Web Experience Launches with Focus on Users\u2019 Needs","body":[{"value":"\u003Cp\u003EThe College of Computing is set to launch a newly designed website on Aug. 
20.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new site will provide a faster and more user-friendly digital experience while offering greater accommodation for those with accessibility needs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETwo major components of this redesign include menus organized by user groups rather than by departmental structure and an enhanced mobile-friendly experience that is compatible with all major browsers and across all devices.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis new streamlined experience is further complemented by the updated\u0026nbsp;\u003Ca href=\u0022https:\/\/brand.gatech.edu\/\u0022\u003EGeorgia Tech branding theme\u003C\/a\u003E,\u0026nbsp;which features the iconic Tech Gold header and characteristic Institute-wide slogan, Creating the Next.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsers will notice a number of other engaging new features. These range from the ability to sort faculty members by school to being able to sort events by type and function.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research needed to complete such a major transformation was compiled by two student teams from the School of Interactive Computing master\u0026rsquo;s in\u0026nbsp;\u003Ca href=\u0022https:\/\/mshci.gatech.edu\/\u0022\u003EHuman Computer Interaction (HCI) program\u003C\/a\u003E.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETeam members\u0026nbsp;\u003Cstrong\u003EHarshali Wadge,\u003C\/strong\u003E\u0026nbsp;\u003Cstrong\u003ESantiago Arconada Alvarez,\u0026nbsp;\u003C\/strong\u003E\u003Cstrong\u003EPrabodh Sakhardande, Shihui Ruan, Jordan Hill, Jordan Cox, Chaoyuan Luo, Yuhan Zhou,\u003C\/strong\u003E\u0026nbsp;and\u0026nbsp;\u003Cstrong\u003ELu Meng\u003C\/strong\u003E\u0026nbsp;were all leads on this research as part of the HCI Special Topics class taught by Senior Research Scientist\u0026nbsp;\u003Cstrong\u003ECarrie Bruce\u003C\/strong\u003E.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese student teams spent the 
2019 Fall semester assembling a series of evidence-based design methods, field surveys, and testing groups that were used to inform the overall user experience of this new site.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith the student teams\u0026rsquo; initial research and the efforts of a dedicated team of staff, the college has successfully condensed several thousand pages of content into hundreds. This aggregation and purging of old content will allow all audiences to enjoy a more up-to-date and direct experience.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor any questions, suggestions, or updates upon launch, please complete the Website Feedback Form which is located on the main menu under the About dropdown tab.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"College of Computing rolls out the red carpet for a new college website."}],"uid":"34540","created_gmt":"2021-08-17 17:21:11","changed_gmt":"2021-08-17 17:23:16","author":"Kristen Perez","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-08-17T00:00:00-04:00","iso_date":"2021-08-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"649774":{"id":"649774","type":"image","title":"CoC Web Overhaul","body":null,"created":"1629220410","gmt_created":"2021-08-17 17:13:30","changed":"1629220410","gmt_changed":"2021-08-17 17:13:30","alt":"new website","file":{"fid":"246640","name":"new website art.jpg","image_path":"\/sites\/default\/files\/images\/new%20website%20art.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/new%20website%20art.jpg","mime":"image\/jpeg","size":349566,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/new%20website%20art.jpg?itok=HgCt33KG"}}},"media_ids":["649774"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"431631","name":"OMS"},{"id":"455941","name":"School of 
Awesome"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"},{"id":"624060","name":"Center for High Performance Computing (CHiPC)"}],"categories":[],"keywords":[{"id":"110271","name":"website"},{"id":"2496","name":"launch"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EKristen Perez\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["kristen.perez@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"649636":{"#nid":"649636","#data":{"type":"news","title":"Associate Professor Elected SIGCHI President","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing joint Associate Professor \u003Cstrong\u003ENeha Kumar\u003C\/strong\u003E was elected president of the \u003Ca href=\u0022https:\/\/sigchi.org\/\u0022\u003ESpecial Interest Group on Computer-Human Interaction\u003C\/a\u003E (SIGCHI) for 2021-22. She will serve a three-year term for the group, which is the premier international society for professionals and academics interested in human-computer interaction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESIGCHI sponsors numerous conferences, publications, web sites, and other services that advance HCI through workshops and outreach. 
\u003Ca href=\u0022https:\/\/medium.com\/sigchi\/thank-you-sigchi-dae601d883bb\u0022\u003EIn a blog post for SIGCHI\u003C\/a\u003E, Kumar said that she and the other incoming executive committee members aim to continue the long history of advancing the group\u0026rsquo;s key missions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We hope to continue to expand the excellent work that our many colleagues in this (executive committee) have done, with their commitment (among other things) to accessibility, equity and inclusion, to the safety of our community, global community building, and a #SIGCHI4ALL,\u0026rdquo; she wrote. \u0026ldquo;Together the six of us represent a wide range of perspectives; our hope is that this representation will ensure that we remain answerable to our entire global membership as we work towards supporting and fostering participation and growth locally and globally.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKumar\u0026rsquo;s research at Georgia Tech lies at the intersection of human-centered computing and global development. She has produced research that improves technology design for historically underserved communities. 
Her \u003Ca href=\u0022http:\/\/www.tandem.gatech.edu\/\u0022\u003ETanDEm Lab\u003C\/a\u003E \u0026ndash; short for Technology and Design towards \u0026lsquo;Empowerment\u0026rsquo; \u0026ndash; has focused on health and wellbeing on the margins, centering topics such as gender, stigma, and knowledge production.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKumar has received other honors, such as the National Science Foundation\u0026rsquo;s CAREER Award, and also chairs the \u003Ca href=\u0022https:\/\/www.acm.org\/fca#:~:text=The%20ACM%20Future%20of%20Computing,next%20generation%20of%20computing%20professionals.\u0026amp;text=The%20ACM%20FCA%20aspires%20to,of%20computing%20into%20the%20future.\u0022\u003EAssociation of Computing Machinery\u0026rsquo;s Future of Computing Academy\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech Ph.D. graduate \u003Cstrong\u003ETamara Clegg\u003C\/strong\u003E is also on the SIGCHI executive committee, serving as the vice president of membership and communication.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Neha Kumar will serve a three-year term for the group, which is the premier international society for professionals and academics interested in human-computer interaction."}],"uid":"33939","created_gmt":"2021-08-12 16:50:31","changed_gmt":"2021-08-12 16:50:31","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-08-12T00:00:00-04:00","iso_date":"2021-08-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"507851":{"id":"507851","type":"image","title":"Neha Kumar","body":null,"created":"1457114400","gmt_created":"2016-03-04 18:00:00","changed":"1475895270","gmt_changed":"2016-10-08 02:54:30","alt":"Neha 
Kumar","file":{"fid":"204902","name":"neha.jpeg","image_path":"\/sites\/default\/files\/images\/neha_0.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/neha_0.jpeg","mime":"image\/jpeg","size":52721,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/neha_0.jpeg?itok=ay7TDLWk"}}},"media_ids":["507851"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"649635":{"#nid":"649635","#data":{"type":"news","title":"Assistant Professor Named 2021 Microsoft Research Faculty Fellow","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Assistant Professor \u003Cstrong\u003EDiyi Yang\u003C\/strong\u003E was named one of five \u003Ca href=\u0022https:\/\/www.microsoft.com\/en-us\/research\/academic-program\/faculty-fellowship\/#!fellows\u0022\u003E2021 Microsoft Research Faculty Fellows\u003C\/a\u003E earlier this summer. 
The two-year fellowship recognizes innovative and promising early-career professors in the Americas who are exploring breakthrough research in computer science or a related field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYang was recognized for her work leading the \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dyang888\/group.html\u0022\u003ESocial and Language Technologies Lab\u003C\/a\u003E, concentrating on research across fields of natural language processing, machine learning, and computational social science. Yang\u0026rsquo;s research works to understand social aspects of language and build responsible NLP systems with social intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We live in an era where many aspects of our daily activities are recorded as textual data,\u0026rdquo; Yang said in her proposal to Microsoft Research. \u0026ldquo;Over the last few decades, NLP has dramatically improved performance and produced industrial applications like personal assistants. Despite being sufficient to enable these applications, current NLP systems largely ignore the social part of language.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis ignorance limits the functionality of the programs, Yang said. This research examines what is said, who says it, in what context and for what goals in hopes of developing systems to facilitate human-human and human-machine communication. So far, her team has produced projects on mitigating bias in text, detecting mental health issues, improving support in online support groups, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to Microsoft Research\u0026rsquo;s website, Yang is the first Georgia Tech faculty member to be named a Microsoft Research Faculty Fellow since 2011 and only the third overall. 
Yang has earned a number of other awards and recognitions, such as Forbes 30 Under 30 in Science and IEEE AI 10 to Watch.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The two-year fellowship recognizes innovative and promising early-career professors in the Americas who are exploring breakthrough research in computer science or a related field."}],"uid":"33939","created_gmt":"2021-08-12 16:44:15","changed_gmt":"2021-08-12 16:44:15","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-08-12T00:00:00-04:00","iso_date":"2021-08-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"630588":{"id":"630588","type":"image","title":"Diyi Yang 2020","body":null,"created":"1578338255","gmt_created":"2020-01-06 19:17:35","changed":"1578338255","gmt_changed":"2020-01-06 19:17:35","alt":"","file":{"fid":"240080","name":"Diyi_Yang.jpg","image_path":"\/sites\/default\/files\/images\/Diyi_Yang.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Diyi_Yang.jpg","mime":"image\/jpeg","size":194720,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Diyi_Yang.jpg?itok=T-Kv1Jqp"}}},"media_ids":["630588"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"649015":{"#nid":"649015","#data":{"type":"news","title":"Virtual Counselor to Help Address Vaccination Hesitancy in Black Communities","body":[{"value":"\u003Cp\u003EA new partnership is using a multi-million, multi-year award from the National Institutes of Health (NIH) to help address vaccination hesitancy and increase Covid-19 vaccination rates in Black communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith the $2.4 million, four-year NIH award researchers from Georgia Tech, Northeastern University, and the Boston Medical Center are collaborating with the \u003Ca href=\u0022https:\/\/www.bmatenpoint.org\/\u0022\u003EBlack Ministerial Alliance of Greater Boston TenPoint\u003C\/a\u003E to develop a virtual healthcare counselor that answers questions and addresses concerns specifically related to the Covid-19 vaccine.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe counselor, known as Clara, is an animated character that simulates face-to-face counseling sessions using verbal and non-verbal communication cues. Along with the ability to share relevant Biblical scripture or recall details from previous conversations with an individual, Clara delivers personally relevant information about the vaccine and engages users about their specific hesitations about getting the vaccine.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to the researchers, there are a number of benefits to this technology. 
Along with being available 24 hours a day, seven days a week, no insurance is needed and it may help people feel less inhibited asking questions about the Covid-19 vaccine.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUltimately, the goal is to create a dialogue that empowers users to move toward making more informed healthcare decisions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EClara, technically known as an embodied conversational agent (ECA), is based on an existing platform that is designed to promote spiritual and physical wellbeing in underserved communities. More than 600 people from 12 Boston churches are expected to participate in the study, which is the first of its kind to explore mobile health interventions using ECAs in a Black community church context.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Churches are well known as effective sites for health intervention for this demographic because of the historically important role the church has played in Black communities,\u0026rdquo; said \u003Cstrong\u003EAndrea Grimes Parker\u003C\/strong\u003E, a co-primary investigator and an associate professor in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003EGeorgia Tech School of Interactive Computing\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite this history, Parker says very little research has been done examining how technology that is similar to Clara might amplify existing church efforts to address higher mortality rates, lower vaccination rates, and other disproportional impacts that Covid-19 has had in Black communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This intervention is about giving people space where they feel safe and comfortable to explore their questions and concerns around the vaccine. 
We are deliberately recruiting folks that haven\u0026rsquo;t had the Covid-19 vaccine and that have concerns so we can look into questions like, \u0026lsquo;have you felt stigmatized in your community because you haven\u0026rsquo;t gotten the vaccine,\u0026rsquo; or \u0026lsquo;what has inhibited you from getting the shot,\u0026rsquo;\u0026rdquo; said Parker.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EParker\u0026rsquo;s role in the project is leading the community-based design of the app and user experience evaluation of the program. She is also leading qualitative data collection with church members, pastors, and health leaders to better understand the barriers to vaccine uptake, as well as\u0026nbsp;the existing strengths in church communities that can support vaccine uptake.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The dialogue is being developed in partnership with the community to ensure everything makes sense in the specific church context, and that it is culturally relevant,\u0026rdquo; said Parker, who is a Georgia Tech alumna (Ph.D. HCC 11).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe program is off to a good start. According to co-primary investigator and Northeastern University computer science professor \u003Cstrong\u003ETimothy Bickmore\u003C\/strong\u003E, the feedback so far has been positive.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Even for individuals who don\u0026rsquo;t have high computer literacy, the program is still easy to use. We\u0026rsquo;ve gotten great feedback. 
It seems to be working, and most people like it,\u0026rdquo; Bickmore said in an \u003Ca href=\u0022https:\/\/news.northeastern.edu\/2021\/05\/13\/this-virtual-nurse-can-tell-you-a-prayer-and-where-to-get-a-coronavirus-vaccine\/\u0022\u003Earticle\u003C\/a\u003E from Northeastern University.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlthough expanding beyond the current scope is not in the plans for the program now, Parker thinks using ECAs to improve vaccination uptake rates would translate well to other communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Being in Atlanta and seeing even lower vaccine uptake here, I personally would love to explore adapting the intervention for Georgia. Much of the hesitation and concerns we\u0026rsquo;re seeing around the vaccine as part of the study are not specific to Massachusetts,\u0026rdquo; said Parker.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project is formally titled \u003Cem\u003ECommunity-based Design and Evaluation of a Conversational Agent to Promote SARS-COV2 Vaccination in Black Churches\u003C\/em\u003E. 
The funding for the project (1R01MD016882-01)\u0026nbsp;is administered by the National Institute on Minority Health and Health Disparities.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech is participating in an NIH-funded project to help address vaccination hesitancy and increase Covid-19 vaccination rates."}],"uid":"32045","created_gmt":"2021-07-23 15:35:19","changed_gmt":"2021-08-03 14:28:11","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-07-26T00:00:00-04:00","iso_date":"2021-07-26T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"649013":{"id":"649013","type":"image","title":"Virtual counselor for Covid vaccine hesitancy","body":null,"created":"1627052679","gmt_created":"2021-07-23 15:04:39","changed":"1627052679","gmt_changed":"2021-07-23 15:04:39","alt":"Clara a virtual healthcare counselor that answers questions and addresses concerns specifically related to the Covid-19 vaccine.","file":{"fid":"246411","name":"Timothy_Bickmore_09.JPG","image_path":"\/sites\/default\/files\/images\/Timothy_Bickmore_09.JPG","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Timothy_Bickmore_09.JPG","mime":"image\/jpeg","size":719852,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Timothy_Bickmore_09.JPG?itok=xPY_PLlq"}}},"media_ids":["649013"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"},{"id":"66442","name":"MS HCI"}],"categories":[{"id":"151","name":"Policy, Social Sciences, and Liberal Arts"}],"keywords":[{"id":"2076","name":"NIH"},{"id":"188331","name":"andrea parker"},{"id":"46361","name":"GT computing"},{"id":"186714","name":"Covid-19 vaccine"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker. Communications Mgr. II\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=NIH%20project\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"649137":{"#nid":"649137","#data":{"type":"news","title":"Georgia Tech Will Help Bring Critical Advancements to Online Learning as Part of Multimillion Dollar NSF Grant","body":[{"value":"\u003Cp\u003EGeorgia Tech is a major partner in a new \u003Ca href=\u0022https:\/\/www.nsf.gov\/\u0022\u003ENational Science Foundation\u003C\/a\u003E (NSF) \u003Ca href=\u0022https:\/\/www.nsf.gov\/funding\/pgm_summ.jsp?pims_id=505686\u0022\u003EArtificial Intelligence Research Institute\u003C\/a\u003E focused on adult learning in online education, it was announced today. Led by the Georgia Research Alliance, the National AI Institute for Adult Learning in Online Education (ALOE) is one of 11 new NSF institutes created as part of an investment totaling $220 million.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ALOE Institute will develop new AI theories and techniques for enhancing the quality of online education for lifelong learning and workforce development. According to some projections, about 100 million American workers will need to be reskilled or upskilled over the next decade. 
With the increase of AI and automation, said Co-Principal Investigator and Georgia Tech lead Professor \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E, many jobs will be redefined.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There will be some loss of jobs, but mostly we will see individuals needing to learn a new skill to get a new job or to advance their career,\u0026rdquo; said Goel, a professor of computer science and human-centered computing in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) and the chief scientist with the \u003Ca href=\u0022https:\/\/c21u.gatech.edu\/\u0022\u003ECenter for 21\u003Csup\u003Est\u003C\/sup\u003E Century Universities\u003C\/a\u003E (C21U). \u0026ldquo;So, how do you help 100 million workers reskill or upskill in 10 years? Because AI is in part responsible for this need, it is our belief it should also be responsible for finding a solution.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat is the goal of this project, which will be led by principal investigator \u003Cstrong\u003EMyk Garn\u003C\/strong\u003E, assistant vice chancellor for New Models of Learning at the University System of Georgia and senior advisor to the \u003Ca href=\u0022https:\/\/gra.org\/\u0022\u003EGeorgia Research Alliance\u003C\/a\u003E (GRA).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Online education for adults has enormous implications for tomorrow\u0026rsquo;s workforce,\u0026rdquo; Garn said. \u0026ldquo;Yet, serious questions remain about the quality of online learning and how best to teach adults online. Artificial intelligence offers a powerful technology for dramatically improving the quality of online learning and adult education.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo do that successfully, the education must be personalized and scaled to unprecedented levels. 
Educating 100 million people in online environments will, of course, require far more time and energy than in-person educators can offer their students. That is where AI comes into play.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers will build new AI techniques that can adequately and efficiently train \u003Cem\u003Eother\u003C\/em\u003E AI agents to interact with humans in a classroom setting, similar to the virtual teaching assistant Jill Watson that Goel has used in his online computer science classes for the past five years. This will help satisfy the scalability requirement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That\u0026rsquo;s the fundamental advancement in AI,\u0026rdquo; Goel said. \u0026ldquo;A human can train an AI agent in just a few hours how to teach other AI agents on how to interact with humans on various subjects.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo satisfy the need for personalized AI, researchers will train machines to have a mutual theory of mind with their human counterparts. In other words, there will be a greater understanding by both machine and human of the others\u0026rsquo; needs, knowledge, and expectations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our vision is to develop AI agents that achieve a mutual understanding of learning expectations, outcomes, and methods between students and teachers,\u0026rdquo; said Alex Endert, an assistant professor in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/cc.gatech.edu\u0022\u003ECollege of Computing\u003C\/a\u003E who will help the team analyze and understand data from the project. \u0026ldquo;Along with my students, I look forward to developing visual analytic interfaces that serve that purpose to foster trust and interpretability of AI for this domain.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUltimately, the hope is that education becomes more available, affordable, achievable, and, thereby, equitable. 
Such an expansive project, understandably, requires many kinds of expertise from many people. In addition to Endert and Goel, who will be executive director of the ALOE Institute, a host of Georgia Tech faculty will participate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESenior Georgia Tech members of the ALOE team include \u003Cstrong\u003EStephen Harmon\u003C\/strong\u003E (Industrial Design and C21U), \u003Cstrong\u003EMichael Hoffmann\u003C\/strong\u003E (Public Policy), \u003Cstrong\u003EDavid Joyner\u003C\/strong\u003E (Online Master of Science in Computer Science), \u003Cstrong\u003ERuth Kanfer\u003C\/strong\u003E (Psychology), \u003Cstrong\u003EBrian Magerko\u003C\/strong\u003E (Language, Media, and Culture), \u003Cstrong\u003EKeith McGreggor\u003C\/strong\u003E (IC and VentureLab), \u003Cstrong\u003EChaohua Ou\u003C\/strong\u003E (Center for Teaching and Learning), and \u003Cstrong\u003ESpencer Rugaber\u003C\/strong\u003E (Computer Science).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther partners in the ALOE Institute include Arizona State University, Drexel University, Georgia State University, Harvard University, the Technical College System of Georgia, the University of North Carolina at Greensboro, IMS Global, Boeing, IBM, and Wiley.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/research.gatech.edu\/georgia-tech-joins-us-national-science-foundation-advance-ai-research-and-education\u0022\u003EGeorgia Tech is a key partner in two additional institutes\u003C\/a\u003E in partnership with the U.S. Department of Agriculture, the National Institute of Food and Agriculture, the U.S. Department of Homeland Security Science \u0026amp; Technology Directorate, and the U.S. Department of Transportation Federal Highway Administration. 
Georgia Tech will lead the AI Institute for Advances in Optimization (AI4Opt) and the AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING), the latter of which is led by College of Computing Associate Professor Sonia Chernova to support aging-related issues.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Led by the Georgia Research Alliance, the National AI Institute for Adult Learning in Online Education (ALOE) is one of 11 new NSF institutes created as part of an investment totaling $220 million."}],"uid":"33939","created_gmt":"2021-07-29 15:28:18","changed_gmt":"2021-07-29 15:28:18","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-07-29T00:00:00-04:00","iso_date":"2021-07-29T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"611004":{"id":"611004","type":"image","title":"Online learning stock","body":null,"created":"1536259875","gmt_created":"2018-09-06 18:51:15","changed":"1536259875","gmt_changed":"2018-09-06 18:51:15","alt":"Fingers typing on a laptop keyboard","file":{"fid":"232624","name":"online learning.jpg","image_path":"\/sites\/default\/files\/images\/online%20learning.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/online%20learning.jpg","mime":"image\/jpeg","size":68702,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/online%20learning.jpg?itok=CYZYPb3r"}}},"media_ids":["611004"],"related_links":[{"url":"https:\/\/research.gatech.edu\/georgia-tech-joins-us-national-science-foundation-advance-ai-research-and-education","title":"Georgia Tech Joins the U.S. 
National Science Foundation to Advance AI Research and Education"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"649087":{"#nid":"649087","#data":{"type":"news","title":"New Browser-Based Chart Builder Gives Line Graphs, Scatterplots Their Very Own Audio Track","body":[{"value":"\u003Cp\u003EA new multimodal data visualization tool for the web produces charts with a twist \u0026ndash; these charts also represent information using carefully designed sounds for a richer, more powerful, and accessible way to experience data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EReleased by the Georgia Institute of Technology and open-source web application Highcharts, \u003Ca href=\u0022https:\/\/sonification.highcharts.com\/#\/\u0022\u003EHighcharts Sonification Studio (HSS)\u003C\/a\u003E\u0026nbsp;enables users to enter data into a spreadsheet to create traditional visual charts such as line graphs, scatterplots, and bar charts. 
At the same time, the tool creates non-speech audio tracks based on the data, a process known as sonification.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The goal of this tool is to provide a simple, intuitive, and accessible way for users to import, edit, visualize, and sonify their data, and then export the results to a useful format,\u0026rdquo; said Professor \u003Cstrong\u003EBruce Walker\u003C\/strong\u003E, director of \u003Ca href=\u0022http:\/\/sonify.psych.gatech.edu\/\u0022\u003EGeorgia Tech\u0026rsquo;s Sonification Lab\u003C\/a\u003E. \u0026ldquo;We want users to be able to use the tool without having to download software or write code, and without prior sonification expertise.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe data visualization+sonification approach lets users explore data with visual, auditory, or both modalities. This can lead to novel discoveries in its own right, and can also support users who may have limited ability to see or hear a given display.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Visually impaired readers find sonification and auditory graphs to be very useful for getting an overview of the data, as well as identifying patterns, outliers, and points of interest,\u0026rdquo; said Walker.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EBrandon Biggs\u003C\/strong\u003E, a researcher\u0026nbsp;and entrepreneur who is blind, highlighted the software\u0026rsquo;s ability to allow users such as himself to create a graph that he can trust will be visually appealing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I love how accessible all the components are with a screen-reader and how easy it is to create a sonification,\u0026rdquo; Biggs said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd for all users\u0026mdash;even those who can see\u0026mdash;sound can communicate information without requiring visual attention. 
For instance, instead of looking at a weather forecast or a chart of a stock price on a screen, imagine being able to hear the ups and downs played like a melody, with additional sounds highlighting points of interest in the data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHSS is the culmination of a multi-year collaboration between Highsoft\u0026mdash;the makers of Highcharts\u0026mdash;and the Georgia Tech Sonification Lab. The goal of the collaboration is to develop an extensible, accessible, online spreadsheet and multimodal graphing platform for the auditory display, assistive technology, and STEM education community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWalker said that HSS is a systematic re-implementation of his lab\u0026rsquo;s Sonification Sandbox to integrate Highsoft\u0026rsquo;s industry-leading web-based Highcharts technology with Georgia Tech\u0026rsquo;s expertise in sonification and interactive auditory displays.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe tool is open-sourced under the MIT License to allow for extensions and forks in development from the community\u0026nbsp;and to ensure the tool is available to all. A Highcharts license is required for commercial use of the tool, but otherwise, usage is completely free.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This system will complement other tools and libraries actively used by the auditory display research community and help bring sonification to an even wider audience, especially in the visualization community and in situations of limited resources,\u0026rdquo; said \u003Cstrong\u003E\u0026Oslash;ystein Moseng\u003C\/strong\u003E, the Highcharts developer leading the implementation of the HSS.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA paper describing the research and development of the open-source tool is part of the 26\u003Csup\u003Eth\u003C\/sup\u003E annual International Conference on Auditory Displays (ICAD.org), which took place June 25-28, 2021. 
The paper \u003Cem\u003EHighcharts Sonification Studio: An Online, Open-Source, Extensible, And Accessible Data Sonification Tool\u003C\/em\u003E is co-authored by Stanley Cantrell, Walker, and Moseng.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Highcharts Sonification Studio web app, source code, and developer community are available at \u003Ca href=\u0022https:\/\/sonification.highcharts.com\u0022\u003Ehttps:\/\/sonification.highcharts.com\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech researchers have created a data visualization plus sonification approach that lets users explore data with visual, auditory, or both modalities."}],"uid":"32045","created_gmt":"2021-07-27 20:44:50","changed_gmt":"2021-07-28 15:20:25","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-07-27T00:00:00-04:00","iso_date":"2021-07-27T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"649088":{"id":"649088","type":"image","title":"Data vis sonification tool","body":null,"created":"1627422780","gmt_created":"2021-07-27 21:53:00","changed":"1627498800","gmt_changed":"2021-07-28 19:00:00","alt":"A user working with accessible browser-based Highcharts Sonification Studio software.","file":{"fid":"246435","name":"sonify-2.jpg","image_path":"\/sites\/default\/files\/images\/sonify-2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/sonify-2.jpg","mime":"image\/jpeg","size":387592,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/sonify-2.jpg?itok=sxM8QM8z"}}},"media_ids":["649088"],"related_links":[{"url":"https:\/\/youtu.be\/VdKcyGXLyvg","title":"Hearing the Data"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of 
Interactive Computing"}],"categories":[{"id":"135","name":"Research"}],"keywords":[{"id":"170772","name":"Sonification"},{"id":"438","name":"data"},{"id":"7257","name":"visualization"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJosh Preston, Research Communications Mgr.\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:Jpreston@cc.gatech.edu?subject=Sonification\u0022\u003EJpreston@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["Jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"648905":{"#nid":"648905","#data":{"type":"news","title":"Georgia Tech Top Contributor to Research at International Conference on Machine Learning","body":[{"value":"\u003Cp\u003EGeorgia Tech researchers in the College of Engineering and College of Computing are presenting their work at the International Conference on Machine Learning (ICML), which runs through Saturday.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EICML is the leading international academic conference in machine learning. Along with NeurIPS and ICLR, it is one of the three primary conferences of high impact in machine learning and artificial intelligence research. It is supported by the International Machine Learning Society (IMLS).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExplore Georgia Tech people, research abstracts, and when authors will present (Tues-Thurs) in an interactive data graphic of \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/GeorgiaTechatICML2021\/Dashboard1?:language=en-US\u0026amp;:display_count=n\u0026amp;:origin=viz_share_link\u0022\u003E\u003Cstrong\u003EGeorgia Tech at ICML 2021\u003C\/strong\u003E\u003C\/a\u003E. 
Also explore the whole program in a second data graphic: \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/ICML2021\/Dashboard12?:showVizHome=no\u0022\u003E\u003Cstrong\u003EWho\u0026rsquo;s Who at ICML 2021\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s work is represented in 2% of the program with 22 papers in a range of topics including (asterisk denotes a single paper):\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EApplications (CV and NLP)*\u003C\/li\u003E\r\n\t\u003Cli\u003EApplications (NLP)*\u003C\/li\u003E\r\n\t\u003Cli\u003EDeep Learning Algorithms*\u003C\/li\u003E\r\n\t\u003Cli\u003EDeep Learning Theory*\u003C\/li\u003E\r\n\t\u003Cli\u003EDeep Reinforcement Learning*\u003C\/li\u003E\r\n\t\u003Cli\u003ELearning Theory \u0026ndash; 2 papers\u003C\/li\u003E\r\n\t\u003Cli\u003EOptimal Transport \u0026ndash; 2 papers\u003C\/li\u003E\r\n\t\u003Cli\u003EOptimization (Convex)*\u003C\/li\u003E\r\n\t\u003Cli\u003EOptimization and Algorithms \u0026ndash; 2 papers\u003C\/li\u003E\r\n\t\u003Cli\u003EPrivacy*\u003C\/li\u003E\r\n\t\u003Cli\u003EReinforcement Learning \u0026ndash; 2 papers\u003C\/li\u003E\r\n\t\u003Cli\u003EReinforcement Learning and Optimization*\u003C\/li\u003E\r\n\t\u003Cli\u003EReinforcement Learning and Planning*\u003C\/li\u003E\r\n\t\u003Cli\u003EReinforcement Learning Theory*\u003C\/li\u003E\r\n\t\u003Cli\u003ETime Series \u0026ndash; 4 papers\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech researchers in the College of Engineering and College of Computing are presenting their work at the International Conference on Machine Learning (ICML), which runs through Saturday."}],"uid":"33939","created_gmt":"2021-07-20 13:20:02","changed_gmt":"2021-07-21 05:00:40","author":"David 
Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-07-20T00:00:00-04:00","iso_date":"2021-07-20T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"648904":{"id":"648904","type":"image","title":"ICML 2021","body":null,"created":"1626787175","gmt_created":"2021-07-20 13:19:35","changed":"1626787175","gmt_changed":"2021-07-20 13:19:35","alt":"","file":{"fid":"246336","name":"ICML2021.jpeg","image_path":"\/sites\/default\/files\/images\/ICML2021.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ICML2021.jpeg","mime":"image\/jpeg","size":164013,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ICML2021.jpeg?itok=e3cM_-yn"}}},"media_ids":["648904"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJosh Preston\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003Ejpreston@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"648864":{"#nid":"648864","#data":{"type":"news","title":"Georgia Tech Faculty Hold Workshop to Improve Integration of Ethics into Courses","body":[{"value":"\u003Cp\u003EAs computer science becomes more ingrained into various areas of study and, indeed, our daily lives, an eye on the implications of innovation is needed, experts at Georgia Tech say.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo help students begin thinking about ethics with 
regards to research, faculty at Georgia Tech \u0026ndash; in conjunction with Mozilla \u0026ndash; held the first workshop on integrating ethics and responsible computing into courses this summer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe workshop was a collaboration between faculty researchers at Georgia Tech in both the Ethics, Technology, and Human Interaction Center (ETHICx) and Computing and Society, as well as Mozilla. The workshop received a strong response, which organizers say indicates a growing desire for ethics at the center of computer science courses.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMembers of the College of Computing\u0026rsquo;s Division of Computing Instruction, the Schools of Interactive Computing, Computational Science and Engineering, Computer Science, and Electrical and Computer Engineering, along with attendees from Georgia State all participated in the online workshop.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s really gratifying to have broad representation because it demonstrates the desire for people from so many different areas to think more deeply about the role of ethics in our education,\u0026rdquo; said \u003Cstrong\u003EEllen Zegura\u003C\/strong\u003E, professor in the School of Computer Science and Fleming Chair in Telecommunications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe goal of the workshop was to help instructors consider ways in which to implement ethics as a central piece in courses not just later in a student\u0026rsquo;s study, but from the very beginning. There\u0026rsquo;s an issue of urgency, Zegura said, that needed to be considered.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Computing has reached a point where it is being used for critical decision making that really affects people\u0026rsquo;s lives,\u0026rdquo; she said. \u0026ldquo;The need to use computing responsibly has moved up incredibly. 
And if we don\u0026rsquo;t talk about ethics early in the curriculum, we\u0026rsquo;re sending a message that it\u0026rsquo;s not important. If you only hear about it in one course and it\u0026rsquo;s later in your career, then what does that say about the importance? Students see that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile official plans aren\u0026rsquo;t currently in place to continue the program, Zegura said the idea is to continue this as a series of activities responsive to people\u0026rsquo;s needs, specifically those who want to do a better job of embedding ethics into their computer science curriculum.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech graduate \u003Cstrong\u003EKathy Pham (CS \u0026rsquo;07, MS CS \u0026rsquo;09)\u003C\/strong\u003E, now at Mozilla, has been instrumental in engaging the computer science community at 15-20 universities in focusing on ethics, Zegura said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/playlist?list=PLF0CYxpffvKx5W-y_xJ9xhrGapmeF70Og\u0022\u003EPortions of the workshop can be viewed on YouTube here.\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"To help students begin thinking about ethics with regards to research, faculty at Georgia Tech \u2013 in conjunction with Mozilla \u2013 held the first workshop on integrating ethics and responsible computing into courses this summer."}],"uid":"33939","created_gmt":"2021-07-19 13:16:20","changed_gmt":"2021-07-19 13:16:20","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-07-19T00:00:00-04:00","iso_date":"2021-07-19T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"644759":{"id":"644759","type":"image","title":"Ethics stock image","body":null,"created":"1614365518","gmt_created":"2021-02-26 
18:51:58","changed":"1614365518","gmt_changed":"2021-02-26 18:51:58","alt":"","file":{"fid":"244800","name":"AdobeStock_117212757.jpeg","image_path":"\/sites\/default\/files\/images\/AdobeStock_117212757.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/AdobeStock_117212757.jpeg","mime":"image\/jpeg","size":725547,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/AdobeStock_117212757.jpeg?itok=3tPD5rC9"}}},"media_ids":["644759"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"645832":{"#nid":"645832","#data":{"type":"news","title":"Assistant Professor Earns 2020 Salesforce AI Research Grant","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Assistant Professor \u003Cstrong\u003EDiyi Yang\u003C\/strong\u003E was named a \u003Ca href=\u0022https:\/\/blog.einstein.ai\/celebrating-the-winners-of-the-third-annual-salesforce-ai-research-grant\/\u0022\u003ESalesforce AI Research Grant Winner for 2020\u003C\/a\u003E. 
One of seven winners of the award, she will receive a $50,000 grant to advance her work. It is the third year the grant has been provided by Salesforce.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYang\u0026rsquo;s research, which is being led by her Ph.D. student \u003Cstrong\u003EJiaao Chen\u003C\/strong\u003E, aims to alleviate dependence of supervised models on labeled data via data augmentation approaches. Supervised learning is the machine learning task of learning a function that maps an input to an output based on example pairs, inferring the function from training data that has been tagged with identifying properties or characteristics (labeled data).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe hope is that they may improve upon the ability to transfer models from one setting to another despite the relative lack of intensive training examples.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the era of deep learning, natural language processing (NLP) has achieved extremely good performances in most data-intensive settings,\u0026rdquo; Yang said. \u0026ldquo;However, when there are only one or a few training examples, supervised deep learning models often fail. 
This strong dependence on labeled data largely prevents neural network models from being applied to new settings or real-world situations.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYang\u0026rsquo;s group has already published a couple of papers in this field, and she said the Salesforce grant will further support efforts to extend the work to broader contexts, especially when NLP tasks involve complicated outputs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These examples might include performing named entity recognition that finds the important information in a text, or semantic parsing that converts a natural language sentence into a structured command,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYou can read previous papers on the subject at the links below:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dyang888\/docs\/mixtext_acl_2020.pdf\u0022\u003E\u003Cem\u003EMixText: Linguistically-Informed Interpolation of Hidden Space for Semi-Supervised Text Classification (Jiaao Chen, Zichao Yang, Diyi Yang)\u003C\/em\u003E\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2010.01677.pdf\u0022\u003E\u003Cem\u003ELocal Additivity Based Data Augmentation for Semi-supervised NER (Jiaao Chen, Zhenghui Wang, Ran Tian, Zichao Yang, Diyi Yang)\u003C\/em\u003E\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EYang was chosen from a pool of more than 180 proposals from over 30 countries.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Yang\u2019s research, which is being led by her Ph.D. 
student Jiaao Chen, aims to alleviate dependence of supervised models on labeled data via data augmentation approaches."}],"uid":"33939","created_gmt":"2021-03-29 14:42:23","changed_gmt":"2021-03-29 14:42:23","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-03-29T00:00:00-04:00","iso_date":"2021-03-29T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"630588":{"id":"630588","type":"image","title":"Diyi Yang 2020","body":null,"created":"1578338255","gmt_created":"2020-01-06 19:17:35","changed":"1578338255","gmt_changed":"2020-01-06 19:17:35","alt":"","file":{"fid":"240080","name":"Diyi_Yang.jpg","image_path":"\/sites\/default\/files\/images\/Diyi_Yang.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Diyi_Yang.jpg","mime":"image\/jpeg","size":194720,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Diyi_Yang.jpg?itok=T-Kv1Jqp"}}},"media_ids":["630588"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"645744":{"#nid":"645744","#data":{"type":"news","title":"OMSCS Alumnus Goes from TA to College 
Instructor","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.uma.edu\/directory\/staff\/rocko-graziano\/\u0022\u003E\u003Cstrong\u003ERocko Graziano\u003C\/strong\u003E\u003C\/a\u003E had spent 30 years working his way up the ladder in private sector IT, but knew he wanted to do something different before he retired. Georgia Tech\u0026rsquo;s Online Master of Science in Computer Science (OMSCS) let him transition into teaching.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;d dabbled and done about everything you can in an IT career in those years\u0026mdash;from software development security to leading the construction of a LEED-certified data center\u0026mdash;all while getting married and raising a family in Maine,\u0026rdquo; Graziano said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFinding OMSCS\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter three decades in the private sector, Graziano wondered what would be next for him and his wife, Robyn, a high school math teacher. When Robyn started an online master\u0026rsquo;s in mathematics, Graziano looked into going back to school himself and transitioning to a career where they could both have summers off.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlthough he had a lot of practical experience, Graziano hadn\u0026rsquo;t formally studied computer science since his bachelor\u0026rsquo;s at Boston College. All that changed when he found and applied to OMSCS in 2015.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELearning the content was almost more challenging than transitioning to student life again.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Comparing my undergraduate days to OMSCS is like comparing a horse and buggy to a Tesla,\u0026rdquo; he said. 
\u0026ldquo;The power of personal computers these days, the vast amounts of data we have to work with, and number of things you can download for free over internet\u0026mdash;none of that existed 30 years ago and now it\u0026rsquo;s open source.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite planning to study interactive intelligence, he fell in love with the hands-on application of algorithms after taking Professor \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/home\/thad\/\u0022\u003E\u003Cstrong\u003EThad Starner\u0026rsquo;s\u003C\/strong\u003E\u003C\/a\u003E artificial intelligence (AI) class. Graziano switched to the machine learning track and continued to work with Starner on a special research project that uses AI to detect and combat plagiarism at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERediscovering Teaching\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work resonated with Graziano because he knew he also wanted to teach once he earned his degree. He had been a teaching assistant (TA) as an undergraduate and occasionally hosted training seminars in his corporate career. After enjoying graduate algorithms, he applied to be a TA for the class.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I really liked the material and wanted to give back,\u0026rdquo; he said. \u0026ldquo;I knew teaching was something I wanted to do and that a degree from Georgia Tech would make it possible.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBecoming a TA in OMSCS set up Graziano\u0026rsquo;s career change as a college-level educator. 
Learning how to manage a class at scale \u0026ndash; drafting exams and grading rubrics, preparing office hours, and supporting hundreds of students across multiple time zones \u0026ndash; gave him valuable experience that jump-started his transition to academia.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThroughout his OMSCS career, Graziano was serving on the Board of Visitors at the University of Maine, Augusta (UMA) and knew the school had opportunities in its new data sciences program. After he graduated from OMSCS in 2019, he joined UMA as an adjunct in the Fall of 2020 and became a full-time lecturer this past January.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGraziano\u0026rsquo;s OMSCS experience prepared him for UMA\u0026rsquo;s distance learning degree structure, where more than 60 percent of credits are delivered online.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I know what it\u0026rsquo;s like to watch asynchronous lectures and submit assignments online, so the materials I\u0026rsquo;m building for my classes replicate my good experiences from OMSCS,\u0026rdquo; he said. \u0026ldquo;The majority of OMSCS teachers went out of their way to make lectures engaging and build up discussion through a series of videos.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite working full-time at UMA, Graziano is still an OMSCS TA. It\u0026rsquo;s important to him to stay connected to the program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was able to retire from the private sector when I wanted,\u0026rdquo; he said. 
\u0026ldquo;I knew I had another 10 years in my career, but I just wanted to do something completely different and OMSCS was the gateway.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Rocko Graziano had spent 30 years working his way up the ladder in private sector IT, but knew he wanted to do something different before he retired."}],"uid":"34541","created_gmt":"2021-03-25 18:14:57","changed_gmt":"2021-03-25 20:43:04","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-03-25T00:00:00-04:00","iso_date":"2021-03-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"645758":{"id":"645758","type":"image","title":"Rocko Granzian","body":null,"created":"1616704734","gmt_created":"2021-03-25 20:38:54","changed":"1616704734","gmt_changed":"2021-03-25 20:38:54","alt":"Rocko Granziano","file":{"fid":"245158","name":"IMG_6105.jpeg","image_path":"\/sites\/default\/files\/images\/IMG_6105.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_6105.jpeg","mime":"image\/jpeg","size":493238,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_6105.jpeg?itok=GpM20I0a"}}},"media_ids":["645758"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:tess.malone@cc.gatech.edu\u0022\u003Etess.malone@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"644380":{"#nid":"644380","#data":{"type":"news","title":"Ph.D. Student Earns 2021 Focus Fellowship from Georgia Tech\u0027s Office of Minority Educational Development","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing (IC) Ph.D. student \u003Cstrong\u003EKantwon Rogers\u003C\/strong\u003E was awarded a 2021 Focus Fellowship by Georgia Tech\u0026rsquo;s \u003Ca href=\u0022https:\/\/omed.gatech.edu\/\u0022\u003EOffice of Minority Educational Development\u003C\/a\u003E (OMED).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award recognizes participants in the \u003Ca href=\u0022https:\/\/focus.gatech.edu\/\u0022\u003EFocus Program\u003C\/a\u003E who have demonstrated academic excellence and community leadership and have been granted admittance to a graduate program. The Focus Program aims to introduce minority students to graduate school in hopes of increasing the number who pursue higher degrees.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERogers attended the Focus Program five years ago as an undergraduate student at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It helped me learn about grad school and set me up for success,\u0026rdquo; Rogers said of the program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award, which carries a prize of up to $2,500 per student depending on available funds and the number of awardees, is not tied to specific research but recognizes overall accomplishments. In an application essay, Rogers shared how OMED was pivotal to his success at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs an undergraduate, he participated in the \u003Ca href=\u0022https:\/\/omed.gatech.edu\/programs\/challenge\u0022\u003EChallenge Program\u003C\/a\u003E, a five-week academic residential program for incoming first-year students. 
Later, he became a counselor in the same program, an OMED tutor, a Focus participant, a Focus panelist, and last summer a computer science (CS) instructor in the Challenge program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was really spooky because I was teaching the new Challenge students in the exact same room that I sat in when I was learning CS for the first time in Challenge a decade ago,\u0026rdquo; Rogers said. \u0026ldquo;Truly full circle. OMED has truly been a foundation for me here at Georgia Tech, and I am eternally grateful.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERogers\u0026rsquo; research focuses on human-robot interaction, investigating the effects that intelligent agent verbal deception has on human interaction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Animals deceive. Humans deceive. Should robots and AI deceive?\u0026rdquo; Rogers poses in his research tagline.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAdditionally, the work aims to provide AI systems the ability to autonomously produce contextually meaningful and successfully deceptive utterances while determining when it is appropriate to verbally deceive humans.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe is advised by IC Chair \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The award recognizes participants in the Focus Program who have demonstrated academic excellence, community leadership, and been granted admittance to a graduate program."}],"uid":"33939","created_gmt":"2021-02-17 16:57:08","changed_gmt":"2021-02-17 17:08:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-02-17T00:00:00-05:00","iso_date":"2021-02-17T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"585962":{"id":"585962","type":"image","title":"Kantwon Rogers 
2","body":null,"created":"1484253211","gmt_created":"2017-01-12 20:33:31","changed":"1484253211","gmt_changed":"2017-01-12 20:33:31","alt":"","file":{"fid":"223340","name":"_MG_4285.jpg","image_path":"\/sites\/default\/files\/images\/_MG_4285.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/_MG_4285.jpg","mime":"image\/jpeg","size":173174,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/_MG_4285.jpg?itok=8se09y1V"}}},"media_ids":["585962"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"643612":{"#nid":"643612","#data":{"type":"news","title":"Georgia Tech Research Highlights Premier Artificial Intelligence Conference","body":[{"value":"\u003Cp\u003EGeorgia Tech faculty and student researchers will figure prominently into the proceedings of the \u003Ca href=\u0022https:\/\/aaai.org\/Conferences\/AAAI-21\/\u0022\u003E35\u003Csup\u003Eth\u003C\/sup\u003E AAAI Conference on Artificial Intelligence\u003C\/a\u003E, being held virtually from Feb. 
2-9.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETwenty-three members of the Georgia Tech community contributed to 11 papers that will be presented at the conference, while two longtime contributors will join the ranks of the prestigious AAAI Fellows program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Chair \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E and Professor \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E, 2021 inductees to the fellowship, join \u003Ca href=\u0022http:\/\/cc.gatech.edu\/\u0022\u003ECollege of Computing\u003C\/a\u003E Dean \u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E (elected in 2019) and Regents\u0026rsquo; Professor Emerita \u003Cstrong\u003EJanet Kolodner\u003C\/strong\u003E (elected in 1992), giving the Institute four members. The program recognizes individuals who have made significant, sustained contributions to the field of artificial intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E[\u003Cstrong\u003ERelated news:\u003C\/strong\u003E \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/643355\/ic-professors-howard-goel-named-2021-aaai-fellows\u0022\u003EIC Professors Howard, Goel Named 2021 AAAI Fellows\u003C\/a\u003E]\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENotable research among the 11 papers accepted to AAAI 2021 includes work from a multi-institution team working to understand and improve forecasting models of influenza-like illnesses such as Covid-19. 
Effective forecasting is even more challenging amidst the current pandemic, when counts are affected by various factors such as symptomatic similarities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe approach in this paper steers historical forecasting models to new scenarios where the flu and Covid-19 co-exist, demonstrating success in adaptation without sacrificing overall performance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s \u003Cstrong\u003EAlexander Rodr\u0026iacute;guez\u003C\/strong\u003E and \u003Cstrong\u003EB. Aditya Prakash\u003C\/strong\u003E are co-authors on the paper, along with \u003Cstrong\u003ENikhil Muralidhar\u003C\/strong\u003E, \u003Cstrong\u003EAnika Tabassum\u003C\/strong\u003E, and \u003Cstrong\u003ENaren Ramakrishnan\u003C\/strong\u003E of Virginia Tech, and \u003Cstrong\u003EBijaya Adhikari\u003C\/strong\u003E of the University of Iowa.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E[\u003Cstrong\u003ERelated news:\u003C\/strong\u003E \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/642638\/research-team-wins-two-covid-19-challenges-one-week\u0022\u003EResearch Team Wins Two Covid-19 Challenges in One Week\u003C\/a\u003E]\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExplore Georgia Tech\u0026rsquo;s presence in this visualization and view a list of papers below.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/AAAI2021-GeorgiaTechAIresearch\/Dashboard1?:language=en\u0026amp;:display_count=y\u0026amp;:origin=viz_share_link:showVizHome=no\u0022\u003EINTERACTIVE VISUALIZATION: Georgia Tech @ AAAI 2021\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.medrxiv.org\/content\/10.1101\/2020.09.28.20203109v2\u0022\u003EDeepCOVID: An Operational Deep Learning-driven Framework for Explainable Real-time COVID-19 Forecasting\u003C\/a\u003E (Alexander Rodr\u0026iacute;guez, Anika Tabassum, Jiaming Cui, Jiajia Xie, Javen Ho, Pulak Agarwal, Bijaya Adhikari, B. 
Aditya Prakash)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.medrxiv.org\/content\/10.1101\/2020.09.28.20203109v2\u0022\u003ESemantic MapNet: Building Allocentric Semantic Maps and Representations from Egocentric Views\u003C\/a\u003E (Vincent Cartillier, Zhile Ren, Neha Jain, Stefan Lee, Irfan Essa, Dhruv Batra)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2009.11407.pdf\u0022\u003ESteering a Historical Disease Forecasting Model Under a Pandemic: Case of Flu and COVID-19\u003C\/a\u003E (Alexander Rodr\u0026iacute;guez, Nikhil Muralidhar, Bijaya Adhikari, Anika Tabassum, Naren Ramakrishnan, B. Aditya Prakash)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2009.11407.pdf\u0022\u003EBias and Variance of Post-processing in Differential Privacy\u003C\/a\u003E (Keyu Zhu, Pascal Van Hentenryck, Ferdinando Fioretto)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EBranch and Price for Bus Driver Scheduling with Complex Break Constraints (Lucas Kletzander, Nysret Musliu, Pascal Van Hentenryck)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EDetecting and Adapting to Novelty in Games (Xiangyu Peng, Jonathan Balloch, Mark Riedl)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2009.12562.pdf\u0022\u003EDifferentially Private and Fair Deep Learning: A Lagrangian Dual Approach\u003C\/a\u003E (Cuong Tran, Ferdinando Fioretto, Pascal Van Hentenryck)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2010.00685.pdf\u0022\u003EHow to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and Act in Fantasy Worlds\u003C\/a\u003E\u0026nbsp;(Prithviraj Ammanabrolu, Jack Urbanek, Margaret Li, Arthur Szlam, Tim Rocktaschel, Jason 
Weston)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2009.00829.pdf\u0022\u003EAutomated Storytelling via Causal, Commonsense Plot Ordering\u003C\/a\u003E\u0026nbsp;(Prithviraj Ammanabrolu, Wesley Cheung, William Broniec, Mark Riedl)\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1902.06007.pdf\u0022\u003EEncoding Human Domain Knowledge to Warm Start Reinforcement Learning\u003C\/a\u003E\u0026nbsp;(Andrew Silva, Matthew Gombolay)\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2101.06351\u0022\u003EWeakly-Supervised Hierarchical Models for Predicting Persuasive Strategies in Good-faith Textual Requests\u003C\/a\u003E (Jiaao Chen, Diyi Yang)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Twenty-three members of the Georgia Tech community contributed to 11 papers that will be presented virtually at AAAI 2021, while two longtime contributors will join the ranks of the prestigious AAAI Fellows program."}],"uid":"33939","created_gmt":"2021-01-29 13:24:52","changed_gmt":"2021-02-01 15:48:30","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-01-29T00:00:00-05:00","iso_date":"2021-01-29T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"643611":{"id":"643611","type":"image","title":"Artificial Intelligence","body":null,"created":"1611926616","gmt_created":"2021-01-29 13:23:36","changed":"1611926616","gmt_changed":"2021-01-29 13:23:36","alt":"Artificial 
Intelligence","file":{"fid":"244352","name":"artificial-intelligence-4469138_1280.jpg","image_path":"\/sites\/default\/files\/images\/artificial-intelligence-4469138_1280.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/artificial-intelligence-4469138_1280.jpg","mime":"image\/jpeg","size":212458,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/artificial-intelligence-4469138_1280.jpg?itok=6bKOxBNr"}},"643694":{"id":"643694","type":"image","title":"AAAI 2021 Visualization","body":null,"created":"1612194422","gmt_created":"2021-02-01 15:47:02","changed":"1612194422","gmt_changed":"2021-02-01 15:47:02","alt":"Georgia Tech at AAAI 2021","file":{"fid":"244377","name":"aaai_viz.jpg","image_path":"\/sites\/default\/files\/images\/aaai_viz.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/aaai_viz.jpg","mime":"image\/jpeg","size":409660,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/aaai_viz.jpg?itok=2w3bfp7_"}}},"media_ids":["643611","643694"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"643693":{"#nid":"643693","#data":{"type":"news","title":"Two Doctorate Students Awarded Google-CMD-IT Dissertation Fellowships","body":[{"value":"\u003Cp\u003EGeorgia Tech doctoral students \u003Cstrong\u003EAlexander Moreno\u003C\/strong\u003E and \u003Cstrong\u003EAmber Solomon\u003C\/strong\u003E have been awarded \u003Ca href=\u0022https:\/\/cmd-it.org\/news-recent\/6-flip-phd-students-win-google-cmd-it-dissertation-fellowship-award\/\u0022\u003EGoogle-CMD-IT Dissertation Fellowships.\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMoreno and Solomon will each receive $25,000 toward their research in computer science or in a related field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I plan to pursue a postdoc in academia or industry. I would like to continue to focus on methodology and theory for healthcare applications,\u0026rdquo; said Moreno, a Ph.D. student in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing (IC).\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESolomon, a recent human-centered computing graduate, said she plans to first work as a research scientist creating programs that promote equity and inclusivity in computer science classrooms. Eventually, she would like to work on policy related to computer science education.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe number of students from underrepresented groups who completed a Ph.D. in computer science from 2018 to 2019 decreased by\u0026nbsp;\u003Ca href=\u0022https:\/\/cra.org\/wp-content\/uploads\/2020\/05\/2019-Taulbee-Survey.pdf\u0022\u003E13 percent\u003C\/a\u003E. In an effort to increase the diversity of Ph.D. graduates in the industry, CMD-IT and Google Research created this fellowship. 
In 2020, the organizations awarded six fellowships to be used by recipients for the last year of their dissertation requirements.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EApplicants must come from one of the 11 universities that are a part of the \u003Ca href=\u0022http:\/\/constellations.gatech.edu\/flip-alliance\u0022\u003EFLIP Alliance\u003C\/a\u003E. The alliance aims to address the broadening participation challenge and increase the diversity of the future leadership in the professoriate in computing at research universities.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Two Georgia Tech doctorate students awarded fellowship from Google-CMD-IT to increase diversity in computing."}],"uid":"34773","created_gmt":"2021-02-01 15:46:27","changed_gmt":"2021-02-01 15:46:27","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-02-01T00:00:00-05:00","iso_date":"2021-02-01T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"643692":{"id":"643692","type":"image","title":"Google-CMD-it Dissertation Fellowships","body":null,"created":"1612194247","gmt_created":"2021-02-01 15:44:07","changed":"1612194247","gmt_changed":"2021-02-01 15:44:07","alt":"Google-CMD-it Dissertation Fellowships","file":{"fid":"244376","name":"Google-CMD-it Dissertation Fellowships.png","image_path":"\/sites\/default\/files\/images\/Google-CMD-it%20Dissertation%20Fellowships.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Google-CMD-it%20Dissertation%20Fellowships.png","mime":"image\/png","size":710401,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Google-CMD-it%20Dissertation%20Fellowships.png?itok=RV_gPNTP"}}},"media_ids":["643692"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"8862","name":"Student Research"}],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"643355":{"#nid":"643355","#data":{"type":"news","title":"IC Professors Howard, Goel Named 2021 AAAI Fellows","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Chair \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E and Professor \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E were both named \u003Ca href=\u0022https:\/\/www.aaai.org\/Awards\/fellows.php\u0022\u003E2021 Fellows by the Association for the Advancement of Artificial Intelligence\u003C\/a\u003E (AAAI).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe AAAI Fellows program recognizes individuals who have made significant, sustained contributions \u0026ndash; usually over at least a 10-year period \u0026ndash; to the field of artificial intelligence (AI).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGoel\u0026rsquo;s research, which spans about 35 years, has connected fields of AI, cognitive science, and human cognition. Increasingly, it has merged the fields of AI and education, culminating in his lab\u0026rsquo;s groundbreaking work on \u003Ca href=\u0022https:\/\/emprize.gatech.edu\/\u0022\u003EJill Watson\u003C\/a\u003E, a virtual teaching assistant that can answer student questions in discussion forums for online classes. 
This trailblazing work has been recognized by numerous media outlets across the globe and has enormous long-term implications for the future of education.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is an exciting time for AI research into cognitive systems,\u0026rdquo; Goel said. \u0026ldquo;In one direction, my research uses the needs of human learning to ground and inspire novel AI techniques and tools. In the other, it uses AI theories and methods to provide new insights into human cognition and behavior.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team responsible for the advancement of Jill Watson and additional AI techniques for education, called emPrize, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/631981\/team-makes-semifinals-global-ai-competition\u0022\u003Eadvanced to the semifinals of the international XPrize AI competition in 2020\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHoward, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/641685\/renowned-roboticist-departing-georgia-tech-new-position\u0022\u003Ewho was recently named the next Dean of Engineering at The Ohio State University\u003C\/a\u003E, has performed similarly impactful research over her time in the field. As the director of the Human-Automation Systems Lab (HumAnS) at Georgia Tech, she has led research in conceptualizing humanized intelligence, the process of embedding human cognitive capability into the control path of autonomous systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESpecifically, the lab studies how human-inspired techniques, such as soft computing methodologies, sensing, and learning can be used to enhance the autonomous capabilities of intelligent systems. 
This has impact in both virtual AI and robotics, and has led to enterprises like \u003Ca href=\u0022http:\/\/zyrobotics.com\/\u0022\u003EZyrobotics\u003C\/a\u003E, the company Howard co-founded that produces mobile therapy and educational products for children with differing needs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAdditionally, she has been a spokesperson for the importance of ethical research in the field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;re at such a critical moment in the development of artificial intelligence,\u0026rdquo; Howard said. \u0026ldquo;There is incredible possibility, but equally daunting challenges. It\u0026rsquo;s an honor to be recognized for the work we are doing in this field, but it\u0026rsquo;s far from over. My hope is that I can inspire future researchers to pursue impactful and ethical advancements in the field.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEight others aside from Goel and Howard were also selected to the fellowship program for 2021 and will be recognized at the \u003Ca href=\u0022https:\/\/aaai.org\/Conferences\/AAAI-21\/\u0022\u003E2021 AAAI conference\u003C\/a\u003E, being held virtually Feb. 
2-9.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The AAAI Fellows program recognizes individuals who have made significant, sustained contributions to the field of artificial intelligence (AI)."}],"uid":"33939","created_gmt":"2021-01-22 18:41:43","changed_gmt":"2021-01-22 18:41:43","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-01-22T00:00:00-05:00","iso_date":"2021-01-22T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"643352":{"id":"643352","type":"image","title":"Ashok Goel and Ayanna Howard","body":null,"created":"1611340547","gmt_created":"2021-01-22 18:35:47","changed":"1611340547","gmt_changed":"2021-01-22 18:35:47","alt":"Ashok Goel and Ayanna Howard","file":{"fid":"244266","name":"Ashok Goel and Ayanna Howard.png","image_path":"\/sites\/default\/files\/images\/Ashok%20Goel%20and%20Ayanna%20Howard.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Ashok%20Goel%20and%20Ayanna%20Howard.png","mime":"image\/png","size":1469621,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Ashok%20Goel%20and%20Ayanna%20Howard.png?itok=qWIWw2Wl"}}},"media_ids":["643352"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181920","name":"cc-research; ic-ai-ml; ic-robotics"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid 
Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"643307":{"#nid":"643307","#data":{"type":"news","title":"IC Associate Professor Wins 2021 ACM-W Rising Star Award","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Associate Professor \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E was named a winner of the \u003Ca href=\u0022https:\/\/women.acm.org\/awards\/rising-star-award\/\u0022\u003E2021 ACM-W Rising Star Award\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award, bestowed by the Association for Computing Machinery, recognizes a woman whose early-career research has had a significant impact on the computing discipline, as measured by factors like societal impact, frequent citation of work, or creation of a new research area.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDe Choudhury will receive a framed certificate and a $1,000 stipend for the recognition, which is in its first year of existence and will be given out annually. She will be recognized for the award at a research conference to be named later.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I feel deeply honored for this recognition and owe my successes to my wonderful students and collaborators, as well as the intellectual freedom provided by Georgia Tech\u0026rsquo;s College of Computing that has helped trailblaze interdisciplinary research in computing, like mine, for years,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDe Choudhury\u0026rsquo;s work leverages large-scale online social data and advances in machine learning to help answer fundamental questions relating to our social lives. 
Chief among them are questions within the field of mental health care \u0026ndash; understanding mental health, improving access to care, and more. Her work has been recognized by a number of other awards, including 13 best paper and honorable mention paper awards from the ACM and AAAI, as well as by coverage in publications such as the New York Times, BBC, and NPR.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn addition to the personal appreciation, De Choudhury stressed the importance of recognizing the work of under-represented researchers in the computing field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;d like to commend the efforts of ACM-W for creating this new opportunity to celebrate the research of a group under-represented in the computing field,\u0026rdquo; she said. \u0026ldquo;There is a long way to go when it comes to computing making significant positive impact on a pervasive societal problem like mental health. Still, this award serves as a valuable encouragement for the next frontier of my research program.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDe Choudhury leads the \u003Ca href=\u0022http:\/\/socweb.cc.gatech.edu\/\u0022\u003ESocial Dynamics and Wellbeing Lab\u003C\/a\u003E. 
Research from the lab, both past and current, can be explored in more detail on its website.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The award recognizes a woman whose early-career research has had a significant impact on the computing discipline."}],"uid":"33939","created_gmt":"2021-01-21 19:57:50","changed_gmt":"2021-01-21 19:57:50","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2021-01-21T00:00:00-05:00","iso_date":"2021-01-21T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"587685":{"id":"587685","type":"image","title":"Munmun De Choudhury","body":null,"created":"1487686001","gmt_created":"2017-02-21 14:06:41","changed":"1487783642","gmt_changed":"2017-02-22 17:14:02","alt":"Georgia Tech Assistant Professor Munmun De Choudhury","file":{"fid":"223975","name":"munmun portrait_horz.jpg","image_path":"\/sites\/default\/files\/images\/munmun%20portrait_horz.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/munmun%20portrait_horz.jpg","mime":"image\/jpeg","size":711876,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/munmun%20portrait_horz.jpg?itok=GwpgdV5R"}}},"media_ids":["587685"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182015","name":"cc-research; ic-ai-ml; ic-hcc; ic-social-computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"642143":{"#nid":"642143","#data":{"type":"news","title":"Q\u0026A: De\u0027Aira Bryant Discusses Her Experience Programming a Robot for the Movie Superintelligence","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EDe\u0026rsquo;Aira Bryant\u003C\/strong\u003E didn\u0026rsquo;t come to Georgia Tech to work in the movie industry. Her interests lie within the field of robotics, where she works on projects that will increase the quality of human life.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBeing in the heart of Atlanta, however, the burgeoning heart of the film industry, comes with a few perks. Last year, Bryant was able to take advantage of one when she was contacted by representatives from the production crew of \u003Cem\u003ESuperintelligence\u003C\/em\u003E. The movie stars Melissa McCarthy as a woman who must prove to an artificial intelligence that humanity is worth saving and was recently released on HBO Max.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor the movie, Bryant was asked to program a Nao, a humanoid robot she uses in the \u003Ca href=\u0022https:\/\/humanslab.ece.gatech.edu\/\u0022\u003EHuman-Automation Systems (HumAnS) Lab\u003C\/a\u003E run by her advisor, School of Interactive Computing Chair Ayanna Howard. Read about Bryant\u0026rsquo;s experience programming the biggest star on the set.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHow did this opportunity to work with \u003Cem\u003ESuperintelligence\u003C\/em\u003E come about, and what was the experience like?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe production team reached out to the College of Computing. 
They were interested in having a robot for a scene and needed someone who could program the Nao to match the scene they had written. They reached out to Dr. Howard because they knew she had that type of robot, and she reached out to me because I\u0026rsquo;m the person who does most of the customized programming for this particular robot. If there\u0026rsquo;s a script or movements or whatever, I\u0026rsquo;m the choreographer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt was exciting. I was like, \u0026ldquo;Oh my goodness, this is for a movie.\u0026rdquo; I had no idea what it was about, but I was just excited to be a part of it. They asked if their ideas were possible and the production team was like, \u0026ldquo;We don\u0026rsquo;t know what it can do, but we think it looks cool. Can you make it do this?\u0026rdquo; We talked on the phone, and then I went to work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHow long did you have to program it?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI had about a week to get it ready. I had this idea of what they wanted, and I just tried to program it as best as I could.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESo, tell me about the day of. What was it like being on set?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI took the robot to the Klaus Advanced Computing Building. They were filming in there. It was so exciting to see everything. I had to tell the robot to go on their cue, so I was sitting right behind the camera. I got to meet Melissa McCarthy and some of the other stars, and I got a few pictures with them that I\u0026rsquo;m excited to finally be able to share with everyone. Everyone was so welcoming and understanding that the robot needed some time. I like to say that the robot was the biggest superstar on the set. It had its moments where it was like, \u0026ldquo;I\u0026rsquo;m not ready yet. 
My joint isn\u0026rsquo;t quite ready to do this movement.\u0026rdquo; They were understanding and eager to learn. They wanted their own pictures with the robot and everything, and had their own questions that I was excited to answer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EA lot of non-roboticists\u0026rsquo; or AI researchers\u0026rsquo; first experiences with robots are in mass media like movies or TV shows, and normally it\u0026rsquo;s some dystopian or disaster scenario. How seriously did you take that responsibility or opportunity to portray the lighter, more realistic side?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI think for a lot of people, robots \u0026ndash; especially these humanoid ones \u0026ndash; have been largely portrayed negatively. They focus on disaster cases that may never happen in the next 100 years, if ever. There hasn\u0026rsquo;t been a lot of mass media attention that focuses on more positive use cases. I take that very seriously in our work, just knowing that we focus on people, on children that can benefit from the technology and have it improve their quality of life. It\u0026rsquo;s important to show those cases to affect the narrative. But we also want to highlight the concerns that are just \u0026ndash; things like bias and the ethics of using robotics in certain domains. Those are real things that people are working to mitigate now, so we can bring people closer to what the field actually looks like by highlighting both.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEvery time I teach kids or teach a class, I start out by showing what robots can actually do. I show videos of them falling over or something like that to illustrate that those terminators or killer robots, that doesn\u0026rsquo;t happen right now. 
But there are some other issues that are real and current and pressing, and here\u0026rsquo;s how we address them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EBeing at Georgia Tech with movies filmed nearby has offered these kinds of neat opportunities. How neat is it to have this platform?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy friends think it\u0026rsquo;s so much cooler that I helped work on a movie that is going to be on HBO Max than for me to have some paper published at this really prestigious conference. The movie resonates with them more, so it\u0026rsquo;s an opportunity to have a connection. They can relate to the technology in a way that is natural to them and ask questions, and I can share more about robotics and my work. That\u0026rsquo;s how we get people interested in the field.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"De\u0027Aira Bryant programmed a robot for a scene in the movie Superintelligence. 
She discusses her experience in this Q\u0026A."}],"uid":"33939","created_gmt":"2020-12-15 23:09:43","changed_gmt":"2020-12-15 23:09:43","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-12-15T00:00:00-05:00","iso_date":"2020-12-15T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"642140":{"id":"642140","type":"image","title":"De\u0027Aira Bryant Superintelligence","body":null,"created":"1608072918","gmt_created":"2020-12-15 22:55:18","changed":"1608072918","gmt_changed":"2020-12-15 22:55:18","alt":"De\u0027Aira Bryant works on the set of the movie Superintelligence","file":{"fid":"243949","name":"BryantSuperintelligence2.jpg","image_path":"\/sites\/default\/files\/images\/BryantSuperintelligence2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/BryantSuperintelligence2.jpg","mime":"image\/jpeg","size":141730,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/BryantSuperintelligence2.jpg?itok=G6SL6u0X"}}},"media_ids":["642140"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182940","name":"cc-research; ic-ai-ml; ic-robotics; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"642142":{"#nid":"642142","#data":{"type":"news","title":"Sehoon Ha Part of $500k Grant to Make Safer, More Deployable Robots","body":[{"value":"\u003Cp\u003ESafety is arguably the biggest barrier to large-scale deployability of humanoid assistive robots.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBecause these robots are large, heavy, and prone to suddenly falling over, the risk to humans has remained too high to place this technology in homes, hospitals, retail spaces, or care facilities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2016, however, researchers at UCLA posed a solution: What if we made robots that just couldn\u0026rsquo;t fall down? Now, researchers at Georgia Tech, in collaboration with UCLA and the University of Southern California, are working to develop a new class of locomotion systems that could enable this technology to become a larger part of our daily lives.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We have lots of robots,\u0026rdquo; said Sehoon Ha, an assistant professor in Georgia Tech\u0026rsquo;s School of Interactive Computing and a co-principal investigator on the project. \u0026ldquo;But they aren\u0026rsquo;t in our house or in our stores. It\u0026rsquo;s mainly because of safety. I have a young daughter. I wouldn\u0026rsquo;t be comfortable with a full-sized humanoid robot in my house.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPreviously, UCLA developed a new class of robots called \u0026ldquo;buoyancy-assisted robots.\u0026rdquo; Instead of the human-like hardware that was bulky, heavy, and subject to the pitfalls of gravity, these legged robots remained erect thanks to a body made of helium balloons.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Even though there is some mechanical or motor error, it never falls,\u0026rdquo; Ha said. 
\u0026ldquo;It never breaks. It\u0026rsquo;s super light. Even if it might collide with you, it doesn\u0026rsquo;t fall and it can\u0026rsquo;t hurt you.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECreating a new class of locomotion systems has a couple of challenges: designing new hardware that is cheap and safe, and developing an algorithm that supports locomotion and collaboration. This grant will support development of novel frameworks that address a fundamentally new family of legged robots and empower them with reliable locomotion skills.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The main philosophy is to deploy the reinforcement learning on real hardware,\u0026rdquo; Ha said. \u0026ldquo;This buoyancy-assisted robot is subject to a relatively larger magnitude of drag forces. It\u0026rsquo;s hard to simulate it. There\u0026rsquo;s a discrepancy between simulation and the real world. We want to collect real-world experience and limit the reality gap.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe technology could help carry out a search and rescue in a disaster relief zone or answer a question in a retail space. The new project, funded by a $500,000 grant from the National Science Foundation\u0026rsquo;s National Robotics Initiative, will help create new locomotion control systems using reinforcement learning to improve the state of this technology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlready cheaper than their bulkier counterparts, these robots could be as inexpensive as a couple hundred dollars produced at scale, Ha said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Now you might imagine a scenario where you could drop 1,000 of these into a disaster area to carry out search and rescue missions,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe grant runs for four years and research from the project will be open-source to encourage additional collaboration. 
The grant will also support a competition for middle and high school students using the low-cost platforms to foster student interest in the field.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Researchers at Georgia Tech, in collaboration with UCLA and the University of Southern California, are working to develop a new class of locomotion systems that could enable buoyancy-assisted robots to become a larger part of our daily lives."}],"uid":"33939","created_gmt":"2020-12-15 23:02:58","changed_gmt":"2020-12-15 23:02:58","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-12-15T00:00:00-05:00","iso_date":"2020-12-15T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"642141":{"id":"642141","type":"image","title":"Sehoon Ha","body":null,"created":"1608073322","gmt_created":"2020-12-15 23:02:02","changed":"1608073322","gmt_changed":"2020-12-15 23:02:02","alt":"Sehoon Ha","file":{"fid":"243950","name":"sehoon.jpg","image_path":"\/sites\/default\/files\/images\/sehoon.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/sehoon.jpg","mime":"image\/jpeg","size":542864,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/sehoon.jpg?itok=95iLKDqy"}}},"media_ids":["642141"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181920","name":"cc-research; ic-ai-ml; ic-robotics"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"641685":{"#nid":"641685","#data":{"type":"news","title":"Renowned Roboticist Departing Georgia Tech for New Position","body":[{"value":"\u003Cp\u003EAfter serving as chair of Georgia Tech\u0026rsquo;s School of Interactive Computing (IC) for three years, \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E has accepted a position at another institution.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn a news release published today, The Ohio State University \u003Ca href=\u0022https:\/\/news.osu.edu\/ayanna-howard-named-next-dean-of-college-of-engineering\/\u0022\u003Eannounced that it has hired Howard as the dean of its College of Engineering\u003C\/a\u003E. Howard begins her new job March 1. She will be the first woman to lead engineering at Ohio State.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Ayanna has unbounded energy and the ability to share her passion for AI and robotics. 
Through her students, her research, her entrepreneurship, and in her own story, Ayanna has made both her fields of study and Georgia Tech better.\u0026nbsp;She is truly inspiring.\u0026nbsp;We\u0026rsquo;re grateful for her tremendous contributions and wish her continued success in her new role,\u0026rdquo; said \u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E, dean of the College of Computing, which houses the School of Interactive Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to Isbell, an interim chair for the School of IC will be named for the spring semester. A search for a new chair will also begin in the spring.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHoward, also the Linda J. and Mark C. Smith Chair Professor in the School of IC and the School of Electrical and Computer Engineering (ECE), will continue to advise and work with her current Ph.D. students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHoward joined Georgia Tech in 2005 as an associate professor in the School of ECE. She is the founder and director of the Human-Automation Systems Lab. Following a national search in 2017, Georgia Tech announced that Howard had been selected as chair of the School of IC. She began her term as chair in Spring 2018.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHoward will serve a five-year term as dean, according to the Ohio State release. 
She will also be a tenured professor in the College of Engineering\u0026rsquo;s Department of Electrical and Computer Engineering and hold a joint appointment in the Department of Computer Science and Engineering.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"School of IC Chair Ayanna Howard has been named as the Ohio State dean of the College of Engineering."}],"uid":"32045","created_gmt":"2020-11-30 14:55:39","changed_gmt":"2020-12-01 13:35:21","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-11-30T00:00:00-05:00","iso_date":"2020-11-30T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"641686":{"id":"641686","type":"image","title":"Ayanna Howard","body":null,"created":"1606748397","gmt_created":"2020-11-30 14:59:57","changed":"1606748397","gmt_changed":"2020-11-30 14:59:57","alt":"School of Interactive Computing Chair Ayanna Howard","file":{"fid":"243822","name":"Screen Shot 2020-11-30 at 9.58.44 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202020-11-30%20at%209.58.44%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202020-11-30%20at%209.58.44%20AM.png","mime":"image\/png","size":1343076,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202020-11-30%20at%209.58.44%20AM.png?itok=K6Tx3x3O"}}},"media_ids":["641686"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"825","name":"Ayanna Howard"},{"id":"10664","name":"charles isbell"},{"id":"186330","name":"Ohio 
State"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Communications Mgr. II\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=Howard%20Departure\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"641437":{"#nid":"641437","#data":{"type":"news","title":"New Grant Helps Researchers Bring Cybersecurity into the Physical World","body":[{"value":"\u003Cp\u003EImagine if you could physically feel a threat to your digital security \u0026ndash; perhaps a vibration on your wrist to alert you to nearby danger. What kinds of precautions would you take if you felt these digital threats the same way you felt those of the physical world?\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELike carrying a can of pepper spray when walking down a dark alleyway \u0026ndash; or avoiding the alleyway altogether \u0026ndash; a new project out of Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) aims to connect this abstract world of cybersecurity and privacy with concrete physical environments to promote better security behavior.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the real world, we have these corporeal sensations that give us cues on how to act,\u0026rdquo; said IC Assistant Professor \u003Cstrong\u003ESauvik Das\u003C\/strong\u003E, the principal investigator on the project. \u0026ldquo;If you feel a cold breeze on your cheek, you may decide to wear a scarf. If you are walking down a dark alleyway, you may become more alert and aware of your surroundings. 
It\u0026rsquo;s a different story in the present state of cybersecurity and privacy.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat current state is mostly limited to a warning when you\u0026rsquo;re leaving a secure network on your computer or a pop-up box that might caution against proceeding to a specific website. But what about the digital threats we face while going about our daily routines, perusing the internet on our phones or walking through a crowded airport?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are no corporeal sensory cues that indicate what is threatening or worthy of our attention. Similarly, we don\u0026rsquo;t have affordances that allow us to manipulate digital interfaces in ways that will better protect us against the threats we find salient.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That\u0026rsquo;s the idea here,\u0026rdquo; Das said. \u0026ldquo;We want to solve this abstraction problem by physically alerting people to threats and giving them means to defend against them.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project presents three solutions to the digital abstraction problem \u0026ndash; Spidey Sense, Bit Whisperer, and Horcrux. 
Each aims to solve a specific branch of the problem: alerting you to threats, giving you more effective means to defend against them, and providing ways to better govern shared resources.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003ESpidey Sense\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ESpidey Sense uses a wristband that integrates with modern Apple watches and can squeeze the wrist in programmable patterns to notify the wearer of perceived digital threats.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe idea is that people might not feel a threat conveyed through visual communication design the same way they might when walking down a dark alleyway at night.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;How can we similarly communicate that threat?\u0026rdquo; Das poses. \u0026ldquo;This field of affective haptics was a good bridge.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EBit Whisperer\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ESo, what do you do when you know threats exist? In the real world, one might intuit that to block entry into a room they could place a heavy object in front of the door, or that to communicate secure information they might need to whisper. This project aims to present similar options for digital information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s like whispering through the digital world,\u0026rdquo; Das said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo transfer data from one smart device to another, one might use Bluetooth. But one can\u0026rsquo;t see the bits traveling through the air as they are communicated. Bit Whisperer instead uses physical objects, like a table, as the communication channel.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing inaudible sound frequencies that can be generated by smartphones, data is transmitted through the physical surface from one device to other devices on the same surface. 
No one off the surface can receive the data without physically placing their device on it, making interception much more challenging for would-be attackers.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EHorcrux\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EHorcrux is a more abstract project at present. It aims to assist individuals in governing shared digital resources. The current state of the art provides point-and-click tools, but those make it impossible to multitask and challenging to specify access controls.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis project, like the others, aims to provide physical tools that can be manipulated by hand to make it easier to specify access.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe current idea is a mat on which play pieces like figurines can represent people or the resources they own.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Think of a castle where you can move figurines through different accesses,\u0026rdquo; Das said. \u0026ldquo;These tangible interfaces allow for more interaction, more multitasking, and visible physical representations for what everyone has access to.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese projects are being funded by a $500,000 grant from the National Science Foundation. IC Professor \u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E is a co-principal investigator on the grant, and Ph.D. 
student \u003Cstrong\u003EYoungwook Do\u003C\/strong\u003E is a key contributor.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A new project out of Georgia Tech\u2019s School of Interactive Computing (IC) aims to connect the abstract world of cybersecurity and privacy with concrete physical environments to promote better security behavior."}],"uid":"33939","created_gmt":"2020-11-19 15:38:31","changed_gmt":"2020-11-19 15:38:31","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-11-19T00:00:00-05:00","iso_date":"2020-11-19T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"626044":{"id":"626044","type":"image","title":"Cybersecurity stock image","body":null,"created":"1568223064","gmt_created":"2019-09-11 17:31:04","changed":"1568223064","gmt_changed":"2019-09-11 17:31:04","alt":"Stock photo of stylized padlock icons surrounded by a word cloud of information security terms.","file":{"fid":"238338","name":"Cybersecurity_stock_image.jpg","image_path":"\/sites\/default\/files\/images\/Cybersecurity_stock_image.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Cybersecurity_stock_image.jpg","mime":"image\/jpeg","size":110089,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Cybersecurity_stock_image.jpg?itok=0IXlXdwN"}}},"media_ids":["626044"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182941","name":"cc-research; ic-cybersecurity; ic-hcc"}],"core_research_areas":[{"id":"145171","name":"Cybersecurity"},{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"641381":{"#nid":"641381","#data":{"type":"news","title":"Need a Note Taker? This AI Can Help.","body":[{"value":"\u003Cp\u003EA new tool that uses artificial intelligence is bringing notetaking up to speed and may help future digital assistants ease fears of ever missing a meeting again.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s an age-old problem: We are inundated with informal forms of communication like phone calls, remote video conferences, text conversations on group messaging platforms like Slack or Microsoft Teams. Remembering key points of each discussion can at times be overwhelming, not to mention the stress caused by missing a meeting or seeing a couple hundred messages stack up while you were out for lunch.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis digital solution, developed by Georgia Tech researchers and being presented in a paper this week at the \u003Ca href=\u0022https:\/\/2020.emnlp.org\/\u0022\u003E2020 Conference on Empirical Methods in Natural Language Processing\u003C\/a\u003E, can assuage those concerns by generating summaries of informal conversations. 
Using machine learning techniques for natural language processing, the method identifies conversational structure using particular keywords.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Think about informal conversational structure: It has an opening, problem statements, discussions, a conclusion,\u0026rdquo; said \u003Cstrong\u003EDiyi Yang\u003C\/strong\u003E, an assistant professor in the School of Interactive Computing and a co-author on the paper. \u0026ldquo;We want to mine those structures to teach the model what may be informative within the conversation for generating better summaries.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWords like any variation of \u0026ldquo;hello\u0026rdquo; or \u0026ldquo;good,\u0026rdquo; for example, might indicate a greeting. Other action words likely indicate some kind of intention, while dates or times likely signal a discussion or conclusion about plans. Knowing this, the model can better represent the unstructured conversation to craft an accurate summary.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese types of summaries are more important now than ever. More individuals all over the world are working or attending school remotely. More discussions are being handled over the phone or video conferencing, and plans are being made through applications like Microsoft Teams. Previous research on the subject has focused on formal content like books, papers, or news articles, but the existing body of work on informal language is relatively sparse.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is applicable now more than ever because of where we are,\u0026rdquo; Yang said. \u0026ldquo;There\u0026rsquo;s so much online and text conversation, and we have way too much information. We need help storing it in a shorter and more structured way. 
If you\u0026rsquo;re away from your laptop for 30 minutes, it\u0026rsquo;s important to be able to get a quick summary of what you missed.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChallenges still exist. There are problems with reference in conversation, such as calling back to a previous discussion point later in a meeting. There are also typos, slang, repetition, interruptions, and changes in role \u0026ndash; language variations that can interfere with the model\u0026rsquo;s ability to determine structure. These are items Yang and her collaborator are continuing to address.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is a great starting point,\u0026rdquo; Yang said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work is presented in the paper \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/2010.01672.pdf\u0022\u003E\u003Cem\u003EMulti-View Sequence-to-Sequence Models with Conversational Structure for Abstractive Dialogue Summarization\u003C\/em\u003E\u003C\/a\u003E. The paper is co-authored by Yang and \u003Cstrong\u003EJiaao Chen\u003C\/strong\u003E, a second-year Ph.D. 
student in the School of Interactive Computing.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A new AI tool that summarizes unstructured conversational language could help future digital assistants ease fears of ever missing a meeting again."}],"uid":"33939","created_gmt":"2020-11-17 17:03:48","changed_gmt":"2020-11-17 17:03:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-11-17T00:00:00-05:00","iso_date":"2020-11-17T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"641380":{"id":"641380","type":"image","title":"Taking Notes","body":null,"created":"1605631344","gmt_created":"2020-11-17 16:42:24","changed":"1605631344","gmt_changed":"2020-11-17 16:42:24","alt":"A stack of notes on a table","file":{"fid":"243729","name":"Note taking photo.jpg","image_path":"\/sites\/default\/files\/images\/Note%20taking%20photo.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Note%20taking%20photo.jpg","mime":"image\/jpeg","size":28007,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Note%20taking%20photo.jpg?itok=9unEOuC7"}}},"media_ids":["641380"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"640727":{"#nid":"640727","#data":{"type":"news","title":"Georgia Tech\u2019s Secure and Safe Elections Research Group to Provide Live Wait Times to Fulton County Voters on Election Day ","body":[{"value":"\u003Cp\u003EResearchers from Georgia Tech have formed the Safe and Secure Elections research group. The group is developing tools that will allow Fulton County election officials to balance competing demands of election management, help enhance security and safety during the Covid-19 pandemic at polling locations, reduce voting waiting times, and expand access.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy testing the tools in Fulton County, Georgia\u0026rsquo;s largest county, the Georgia Tech team will be able to solve problems that might be shared by other jurisdictions in the country. The tools will ultimately be available to the general public and election officials nationwide so that people can better understand how public elections are conducted, which increases confidence in their outcome, according to the researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the early efforts of the group is to measure and report live wait times to voters at the 250 polling locations in Fulton County on election day, Nov. 3. The website wait.gatech.edu will go live that day and allow users to easily search for a location and view the current estimated wait time. 
The site also displays wait times recorded throughout the day.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe site is designed to be a utility for voters to help them plan when to vote, and that goal has informed the simple design and usability of the site, according to \u003Cstrong\u003EEllen Zegura\u003C\/strong\u003E, professor of computer science and a team lead in the research group.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZegura emphasized that the site does not forecast wait times into the future but rather gives current results based on voter text responses and volunteer observations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuring early voting for the 2020 general election, a student-led team implemented a pilot test at four polling locations in the county. Signs at the polling locations prompted voters to text their wait times after voting. An optional survey let them share more details about equipment quality, Covid-19 concerns, and more.\u0026nbsp;\u003Cstrong\u003EVlad Kolesnikov\u003C\/strong\u003E,\u0026nbsp;associate professor of computer science who studies cryptography, said the team is mindful of data security and privacy and that\u0026nbsp;no personally identifiable user data is collected or stored.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;By expanding the scope of safe and secure elections from narrow technological problems to addressing physical access, availability, and public health, this project brings a new dimension to the design of modern voting systems,\u0026rdquo; said \u003Cstrong\u003ERichard DeMillo\u003C\/strong\u003E, principal investigator for the project and chair of Georgia Tech\u0026rsquo;s new School of Cybersecurity and Privacy.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe election-day effort ties into a larger six-month project by the group that is supported by the Public Interest Technology University Network. 
It focuses on understanding the quantitative tradeoffs that local election officials are forced to make, and provides tools to help them better manage the voting process. Faculty and students involved in the effort come from the schools of Industrial and Systems Engineering (ISyE), Computer Science, and Cybersecurity and Privacy, as well as the College of Design.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA second project team, led by ISyE Professor \u003Cstrong\u003EBenoit Montreuil\u003C\/strong\u003E, ISyE Director of Professional Practice \u003Cstrong\u003EDima Nazzal\u003C\/strong\u003E, and ISyE Adjunct Professor and IMT Mines Professor \u003Cstrong\u003EFrederic Benaben\u003C\/strong\u003E, is constructing 3-D maps and AI-based simulations of Covid-safe layouts for a number of Fulton County\u0026rsquo;s polling places. Their work also involves optimizing voting equipment allocations across all polling locations to minimize wait times, and projecting turnout and waiting times based on historical data and planned equipment provisioning.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Georgia Tech\u0026rsquo;s project is a comprehensive effort that puts much-needed tools and design methods in the hands of public officials that not only improve the process of voting but is a vehicle for communicating to the general public that elections can be both fair and safe in these times of public health crises and social unrest,\u0026rdquo; said DeMillo.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMichael Best\u003C\/strong\u003E, professor in the Sam Nunn School\u0026nbsp;of International Affairs and School of Interactive Computing, is another team lead and has a long track record of working across the globe to help build sustainable election systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The goal is very straightforward. 
We hope to enhance safety, security, and efficiency at polling locations, which of course should contribute to better voter turnout and trust,\u0026quot; said Best.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/forms.office.com\/Pages\/ResponsePage.aspx?id=u5ghSHuuJUuLem1_Mvqgg0lHORmR6SpJnt7p0sQsfExUNTQ1MzcwNURJNk43Q00zNEQ1TDlNOUc5TC4u\u0022\u003EVolunteers can help\u003C\/a\u003E the research group in providing live wait times at polling locations to Fulton County voters by observing voters on Nov. 3 and texting wait times, or by helping set up signs Nov. 1 so voters can directly text wait times.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EOne of the early efforts of the group is to measure and report live wait times to voters at the 250 polling locations in Fulton County on election day, Nov. 3. The website wait.gatech.edu will go live that day and allow users to easily search for a location and view the current estimated wait time.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"One of the early efforts of the group is to measure and report live wait times to voters at the 250 polling locations in Fulton County on election day, Nov. 3. 
The website wait.gatech.edu will go live that day and allow users to search for a location."}],"uid":"27592","created_gmt":"2020-10-28 19:27:57","changed_gmt":"2020-10-30 18:49:49","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-10-28T00:00:00-04:00","iso_date":"2020-10-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"640728":{"id":"640728","type":"image","title":"Secure and Safe Elections Research Group","body":null,"created":"1603913359","gmt_created":"2020-10-28 19:29:19","changed":"1603913359","gmt_changed":"2020-10-28 19:29:19","alt":"","file":{"fid":"243535","name":"SSE_live wait times_social media.png","image_path":"\/sites\/default\/files\/images\/SSE_live%20wait%20times_social%20media.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/SSE_live%20wait%20times_social%20media.png","mime":"image\/png","size":229400,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/SSE_live%20wait%20times_social%20media.png?itok=38PBGfGm"}}},"media_ids":["640728"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"145171","name":"Cybersecurity"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu?subject=ICWSM%202020\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\nCollege of Computing\u003Cbr \/\u003E\r\nSchool of Cybersecurity and 
Privacy\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"640793":{"#nid":"640793","#data":{"type":"news","title":"Georgia Tech Researchers Contribute 13 Papers to Premier Visualization Conference","body":[{"value":"\u003Cp\u003EGeorgia Tech contributed to 13 papers and two workshops this week at \u003Ca href=\u0022http:\/\/ieeevis.org\/year\/2020\/welcome\u0022\u003EIEEE VIS 2020\u003C\/a\u003E, the premier forum for advances in theory, methods, and applications of visualization and visual analytics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference highlights research from universities, government, and industry around the world. It is comprised of three separate events: IEEE Visual Analytics Science and Technology (VAST), IEEE Information Visualization (InfoVis), and IEEE Scientific Visualization (SciVis). Like other conferences throughout the Covid-19 pandemic, VIS was held virtually.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s research was highlighted by one Best Paper Honorable Mention titled \u003Cem\u003EMapping Researchers with PeopleMap\u003C\/em\u003E. The paper \u0026ndash; authored by \u003Cstrong\u003EJon Saad-Falcon\u003C\/strong\u003E, \u003Cstrong\u003EOmar Shaikh\u003C\/strong\u003E, \u003Cstrong\u003EZijie J. Wang\u003C\/strong\u003E, \u003Cstrong\u003EAustin P. Wright\u003C\/strong\u003E, \u003Cstrong\u003ESasha Richardson\u003C\/strong\u003E, and \u003Cstrong\u003EPolo Chau\u003C\/strong\u003E \u0026ndash; presents an open-source interactive tool that uses natural language processing to create visual maps for researchers based on their research interests and publications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Discovering research expertise at universities can be a difficult task,\u0026rdquo; the paper contends. 
\u0026ldquo;Directories routinely become outdated, and few help in visually summarizing researchers\u0026rsquo; work or supporting the exploration of shared interests among researchers. This results in lost opportunities for both internal and external entities to discover new connections, nurture research collaboration, and explore the diversity of research.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper also received a VAST Poster Research Award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlso of note, new School of Computational Science \u0026amp; Engineering Chair \u003Cstrong\u003EHaesun Park\u003C\/strong\u003E received recognition for a 2010 IEEE VAST Paper. The paper received a Test of Time Award, recognizing it for continued contributions to the visual analytics and visualization community. The paper is titled \u003Cem\u003EiVisClassifier: An Interactive Visual Analytics System for Classification Based on Supervised Dimension Reduction\u003C\/em\u003E and co-authored by \u003Cstrong\u003EJaegul Choo\u003C\/strong\u003E, \u003Cstrong\u003EHanseung Lee\u003C\/strong\u003E, and \u003Cstrong\u003EJaeyeon Kihm\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing Ph.D. student \u003Cstrong\u003EEmily Wall\u003C\/strong\u003E, who is advised by Associate Professor \u003Cstrong\u003EAlex Endert\u003C\/strong\u003E, was also recognized with the VGTC Outstanding Dissertation Honorable Mention for her work \u003Cem\u003EDetecting and Mitigating Human Bias in Visual Analytics\u003C\/em\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;People are susceptible to a multitude of biases, including perceptual biases and illusions; cognitive biases like confirmation bias or anchoring bias; and social biases like racial or gender bias that are borne of cultural experiences and stereotypes,\u0026rdquo; Wall contends. 
\u0026ldquo;As humans are an integral part of data analysis and decision making in many domains, their biases can be injected into and even amplified by models and algorithms.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHer work aims to develop a better understanding of the role human bias plays in visual data analysis by defining bias, detecting bias, and mitigating bias.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExplore more about Georgia Tech\u0026rsquo;s contributions to IEEE VIS at the links below, or visit the \u003Ca href=\u0022http:\/\/vis.gatech.edu\/\u0022\u003EGeorgia Tech Visualization Lab\u003C\/a\u003E. You can follow the lab on Twitter at \u003Ca href=\u0022https:\/\/twitter.com\/GT_Vis\u0022\u003E@GT_Vis\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGeorgia Tech at IEEE VIS 2020\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EPapers\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2007.15832\u0022\u003ESafetyLens: Visual Data Analysis of Functional Safety of Vehicles (Arpit Narechania, Ahsan Qamar, and Alex Endert)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/nl4dv.github.io\/nl4dv\/\u0022\u003ENL4DV: A Toolkit for Generating Analytic Specifications for Data Visualization from Natural Language Queries (Arpit Narechania, Arjun Srinivasan, and John Stasko)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arjun010.github.io\/individual-projects\/databreeze.html\u0022\u003EInterweaving Multimodal Interaction with Flexible Unit Visualizations for Data Exploration (Arjun Srinivasan, Bongshin Lee, and John Stasko)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/terrancelaw.github.io\/publications\/data_insight_interviews_vis20.pdf\u0022\u003EWhat are Data Insights to Professional Visualization Users? 
(Po-Ming Law, Alex Endert, and John Stasko)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/terrancelaw.github.io\/publications\/auto_insights_vis20.pdf\u0022\u003ECharacterizing Automated Data Insights (Po-Ming Law, Alex Endert, and John Stasko)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2004.15004\u0022\u003ECNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization (Zijie J. Wang, Robert Turko, Omar Shaikh, Haekyu Park, Nilaksh Das, Fred Hohman, Minsuk Kahng, Duen Horng (Polo) Chau)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2009.02608\u0022\u003EBluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks (Nilaksh Das, Haekyu Park, Zijie J. Wang, Fred Hohman, Robert Firstman, Emily Rogers, Duen Horng (Polo) Chau)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/poloclub.github.io\/papers\/20-vis-ganlabeval.pdf\u0022\u003EHow Does Visualization Help People Learn Deep Learning? Evaluating GAN Lab with Observational Study and Log Analysis (Minsuk Kahng, Duen Horng (Polo) Chau)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/2009.00091\u0022\u003EMapping Researchers with PeopleMap (Jon Saad-Falcon, Omar Shaikh, Zijie J. Wang, Austin P. 
Wright, Sasha Richardson, Duen Horng (Polo) Chau)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/gtvalab.github.io\/files\/legion.pdf\u0022\u003ELEGION: Visually compare modeling techniques for regression (Subhajit Das, Alex Endert)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/gtvalab.github.io\/files\/cava_dataaug.pdf\u0022\u003ECAVA: A Visual Analytics System for Exploratory Columnar Data Augmentation Using Knowledge Graphs (Dylan Cashman, Shenyu Xu, Subhajit Das, Florian Heimerl, Cong Liu, Shah Rukh Humayoun, Michael Gleicher, Alex Endert, Remco Chang)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003EA Comparative Analysis of Industry Human-AI Interaction Guidelines (Austin P. Wright, Zijie J. Wang, Haekyu Park, Grace Guo, Fabian Sperrle, Mennatallah El-Assady, Alex Endert, Daniel Keim, Duen Horng (Polo) Chau)\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/trexvis.github.io\/Workshop2020\/papers\/Coscia.pdf\u0022\u003EToward A Bias-Aware Future for Mixed Initiative Visual Analytics (Adam Coscia, Duen Horng (Polo) Chau, Alex Endert)\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERecognitions\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~hpark\/papers\/choo_vast10_v1.pdf\u0022\u003EiVisClassifier: an Interactive Visual Analytics System for Classification Based on Supervised Dimension Reduction (Jaegul Choo, Hanseung Lee, Jaeyeon Kihm and Haesun Park)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/smartech.gatech.edu\/handle\/1853\/63597\u0022\u003EDetecting and Mitigating Human Bias in Visual Analytics (Emily Wall (Advisor: Alex Endert))\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWorkshops\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EMoVIS \u0026#39;20 
(Organizers: Clio Andris, Somayeh Dodge, Alan MacEachren)\u003C\/li\u003E\r\n\t\u003Cli\u003EVISxAI \u0026#39;20 (Organizers: Adam Perer, Duen Horng (Polo) Chau, Fred Hohman, Hendrik Strobelt, Mennatallah El-Assady)\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"IEEE VIS highlights research from universities, government, and industry around the world."}],"uid":"33939","created_gmt":"2020-10-30 04:41:57","changed_gmt":"2020-10-30 04:41:57","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-10-30T00:00:00-04:00","iso_date":"2020-10-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"640792":{"id":"640792","type":"image","title":"Georgia Tech at IEEE VIS 2020","body":null,"created":"1604032582","gmt_created":"2020-10-30 04:36:22","changed":"1604032582","gmt_changed":"2020-10-30 04:36:22","alt":"Georgia Tech at IEEE VIS 2020","file":{"fid":"243550","name":"Screen Shot 2020-10-30 at 12.34.13 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202020-10-30%20at%2012.34.13%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202020-10-30%20at%2012.34.13%20AM.png","mime":"image\/png","size":244701,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202020-10-30%20at%2012.34.13%20AM.png?itok=xIsvy28M"}}},"media_ids":["640792"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"186124","name":"cc-research; ic-ai-ml; ic-hcc; ic-social-computing; 
ic-visualization"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"640199":{"#nid":"640199","#data":{"type":"news","title":"Ivan Allen College of Liberal Arts and the College of Computing Launch New Ethics Center","body":[{"value":"\u003Cp\u003EBuilding on years of experience in research and education in ethics and technology, the College of Computing and the Ivan Allen College of Liberal Arts have launched the Ethics, Technology, and Human Interaction Center (ETHIC\u003Csup\u003Ex\u003C\/sup\u003E).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new Center \u0026mdash; pronounced \u0026ldquo;ethics\u0026rdquo; \u0026mdash; will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology in collaboration with communities, government, non-governmental organizations, and industry. The office of the Executive Vice President for Research provided significant funds over a three-year period to seed the Center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We must foster Georgia Tech\u0026rsquo;s strengths in ethics, responsible research, and the development of emerging technologies in collaborative ways,\u0026rdquo; said Raheem Beyah, Georgia Tech\u0026rsquo;s vice president for interdisciplinary research. 
\u0026ldquo;ETHIC\u003Csup\u003Ex \u003C\/sup\u003E\u0026nbsp;will provide the necessary environment to support this work and Georgia Tech\u0026rsquo;s mission to advance technology and improve the human condition.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Colleges already have in-depth research and education experience addressing technology-related ethics questions. For instance, the School of Public Policy founded the Center for Ethics and Technology more than 12 years ago to foster a critical inquiry culture and deliberation about technology-related ethical issues. Faculty in that Center research ethical issues in the design of emerging contact-tracing technologies; design ethics, social justice theory, and criticism broadly, and their relationship to emerging technologies such as smart cities, self-driving cars, and smart assistants; and platforms for fostering reflection and self-correcting reasoning in teaching and deliberation. The College of Computing also has created thriving research and educational initiatives such as the Ethical AI professional development course and the Law, Policy, and Ethics Initiative for Machine Learning @ GATECH.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new Center will build on those strengths and position the Georgia Institute of Technology to become the leader in framing ethical concerns in technology, including fairness, accountability, transparency, social justice, and technological change.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003EAnticipating New Ethical Challenges\u003C\/h2\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;ETHIC\u003Csup\u003Ex\u003C\/sup\u003E will be a place for robust, multidisciplinary research and a place to engage in systematic ethical analyses,\u0026rdquo; said Kaye Husbands Fealing, dean of the Ivan Allen College of Liberal Arts and co-director of the new Center.
\u0026ldquo;It also will be a place for communities, corporations, governments, technologists, educators, and others to discuss and find solutions to complex ethical issues in science and technology.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Center will conduct research in ethics and emerging technologies, the framing of ethical questions, solutions in ethics and technology, and social justice and equity. Interdisciplinary and community-based research also will be emphasized.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEducational initiatives will include investigating and designing curricula for ethics training that can be woven throughout students\u0026rsquo; educational journeys and for employees at affiliated companies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Responsibility is a core value of everything we do in the College of Computing at Georgia Tech. That means focusing on our communities and examining the impacts, both positive and negative, of our research and curricula,\u0026rdquo; said Charles Isbell, dean and John P. Imlay, Jr. chair of the College of Computing. \u0026ldquo;It means reaching across disciplines to collaborate with experts in other fields who can inform our own technological developments. We find solutions for tomorrow\u0026rsquo;s problems, which means we have to anticipate the new ethical challenges we will face. This Center will help us do that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch2\u003ENew Center Builds on Deep Experience\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EAyanna Howard, chair of the School of Interactive Computing, joins Husbands Fealing as co-director of the new Center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the School of Interactive Computing, we encourage all of our faculty and student researchers to think critically about the new challenges their research presents and offer strategies to mitigate any potential negative impact on society,\u0026rdquo; Howard said.
\u0026ldquo;Good innovation isn\u0026rsquo;t just about developing new technologies; it\u0026rsquo;s about developing solutions to problems that can make the world a better, more equitable, and more inclusive place.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech launched the School of Interactive Computing in anticipation of the need for interdisciplinary research in computer science, liberal arts, and more. Faculty members examine diverse ethical challenges, including misinformation, content moderation, free speech on social platforms, data privacy and security, virtual reality, wearable computing devices, and robo-ethics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFaculty and students throughout the Ivan Allen College of Liberal Arts engage in interdisciplinary research collaborations on ethics and emerging technologies, including in areas such as engineering, the environment, bioethics, responsible innovation, research ethics, the \u003Cem\u003Eethical\u003C\/em\u003E\u0026nbsp;and political dimensions of design and technology, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the Ivan Allen College, careful consideration of the impacts of technology on people, and of people on technology, is a central part of our curriculum and values,\u0026rdquo; said Justin Biddle, an associate professor in the School of Public Policy, director of the Center for Ethics and Technology, and a member of the new Center\u0026rsquo;s leadership team. \u0026ldquo;With innovation today often outpacing our ability to understand its consequences, and widespread questions regarding the relations between technology, equity, and social justice, this kind of thinking is more important than ever.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFaculty in both Colleges also have initiated discussions on the social and ethical implications of emerging technologies\u0026nbsp;across campus and beyond. 
These include the \u003Ca href=\u0022https:\/\/ethics.gatech.edu\/techdebates\u0022\u003E\u003Cem\u003ETechDebates on Emerging Technologies\u003C\/em\u003E\u003C\/a\u003E\u003Cem\u003E, \u003C\/em\u003Ethe \u003Ca href=\u0022https:\/\/ethics.gatech.edu\/sparks-forum\u0022\u003ESparks Forum on Ethics and Engineering\u003C\/a\u003E, the Machine Learning@GT Seminar Series, and the \u003Ca href=\u0022http:\/\/techfutures.lmc.gatech.edu\/\u0022\u003EEthics and Technological Futures\u003C\/a\u003E series developed by Nassim Parvin and Susana Morris in the \u003Ca href=\u0022https:\/\/lmc.gatech.edu\u0022\u003ESchool of Literature, Media, and Communication\u003C\/a\u003E. Ellen Zegura, a professor in the School of Computer Science, also leads a Mozilla grant aimed at embedding ethics in computer science classes through role play.\u003C\/p\u003E\r\n\r\n\u003Ch2\u003E\u0026#39;Where the Best of Sciences and Humanities Meet\u0026#39;\u003C\/h2\u003E\r\n\r\n\u003Cp\u003EDeven Desai, associate professor and area coordinator for Law and Ethics at Scheller College of Business, also will assume a key leadership role at ETHIC\u003Csup\u003Ex\u003C\/sup\u003E. He said the new Center will \u0026ldquo;build and deepen technology-related ethics scholarship and research across Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Scheller College\u0026rsquo;s focus on law and ethics is part of how we train future business leaders, the people who take innovation and bring it to market,\u0026rdquo; said Desai, who is also associate director for Law, Policy, and Ethics for Machine Learning at GA Tech (ML@GATECH), an interdisciplinary research center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;ETHICx will be a place where the best of science and humanities meet to challenge and push to find the unasked, important questions. 
In that friction and fun, the best questions about the problems we face and the best answer about how to solve them so that everyone can benefit will come out,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther members of the new Center\u0026rsquo;s key leadership team include Jason Borenstein, director of graduate research ethics programs in the School of Public Policy; Betsy DiSalvo, director of the human-centered computing Ph.D. program and associate professor in the School of Interactive Computing; Michael Hoffmann, a professor in the School of Public Policy; and Nassim Parvin, an associate professor in the School of Literature, Media, and Communication.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA launch event is planned for November, during Ethics Awareness Week, with a forum to identify key challenges in technology ethics. The Center will soon announce details.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information about ETHIC\u003Csup\u003Ex\u003C\/sup\u003E, contact Husbands Fealing at \u003Ca href=\u0022mailto:dean@gatech.edu\u0022\u003Edean@gatech.edu\u003C\/a\u003E or Howard at \u003Ca href=\u0022mailto:ah260@gatech.edu\u0022\u003Eah260@gatech.edu\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe new Center will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology in collaboration with communities, government, non-governmental organizations.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"The new Center will advance ethics-in-technology-centered research, education, and engagement at the Georgia Institute of Technology."}],"uid":"33939","created_gmt":"2020-10-14 15:01:00","changed_gmt":"2020-10-14 17:15:52","author":"David 
Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-10-13T00:00:00-04:00","iso_date":"2020-10-13T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"640176":{"id":"640176","type":"image","title":"ETHICx Center graphic","body":null,"created":"1602623629","gmt_created":"2020-10-13 21:13:49","changed":"1602623629","gmt_changed":"2020-10-13 21:13:49","alt":"","file":{"fid":"243345","name":"ETHICx graphic.jpg","image_path":"\/sites\/default\/files\/images\/ETHICx%20graphic.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ETHICx%20graphic.jpg","mime":"image\/jpeg","size":490407,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ETHICx%20graphic.jpg?itok=rW3XGy3i"}}},"media_ids":["640176"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"186032","name":"ETHICx"},{"id":"186033","name":"Ethics Technology and Human Interaction Center"},{"id":"1616","name":"Ivan Allen College of Liberal Arts"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39511","name":"Public Service, Leadership, and Policy"}],"news_room_topics":[{"id":"71871","name":"Campus and Community"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EMichael Pearson\u003Cbr \/\u003E\r\nmichael.pearson@iac.gatech.edu\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDavid Mitchell\u003Cbr \/\u003E\r\ndavid.mitchell@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["michael.pearson@iac.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"639912":{"#nid":"639912","#data":{"type":"news","title":"Collaborative Startup 
Helping People in Disadvantaged Communities Learn Entry-level Data Science Skills","body":[{"value":"\u003Cp\u003EAcross businesses and organizations of all sizes, there are rapidly growing opportunities for data science workers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMany people, however, particularly those from economically disadvantaged communities, are often excluded from the training opportunities necessary to be competitive for these jobs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo address this issue and increase the diversity of the data science field, Georgia Tech has launched \u003Ca href=\u0022https:\/\/dataworkforce.gatech.edu\/\u0022\u003EDataWorks\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new hybrid program \u0026ndash; which recently earned a $1.5 million National Science Foundation (NSF) grant \u0026ndash; works closely with community partners to hire and train people from under-resourced communities in Atlanta to do data science work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There are so many tasks peripheral to computer science that, while not requiring a degree to perform, are critically important to the CS community,\u0026rdquo; said DataWorks founder and School of Interactive Computing (IC) Associate Professor \u003Ca href=\u0022http:\/\/betsydisalvo.com\/\u0022\u003E\u003Cstrong\u003EBetsy DiSalvo\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPart startup company, part outreach effort, and part research platform, DataWorks provides its employees with on-the-job training to learn entry-level data wrangling skills.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo learn data skills like cleaning, linking, and reformatting, employees use real-world \u0026ldquo;messy\u0026rdquo; data \u0026ndash;\u0026nbsp;provided mostly by Atlanta non-profit organizations. 
Once cleaned, the data are returned to an organization to help it fulfill its mission and business objectives.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Rather than just teaching, we think it\u0026rsquo;s important to have people situated in a real work environment. In doing so, they feel like what they are doing is more valued. It also allows them to see themselves as being a part of this industry,\u0026rdquo; said DiSalvo.\u003C\/p\u003E\r\n\r\n\u003Ch5\u003E\u003Ca href=\u0022https:\/\/podcasts.apple.com\/us\/podcast\/pursuing-equity-through-dataworks-with-betsy-disalvo\/id1435564422?i=1000492535418\u0022\u003E[RELATED: Betsy DiSalvo Joins the Interaction Hour Podcast to Discuss DataWorks and Equity in Computing]\u003C\/a\u003E\u003C\/h5\u003E\r\n\r\n\u003Cp\u003EFor one of its first pro bono projects, DataWorks employees worked with \u003Ca href=\u0022https:\/\/www.enterprisecommunity.org\/where-we-work\/southeast\/atlanta\u0022\u003EEnterprise Community Partners, Inc.\u003C\/a\u003E on an affordable housing database project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The DataWorks team completed the detail-oriented work of pulling data from public reports, aligning it with other public data, and producing one complete dataset for our affordable housing database and website project,\u0026rdquo; said \u003Cstrong\u003ESara Haas\u003C\/strong\u003E, southeast market director for Enterprise Community Partners.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;With DataWorks\u0026rsquo; help, we\u0026rsquo;re helping to level the playing field by providing residents, community advocates, public partners, and nonprofit developers access to the same type of data that others have,\u0026rdquo; said Haas.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDataWorks started in January with four part-time workers, who were recruited through the west Atlanta organization\u0026nbsp;\u003Ca href=\u0022https:\/\/www.facebook.com\/raisingexpectations\/\u0022\u003ERaising
Expectations\u003C\/a\u003E, a non-profit mentoring and tutoring program. DiSalvo had hoped to have up to 10 employees by this summer, but then the pandemic hit. At the time, she wasn\u0026rsquo;t sure the program would survive.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut working remotely, DataWorks continued building its reputation and training its employees through pro bono work. It has also recently picked up a few paying clients from the private sector, as well as a new contract with \u003Ca href=\u0022https:\/\/www.civicatlanta.org\/\u0022\u003EAtlanta\u0026rsquo;s Center for Civic Innovation\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlong with new clients, the program also has new funding. PricewaterhouseCoopers (PwC) recently donated $25,000 to the program. Along with the funding, this collaboration with DataWorks includes a donation of 300 volunteer hours from PwC.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDiSalvo and her co-principal investigators \u0026ndash;\u0026nbsp;School of IC Associate Professor \u003Ca href=\u0022https:\/\/carldisalvo.com\/\u0022\u003ECarl DiSalvo\u003C\/a\u003E and Georgia State University Assistant Professor \u003Ca href=\u0022https:\/\/education.gsu.edu\/profile\/ben-shapiro\/\u0022\u003EBen Shapiro\u003C\/a\u003E \u0026ndash; also recently earned a $1.5 million NSF grant for the program in August.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;re actually down to three employees now, but one left to take a full-time job with Georgia Tech so we consider that a win,\u0026rdquo; DiSalvo said happily. She added that with the new NSF funding \u0026ndash; and the additional work \u0026ndash; she expects to move forward with hiring additional DataWorks employees in the near term.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe NSF grant is part of the Connected Communities program.
Along with continuing to hire and train people from Atlanta\u0026rsquo;s under-resourced communities, the grant will fund research into the program to build more training tools and programs for employees that have the potential to scale to other cities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne aspect of the research will address bias. DiSalvo says she wants to better understand how having different groups of people doing peripheral data work ultimately impacts outputs. Another research question will look at what kind of structures can be developed to do this kind of community engagement work within an institution like Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We know there are a lot of grassroots communities that could take advantage of data and they don\u0026rsquo;t because there just aren\u0026rsquo;t structures in place for them to do it,\u0026rdquo; said DiSalvo.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDataWorks is currently part of Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/constellations.gatech.edu\/\u0022\u003EConstellations Center for Equity in Computing\u003C\/a\u003E, which is housed in the College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Constellations focuses on increasing equity in computing. DataWorks extends our reach by providing a pathway to computing opportunities for those who wish to acquire the necessary skills while applying them in an entry-level position,\u0026rdquo; said \u003Cstrong\u003ECedric Stallworth\u003C\/strong\u003E, assistant dean for Outreach, Enrollment and Community in the College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAside from DataWorks, DiSalvo has been helping people to acquire entry-level skills since she was a Georgia Tech Ph.D. student.
For her dissertation project, she created \u003Ca href=\u0022http:\/\/betsydisalvo.com\/projects\/games-and-the-glitch-game-testers\/\u0022\u003EGlitch\u003C\/a\u003E, a program in which young Black men were hired from the community to test video games. As with DataWorks, testing video games is entry-level work that doesn\u0026rsquo;t require a degree or special skills other than knowing how to play video games.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These young men were testing real games for real companies. During the three years that Glitch was active, 33 young men, mostly from lower-income neighborhoods, participated in the project. More than 50 percent went on to major in computer science or related field,\u0026rdquo; said DiSalvo.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to Stallworth, programs like Glitch and DataWorks are key to developing a social climate of inclusivity and opportunity for people in underrepresented communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We must be creative and diligent in our efforts to widen the doorway that leads to success in computing,\u0026rdquo; said Stallworth.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A startup housed in the College of Computing has earned a $1.5 million NSF grant to support its mission of providing data science skills to people from disadvantaged communities in Atlanta."}],"uid":"32045","created_gmt":"2020-10-05 16:19:32","changed_gmt":"2020-10-07 18:57:22","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-10-05T00:00:00-04:00","iso_date":"2020-10-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"639914":{"id":"639914","type":"image","title":"DataWorks employees-1","body":null,"created":"1601915265","gmt_created":"2020-10-05 16:27:45","changed":"1601915538","gmt_changed":"2020-10-05
16:32:18","alt":"Young women learn data science skills at Georgia Tech as part of DataWorks program","file":{"fid":"243265","name":"Jan17-Jessica-Venise.jpg","image_path":"\/sites\/default\/files\/images\/Jan17-Jessica-Venise.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Jan17-Jessica-Venise.jpg","mime":"image\/jpeg","size":225908,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Jan17-Jessica-Venise.jpg?itok=eIrsqaBd"}}},"media_ids":["639914"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"606703","name":"Constellations Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"46361","name":"GT computing"},{"id":"185981","name":"dataworks"},{"id":"92811","name":"data science"},{"id":"11961","name":"betsy disalvo"},{"id":"181314","name":"constellations-external"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Senior Communications Mgr.\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=DataWorks\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"639695":{"#nid":"639695","#data":{"type":"news","title":"Record Number of Students Attend Largest Women in Technology Conference","body":[{"value":"\u003Cp\u003EThe College of Computing is sending more than 100 students to the \u003Ca href=\u0022https:\/\/ghc.anitab.org\/\u0022\u003EGrace Hopper Celebration (GHC)\u003C\/a\u003E from Sept. 29 to Oct. 3. 
Many are attending the annual conference for the first time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlthough all virtual this year, it\u0026rsquo;s still one of the largest gatherings of women in computing, with more than 30,000 people from 115 countries representing academia and industry.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThanks to scholarships from the College, 63 undergraduate students, 32 master\u0026rsquo;s students, six Online Master of Science in Computer Science (OMSCS) students, and 12 Ph.D. students are able to attend.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey have the opportunity to watch more than 200 panels and keynotes. Some highlights from Georgia Tech include a fireside chat with \u003Cstrong\u003EJoy Buolamwini\u003C\/strong\u003E, an alumna and founder of the \u003Ca href=\u0022https:\/\/www.ajl.org\/\u0022\u003EAlgorithmic Justice League\u003C\/a\u003E, on \u003Cem\u003E\u003Ca href=\u0022https:\/\/web.cvent.com\/event\/84f26b13-25ef-458c-9d38-38432d71be09\/websitePage:645d57e4-75eb-4769-b2c0-f201a0bfc6ce\u0022\u003EDecoding Bias\u003C\/a\u003E\u003C\/em\u003E on Oct. 3.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/constellations.gatech.edu\/\u0022\u003EConstellations Center for Equity in Computing\u003C\/a\u003E\u0026rsquo;s Director of Educational Innovation and Leadership \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/lien-diaz-0\u0022\u003E\u003Cstrong\u003ELien Diaz\u003C\/strong\u003E\u003C\/a\u003E joins the panel \u003Cem\u003E\u003Ca href=\u0022https:\/\/web.cvent.com\/event\/84f26b13-25ef-458c-9d38-38432d71be09\/websitePage:645d57e4-75eb-4769-b2c0-f201a0bfc6ce\u0022\u003ESeeing Beyond Yourself: Effective Allyship, Advocacy, and Activism for Women in Computing\u003C\/a\u003E\u003C\/em\u003E on Sept.
29.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am particularly interested in the wide array of topics that GHC speakers will be addressing from tech careers to applications of machine learning and artificial intelligence,\u0026rdquo; said OMSCS student \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/michelleadea\/\u0022\u003E\u003Cstrong\u003EMichelle Adea\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference is just as much about networking as learning. As a silver-level sponsor, the College will connect with prospective students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESome students are excited to meet other women in computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;m looking forward to engaging with other like-minded women in different career positions and levels of education and making connections,\u0026rdquo; said undergraduate \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/rashmi-athavale\/\u0022\u003E\u003Cstrong\u003ERashmi Athavale\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The College of Computing is sending more than 100 students to the Grace Hopper Celebration (GHC) from Sept. 29 to Oct. 
3."}],"uid":"34541","created_gmt":"2020-09-29 15:57:56","changed_gmt":"2020-09-29 16:08:34","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-09-29T00:00:00-04:00","iso_date":"2020-09-29T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"639696":{"id":"639696","type":"image","title":"GHC 2019","body":null,"created":"1601395682","gmt_created":"2020-09-29 16:08:02","changed":"1601395682","gmt_changed":"2020-09-29 16:08:02","alt":"GHC panel","file":{"fid":"243201","name":"IMG_1755 copy.jpg","image_path":"\/sites\/default\/files\/images\/IMG_1755%20copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_1755%20copy.jpg","mime":"image\/jpeg","size":365938,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_1755%20copy.jpg?itok=5wvNEkgn"}}},"media_ids":["639696"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:tess.malone@cc.gatech.edu\u0022\u003Etess.malone@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"639092":{"#nid":"639092","#data":{"type":"news","title":"Georgia Tech Receives Google Grant to Study Impact of Pandemic Information Seeking on Vulnerable Populations","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E will receive 
$155,000 from \u003Ca href=\u0022https:\/\/ai.google\/social-good\/\u0022\u003EGoogle\u0026rsquo;s Covid-19 AI for Social Good\u003C\/a\u003E program to investigate patterns and impact of pandemic information-seeking amongst vulnerable populations, such as older adults, low-income households, and Black and Hispanic adults. These populations have experienced disproportionately high rates of Covid-19-related death, severe sickness, and life disruptions like job loss.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFactors like higher rates of underlying health problems, reduced access to health care, and structural inequities shape access to critical resources. These same populations, however, also often have less access to the types of online information designed to improve health outcomes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis project, led by principal investigator \u003Cstrong\u003EAndrea Grimes Parker\u003C\/strong\u003E, an associate professor in the \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E\u0026nbsp;and member of the \u003Ca href=\u0022http:\/\/ipat.gatech.edu\u0022\u003EInstitute for People and Technology\u003C\/a\u003E, will investigate how vulnerable and marginalized populations use technology for information seeking during the Covid-19 pandemic, as well as the impact of information exposure on their psychological wellbeing over time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The Covid-19 pandemic has brought further attention to systemic disparities in health that have long existed in the United States,\u0026rdquo; Parker said. 
\u0026ldquo;Within a public health crisis, the information that people are exposed to has huge implications for how attitudes around the pandemic are shaped, how people respond, and thus the course of the pandemic.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our work will provide both qualitative and quantitative evidence of the particular ways in which Covid-19 information exposure is tied to outcomes such as mental health in those most vulnerable to Covid-19 mortality and morbidity.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers will examine this information exposure over time. Their\u0026nbsp;findings will help to shape recommendations for crisis information communication, particularly online, in the future. This work builds upon existing work by Parker and collaborators at Northeastern University.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EParker and colleagues Professors \u003Cstrong\u003EMiso Kim\u003C\/strong\u003E and Dr. \u003Cstrong\u003EJacqueline Griffin\u003C\/strong\u003E began their collaboration by investigating how well crisis apps \u0026ndash; mobile apps designed to provide help during emergency situations \u0026ndash; support older adults. This work was published at the 2020 ACM Conference on Human Factors in Computing Systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen the pandemic began, they expanded their focus to additional groups vulnerable to poor health, such as low-income and racial and ethnic minority populations. The team, in collaboration with Professor \u003Cstrong\u003EStacy Marsella\u003C\/strong\u003E, also expanded their focus beyond crisis apps, designing a survey to investigate information-seeking practices in vulnerable populations amidst the pandemic.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis survey has been distributed to over 600 individuals in Massachusetts and Georgia to date. 
Parker\u0026rsquo;s new Google funding will enable the team to iterate on and expand the dissemination of this survey, conduct longitudinal analyses, and complement the quantitative analysis with a qualitative component that will help unpack the nuances behind information-seeking practices and resulting Covid-19 attitudes, behaviors, and mental health outcomes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis funding is part of Google.org\u0026rsquo;s $100 million commitment to Covid-19 relief efforts.\u0026nbsp;Organizations receiving funds were selected through a competitive review. Funding focus areas include health equity, disease spread monitoring and forecasting, frontline health worker support, secondary public health effects, and privacy-preserving contact tracing efforts.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Populations including older adults, low-income households, and Black and Hispanic adults have disproportionately high fatality rates, as well as less access to critical pandemic information."}],"uid":"33939","created_gmt":"2020-09-14 19:46:37","changed_gmt":"2020-09-14 19:46:37","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-09-14T00:00:00-04:00","iso_date":"2020-09-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"639090":{"id":"639090","type":"image","title":"Covid-19 Google Grant","body":null,"created":"1600112099","gmt_created":"2020-09-14 19:34:59","changed":"1600112099","gmt_changed":"2020-09-14 19:34:59","alt":"Two women wearing masks during Covid-19 
pandemic","file":{"fid":"242990","name":"coronavirus-4981906_1920.jpg","image_path":"\/sites\/default\/files\/images\/coronavirus-4981906_1920.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/coronavirus-4981906_1920.jpg","mime":"image\/jpeg","size":134084,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/coronavirus-4981906_1920.jpg?itok=UdfavL2O"}}},"media_ids":["639090"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"184821","name":"cc-research; ic-hcc; ic-ai-ml; COVID-19"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"639077":{"#nid":"639077","#data":{"type":"news","title":"Georgia Tech Part of $5 Million Grant to Develop AI Tech Supporting Individuals With Autism Spectrum Disorder in the Workplace","body":[{"value":"\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/nsf.gov\u0022\u003ENational Science Foundation\u003C\/a\u003E has awarded a $5 million grant to a multi-university team of researchers that includes \u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E to create novel artificial intelligence technology that trains and supports individuals with Autism Spectrum Disorder (ASD) in the 
workplace.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe investment follows a successful $1 million, nine-month pilot grant to the same team, which also includes Yale University, Cornell University, Vanderbilt University, and the Vanderbilt University Medical Center. Georgia Tech\u0026rsquo;s portion of the grant is $500,000.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELed by co-principal investigator Professor \u003Cstrong\u003EJim Rehg\u003C\/strong\u003E of the \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, Georgia Tech will develop methods for assessing nonverbal communication behaviors during face-to-face social interactions such as job interviews.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our innovative approach uses an unobtrusive wearable camera to record social behaviors, which are then analyzed using computer vision and deep learning models,\u0026rdquo; Rehg said. \u0026ldquo;Our automated analysis will allow job seekers to get feedback on their communication skills as part of our team\u0026rsquo;s integrated approach to job interview coaching.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project, which is part of the NSF\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.nsf.gov\/od\/oia\/convergence-accelerator\/\u0022\u003EConvergence Accelerator\u003C\/a\u003E program, addresses an underutilized U.S. 
talent pool that poses a \u0026ldquo;critical but overlooked public health and economic challenge: how to include individuals with ASD\u0026rdquo; in the workforce, according to Vanderbilt Professor \u003Cstrong\u003ENilanjan Sarkar\u003C\/strong\u003E, who is leading the project team.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EConsider:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EOne in 54 people in the United States has ASD;\u003C\/li\u003E\r\n\t\u003Cli\u003EEach year 70,000 young adults with ASD leave high school and face grim employment prospects;\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EMore than 8 in 10 adults with ASD are either unemployed or underemployed, a significantly higher rate than adults with other developmental disabilities;\u003C\/li\u003E\r\n\t\u003Cli\u003EThe estimated lifetime cost of supporting an individual with ASD and limited employment prospects is $3.2 million.\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EThe total estimated cost of caring for Americans with ASD was $268 billion in 2015 and is projected to grow to $461 billion in 2025.\u003C\/li\u003E\r\n\t\u003Cli\u003EAn estimated $50,000 per person per year could be contributed back into society when individuals with ASD are employed.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We want to harness the power of AI, stakeholder engagement and convergent research to include neurodiverse individuals in the 21\u003Csup\u003Est\u003C\/sup\u003E century workforce,\u0026rdquo; Sarkar said. \u0026ldquo;We feel that there is a big opportunity to turn great societal cost into great societal value.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor this project, organizational, clinical and implementation experts are integrated with engineering teams to pave the way for real-world impact. 
The multi-university, multi-disciplinary team already has commitments from major employers to license some of the technology and tools developed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers will address three themes:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EIndividualized assessment of unique abilities and appropriate job-matching\u003C\/li\u003E\r\n\t\u003Cli\u003ETailored understanding and ongoing support related to social communication and interaction challenges\u003C\/li\u003E\r\n\t\u003Cli\u003ETools to support job candidates, employees, and employers\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EAlready, notable private-sector companies that employ people with ASD have committed to using at least one of the technologies developed under this program: Auticon, The Precisionists, Ernst \u0026amp; Young and SAP among them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETwo other companies, Floreo and Tipping Point Media, will make their existing VR modules available for adaptation to the program. Microsoft, which has a long-standing interest in hiring people with ASD, is involved as well and provided seed funding and access to cloud services for technology integration.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe five technologies can be used separately or as an integrated system, and the work has broader potential beyond ASD to expand employment access. In the U.S. 
alone, an estimated 50 million people have ASD, attention-deficit\/hyperactivity disorder, a learning disability, or other neurodiverse conditions.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech will develop methods for assessing nonverbal communication behaviors during face-to-face social interactions such as job interviews."}],"uid":"33939","created_gmt":"2020-09-14 17:52:01","changed_gmt":"2020-09-14 17:52:01","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-09-14T00:00:00-04:00","iso_date":"2020-09-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"590844":{"id":"590844","type":"image","title":"Child Study Lab Autism Research","body":null,"created":"1493061979","gmt_created":"2017-04-24 19:26:19","changed":"1493061979","gmt_changed":"2017-04-24 19:26:19","alt":"Lab coordinator Audrey Southerland, along with undergraduate assistants, leads data collection at the Child Study Lab.","file":{"fid":"225112","name":"Autism5.jpg","image_path":"\/sites\/default\/files\/images\/Autism5.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Autism5.jpg","mime":"image\/jpeg","size":329499,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Autism5.jpg?itok=a3RDfy3M"}}},"media_ids":["590844"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182941","name":"cc-research; ic-cybersecurity; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"638703":{"#nid":"638703","#data":{"type":"news","title":"Welcome New IC Faculty: Seven Join School from Variety of Research Areas","body":[{"value":"\u003Cp\u003EEach year, the School of Interactive Computing conducts a rigorous search for the brightest minds to carry forward its academic and research initiatives. This year, IC welcomes seven new faculty members to that mission. Take a quick glance at the new research\u0026nbsp;coming to the School in 2020.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESehoon Ha\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. in Computer Science, Georgia Tech 2015\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Robotics, Artificial Intelligence, Character Animation\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHa\u0026rsquo;s research lies at the intersection of computer graphics and robotics, including physics-based animation, deep reinforcement learning, and computational robot design. Specifically, he has published work that addresses the need for more intelligent control software in robotics to improve agility, robustness, efficiency, and safety. In the long term, he aims to develop robotic companions for the home, search-and-rescue robots for disaster recovery scenes, and custom medical surgery robots that are tailored to individual patients.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJennifer Kim\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. 
in Computer Science, University of Illinois, Urbana-Champaign 2019\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Human-Computer Interaction, Interactive Systems, Health Care\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKim\u0026rsquo;s research investigates and develops interactive systems as communication artifacts to address various health-related challenges such as financial burdens of medical costs, difficulties in understanding behaviors of people with neurological disorders, and online health misinformation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EChris Le Dantec\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. in Human-Centered Computing, Georgia Tech 2011\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Digital Media, Science and Technology Studies\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELe Dantec is interested in developing community-based design practices that support new forms of collective action through production and use of civic data. Specifically, his research has direct impact on how policy makers and citizens work together to address issues of community engagement, social justice, urban transportation, and development.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAndrea Grimes Parker\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. in Human-Centered Computing, Georgia Tech 2011\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Human-Computer Interaction, Computer Supported Cooperative Work, Health Informatics\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGrimes Parker designs and evaluates the impact of software tools that help people manage their health and wellness with a particular focus on equity. She studies racial, ethnic and economic health disparities, and the social context of health management. 
Through technology design, her research examines intrapersonal, social, cultural, and environmental factors that influence a person\u0026rsquo;s ability and desire to make healthy decisions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAlan Ritter\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. in Computer Science and Engineering, University of Washington 2013\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Natural Language Processing, Information Extraction, Machine Learning\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERitter\u0026rsquo;s research aims to solve challenging technical problems that can help machines learn to read vast quantities of text with minimal supervision. Past work included a system that reads millions of tweets for mentions of new software vulnerabilities. This tool spotted critical security flaws in software. He is also interested in data-driven dialogue agents that can converse with people more naturally.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESashank Varma\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. in Cognitive Studies, Vanderbilt University 2006\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch interests: Abstract Mathematical Thinking, Memory Systems Supporting Language Processing, Computational Models of High-Level Cognition\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVarma\u0026rsquo;s research investigates complex forms of cognition that are uniquely human from multiple disciplinary perspectives. Primarily, this involves mathematical cognition, where he investigates how people use symbol systems to understand abstract mathematical concepts, how they develop intuitions about and insights into mathematics, and the mental mechanisms shared between reasoning and algorithmic thinking.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWei Xu\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. 
in Computer Science, New York University 2014\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch Interests: Natural Language Processing, Machine Learning, Social Media\u003C\/p\u003E\r\n\r\n\u003Cp\u003EXu\u0026rsquo;s recent work focuses on methods to understand the varied expressions in human language and to generate paraphrases for applications, such as reading and writing assistive technology. She has also worked on crowdsourcing, summarization, and information extraction for user-generated data, such as Twitter and StackOverflow.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Take a quick glance at the new research\u00a0coming to the School of Interactive Computing in 2020."}],"uid":"33939","created_gmt":"2020-09-02 17:13:29","changed_gmt":"2020-09-02 17:13:29","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-09-02T00:00:00-04:00","iso_date":"2020-09-02T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"638702":{"id":"638702","type":"image","title":"New IC faculty 2020","body":null,"created":"1599066470","gmt_created":"2020-09-02 17:07:50","changed":"1599066470","gmt_changed":"2020-09-02 17:07:50","alt":"Sashank Varma, Sehoon Ha, Chris Le Dantec, Wei Xu, Alan Ritter, Andrea Grimes Parker, Jennifer Kim","file":{"fid":"242860","name":"New IC Faculty 2020.png","image_path":"\/sites\/default\/files\/images\/New%20IC%20Faculty%202020.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/New%20IC%20Faculty%202020.png","mime":"image\/png","size":1039808,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/New%20IC%20Faculty%202020.png?itok=EBKlSdn8"}}},"media_ids":["638702"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU 
Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181216","name":"cc-research"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"638689":{"#nid":"638689","#data":{"type":"news","title":"IC Student Ceara Byrne Trades Dog Toys for Masks to Chip in on Covid Relief","body":[{"value":"\u003Cp\u003EWhat do dog toys have to do with Covid-19 pandemic relief? Leave it to a Georgia Tech student to find a connection.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing Ph.D. student \u003Cstrong\u003ECeara Byrne\u003C\/strong\u003E, whose primary research focuses on instrumenting dog toys with various sensors to measure canine behavior, found a way to contribute to the cause when she was approached by a fellow Georgia Tech student for assistance in 3D printing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELee Whitcher\u003C\/strong\u003E, a Ph.D. student in the \u003Ca href=\u0022https:\/\/www.ae.gatech.edu\/\u0022\u003EDaniel Guggenheim School of Aerospace Engineering\u003C\/a\u003E, had already joined colleagues from the \u003Ca href=\u0022https:\/\/gtri.gatech.edu\/\u0022\u003EGeorgia Tech Research Institute\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/www.me.gatech.edu\/\u0022\u003EGeorge W. 
Woodruff School of Mechanical Engineering\u003C\/a\u003E to design and manufacture personal protective equipment (PPE) like face shields to supplement the available supplies in the Atlanta area.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work from GTRI and ME assisted in hospitals, and Whitcher\u0026rsquo;s work \u0026ndash; a non-profit called \u003Ca href=\u0022http:\/\/AtlantaBeatsCOVID.com\u0022\u003EAtlanta Beats COVID\u003C\/a\u003E \u0026ndash; aimed to design and produce masks and ventilators that could be produced by non-engineers wherever they are needed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo do that, Whitcher and his partners needed a 3D printer that could cast the negatives for the masks. Georgia Tech\u0026rsquo;s \u003Ca href=\u0022https:\/\/gvu.gatech.edu\/\u0022\u003EGVU\u003C\/a\u003E Prototyping Lab in the Technology Square Research Building had just what they needed. So did Byrne.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EByrne has been using the Prototyping Lab\u0026rsquo;s printer for a while now to develop negatives of the silicone dog toys she uses in her research. Byrne\u0026rsquo;s work involves studying behavior in canines to understand temperament for service animals.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was inspired by a friend from high school who grew up on a ranch,\u0026rdquo; Byrne said. \u0026ldquo;She and I got involved in 4-H. When I came back for a master\u0026rsquo;s degree, I started working with \u003Cstrong\u003EThad Starner\u003C\/strong\u003E and \u003Cstrong\u003EMelody Jackson\u003C\/strong\u003E on the FIDO project. I started noticing these aspects of the data that were reflective of dog temperament like drive and how they tackle activities. It really interested me.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPart of the research was to find good ways to measure that temperament beyond just visual observation. 
One solution was to place sensors into toys to take measurements as the dog played with them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;ve used the Prototyping Lab to 3D print my negative molds so that I can silicone cast the positives like balls and tug toys,\u0026rdquo; Byrne said. \u0026ldquo;It\u0026rsquo;s a long process of finding the right silicones, materials, hardness. For the toys, I went through three or four different molds to find the right way to actually cast the parts. It was a lot of experimenting.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat experimentation made her uniquely prepared to chip in with Whitcher\u0026rsquo;s project when Covid-19 hit. Looking for a way to develop the right mold for easy do-it-yourself mask production, Whitcher turned to Byrne for assistance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There are a number of aspects to it,\u0026rdquo; Byrne said. \u0026ldquo;How do you de-gas some of the silicone? When you have a mask, you can\u0026rsquo;t have the bubbles in the mold because you need a seal. How do you do it with the vacuum? If there\u0026rsquo;s no vacuum available, what are some easier ways? How do we make these negatives properly, and how many can you cast at once? What are the environmental aspects when you do it from home?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese are all questions Byrne has had to explore when it comes to her dog toys. The experience proved useful in the mask production, as well.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EByrne was happy to get involved in pandemic relief assistance. She has brothers and sisters-in-law who are doctors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;They\u0026rsquo;ve been amazing in helping around the community,\u0026rdquo; she said. \u0026ldquo;My brother is making masks, which I think is fascinating. He\u0026rsquo;s a radiation oncologist and has built respiratory masks with the Pancreatic Cancer Foundation. 
So, I wanted to help out in any way that I could, as well.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBeing at Georgia Tech, she said, made the collaboration a natural occurrence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That\u0026rsquo;s what makes Georgia Tech unique, right?\u0026rdquo; she said. \u0026ldquo;We can collaborate across these disciplines that maybe don\u0026rsquo;t connect to each other on the surface.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERead more about the relief effort, how to request PPE, and how to get involved at \u003Ca href=\u0022http:\/\/AtlantaBeatsCOVID.com\u0022\u003EAtlantaBeatsCOVID.com\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Byrne, whose work uses a 3D printer to make dog toys, is using her expertise to help in mask production."}],"uid":"33939","created_gmt":"2020-09-01 22:53:17","changed_gmt":"2020-09-01 22:53:17","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-09-01T00:00:00-04:00","iso_date":"2020-09-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"638688":{"id":"638688","type":"image","title":"Ceara Byrne","body":null,"created":"1598997204","gmt_created":"2020-09-01 21:53:24","changed":"1598997204","gmt_changed":"2020-09-01 21:53:24","alt":"ceara byrne","file":{"fid":"242857","name":"heart-innovation.jpg","image_path":"\/sites\/default\/files\/images\/heart-innovation.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/heart-innovation.jpg","mime":"image\/jpeg","size":20656,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/heart-innovation.jpg?itok=orZbCYDG"}}},"media_ids":["638688"],"related_links":[{"url":"https:\/\/ae.gatech.edu\/news\/2020\/04\/what-engineers-do-crisis","title":"What Engineers Do in a 
Crisis"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"185769","name":"cc-research; ic-hcc; COVID-19"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"638165":{"#nid":"638165","#data":{"type":"news","title":"OMSCS Shines at Top Educational Technology Conference","body":[{"value":"\u003Cp\u003EGeorgia Tech \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/623681\/omscs-dominates-learning-scale\u0022\u003Econtinues to be one of the stars\u003C\/a\u003E at \u003Ca href=\u0022https:\/\/learningatscale.acm.org\/las2020\u0022\u003ELearning @ Scale (L@S)\u003C\/a\u003E, the Association for Computing Machinery\u0026rsquo;s annual conference celebrating digital learning. 
With 17 research projects and students presiding over the conference, the College of Computing is one of the leaders in this space.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/david-joyner\u0022\u003E\u003Cstrong\u003EDavid Joyner\u003C\/strong\u003E\u003C\/a\u003E, the executive director of the \u003Ca href=\u0022http:\/\/www.omscs.gatech.edu\/\u0022\u003EOnline Master of Science in Computer Science (OMSCS)\u003C\/a\u003E and online education, and OMSCS student \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/ambassadorcxo\u0022\u003E\u003Cstrong\u003ERobert Schmidt\u003C\/strong\u003E\u003C\/a\u003E are on the organizing committee. Joyner is also involved in the steering and programming committees.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOMSCS\u0026rsquo;s influence also extends to research with five full papers and 12 short papers on everything from plagiarism to peer evaluations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The most remarkable thing to me is the number of students, including online students, with work at the conference,\u0026rdquo; Joyner said. \u0026ldquo;It\u0026rsquo;s evidence that OMSCS students have a thirst for research opportunities and will make the most of them when they have them.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrior to the Covid-19 pandemic, the digital learning community had planned to gather Aug. 12-14 in Atlanta for this year\u0026rsquo;s conference. If there was any conference that could be virtual, however, it\u0026rsquo;s one devoted to online learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStill, there were challenges to moving everything online. Schmidt is in charge of organizing workshops and had to determine how to move everything to Zoom.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There are a lot of questions you need to ask to prepare,\u0026rdquo; he said. \u0026ldquo;Do you have multiple computers? What do you do if the network goes down? 
It\u0026rsquo;s about making sure all software works before the session starts, and that you have everything you need in front of you.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYet Schmidt is confident they are prepared.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;At OMSCS, we\u0026rsquo;re experts in this,\u0026rdquo; he said.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech continues to be one of the stars at Learning @ Scale (L@S), the Association of Computing Machinery\u2019s annual conference celebrating digital learning."}],"uid":"34541","created_gmt":"2020-08-20 21:11:40","changed_gmt":"2020-08-20 21:17:54","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-08-20T00:00:00-04:00","iso_date":"2020-08-20T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"638166":{"id":"638166","type":"image","title":"OMSCS Tour","body":null,"created":"1597958250","gmt_created":"2020-08-20 21:17:30","changed":"1597958250","gmt_changed":"2020-08-20 21:17:30","alt":"OMSCS campus tour","file":{"fid":"242713","name":"47790952601_7e4edb316e_c.jpg","image_path":"\/sites\/default\/files\/images\/47790952601_7e4edb316e_c.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/47790952601_7e4edb316e_c.jpg","mime":"image\/jpeg","size":203625,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/47790952601_7e4edb316e_c.jpg?itok=9OKNg7yJ"}}},"media_ids":["638166"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:tess.malone@cc.gatech.edu\u0022\u003Etess.malone@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"637711":{"#nid":"637711","#data":{"type":"news","title":"Two IC Grads Earn Sigma Xi Best Ph.D. Thesis Awards","body":[{"value":"\u003Cp\u003ERecent Georgia Tech Ph.D. graduates \u003Cstrong\u003ECaitlyn Seim\u003C\/strong\u003E and \u003Cstrong\u003EAishwarya Agrawal\u003C\/strong\u003E, both from the School of Interactive Computing, were awarded the 2020 Sigma Xi Best Ph.D. Thesis Award. They were two of just 10 Ph.D. students at Georgia Tech recognized with the honor.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim\u0026rsquo;s thesis, titled \u003Cem\u003EWearable Vibrotactile Stimulation: How Passive Stimulation Can Train and Rehabilitate\u003C\/em\u003E, presents a technique in which a vibrating wearable device is used to retrain motor function following debilitating occurrences of spinal fracture or stroke. Now a postdoc at Stanford University and a fellow with the National Institutes of Health, Seim is currently working with stroke survivors to develop accessible and functional wearable devices to reduce physical disability in both the upper and lower limbs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Lately, I have also developed new mechanical tools to assess hand and arm function when there are no quantitative clinical tests available,\u0026rdquo; Seim said. 
\u0026ldquo;I plan to continue research on wearable and ubiquitous systems for health, accessibility, and training.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn Agrawal\u0026rsquo;s thesis, titled \u003Cem\u003EVisual Question Answering and Beyond\u003C\/em\u003E, she explores a multi-modal artificial intelligence task called visual question answering. In this task, given an image and natural language question about it, a machine is programmed to automatically produce an accurate natural language answer. The applications of VQA include aiding visually impaired users in understanding their surroundings, aiding analysts in examining large quantities of surveillance data, teaching children through interactive demos, interacting with personal AI assistants, and making visual social media content more accessible.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENow at DeepMind and soon to be an assistant professor at the University of Montreal and Mila, an AI research institute, Agrawal intends to equip current VQA systems with better skills to move toward artificial general intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In the long term, I am excited about science fiction becoming reality, when we all have smart virtual assistants that can see and talk and serve as an aid to visually impaired users,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe eight other recipients of the Georgia Tech Sigma Xi Best Ph.D. Thesis Award were Mingue Kim (ECE), Ming Zhao (Chemistry), Andres Caballero (BME), Ke (Chris) Liu (CEE), Monica McNerney (ChBE), Chris Sugino (ME), Hamid Reza Seyf (ME), and Eric Tervo (ME).\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"They were two of just 10 Ph.D. 
students at Georgia Tech recognized with the honor."}],"uid":"33939","created_gmt":"2020-08-10 13:46:02","changed_gmt":"2020-08-10 13:46:02","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-08-10T00:00:00-04:00","iso_date":"2020-08-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"637710":{"id":"637710","type":"image","title":"Aishwarya Agrawal and Caitlyn Seim","body":null,"created":"1597067128","gmt_created":"2020-08-10 13:45:28","changed":"1597067128","gmt_changed":"2020-08-10 13:45:28","alt":"Aishwarya Agrawal and Caitlyn Seim","file":{"fid":"242547","name":"Personal Vlog YouTube Thumbnail.png","image_path":"\/sites\/default\/files\/images\/Personal%20Vlog%20YouTube%20Thumbnail.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Personal%20Vlog%20YouTube%20Thumbnail.png","mime":"image\/png","size":1057627,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Personal%20Vlog%20YouTube%20Thumbnail.png?itok=SUD_F-qp"}}},"media_ids":["637710"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"637104":{"#nid":"637104","#data":{"type":"news","title":"Computing Alum is Shaping the Future of Software Engineering","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EJeremy Duvall\u003C\/strong\u003E, BS CS 07, MS CS 13, spent a decade in Georgia Tech\u0026rsquo;s classrooms \u0026ndash; at times in an uphill battle \u0026ndash; to earn his degrees from one of the nation\u0026rsquo;s top five public universities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter Georgia Tech put him on academic probation, he had to leave after not making the minimum GPA, but upon returning home to Blairsville, 100 miles north of Atlanta, he realized he had to go back. It simply wasn\u0026rsquo;t in him to quit.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter appealing his status, Duvall was readmitted and went on to complete his bachelor\u0026rsquo;s in computer science with a 3.0 GPA.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe did the master\u0026rsquo;s degree in CS the hard way as well, launching right into grad school while working as a software development engineer for several companies along the way, including Microsoft and Deloitte.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFive years later, he \u0026ldquo;got out\u0026rdquo; again, two semesters prior to the launch of Georgia Tech\u0026#39;s now nationally known Online Master of Science in CS program. 
That program currently enrolls more than 8,000 students and is helping fill the national shortage in software development talent.\u003C\/p\u003E\r\n\r\n\u003Cblockquote\u003E\r\n\u003Cp\u003E\u0026quot;Without custom software, your company might become irrelevant, especially if software is part of your strategic focus.\u0026quot;\u003C\/p\u003E\r\n\u003C\/blockquote\u003E\r\n\r\n\u003Cp\u003EAfter more than ten years of being what Duvall calls a \u0026ldquo;software craftsman,\u0026rdquo; building and advising others on how to develop software that is \u0026ldquo;rugged, performant, and beautiful\u0026rdquo; for nearly every industry, the 36-year-old set up his own shop in his adopted home of Atlanta. Duvall says he knew that the city\u0026#39;s vibrant tech community would be able to provide the talent, resources, and growth opportunities his new company needed to succeed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuvall founded 7Factor Software in 2016 to innovate in software delivery, from high-availability systems to software engineering and architecture to site reliability design.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026#39;ve worked at several places, and all of them had their own sort of software development life cycles and methodologies to solve problems for business stakeholders,\u0026rdquo; says Duvall, who lives in Sandy Springs. 
\u0026ldquo;It wasn\u0026rsquo;t until I was more on the consulting side that I learned that executives running $7 billion companies are just too busy to learn how the software gets built \u0026ndash; they only care if it works.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuvall says that this often leaves a lot of room for interpretation in how to deliver the final product, and in the computer software industry there\u0026rsquo;s no way to put a specific barometer on the quality of the software that\u0026rsquo;s being built.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I feel that it\u0026rsquo;s an important approach at 7Factor to first define and then deliver on quality software that meets the needs of people. That\u0026rsquo;s our approach with every customer.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo punctuate the point, he mentions his wife Judy, also a Georgia Tech graduate, who works in structural engineering, an industry where the physical integrity of buildings is paramount to protecting human life. Structural engineers have to be licensed and certified, and they must comply with exacting regulations. Software development may not have the same responsibility, but Duvall believes that day may be quickly approaching.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuvall is quick to point out that software engineering shouldn\u0026rsquo;t be just about pulling out a blueprint or working from a script. 
He compares his company rather to an artist using a blank canvas to paint a picture, one commissioned by clients who are in the room describing what they want.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The reason that 7Factor exists, and the thing that we try and hit on when we work with our customers, is to build the smallest, tightest, strongest teams possible, whose goal is to create inherently high-quality software that can positively impact the lives of many people.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One of my biggest customers provides care for children with special needs, so when my engineers are working on software that makes those caregivers\u0026#39; lives easier, we are indirectly impacting the lives of the families and the children that those caregivers serve,\u0026rdquo; Duvall says. \u0026ldquo;So we have a very lucky and wonderful conundrum on our hands, where we can solve problems that impact people directly in many different industries, not just one or two.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the constants in Duvall\u0026rsquo;s career has been focusing on the human element of computing, and he says that fundamentally his work is about building \u0026ldquo;human-centered software.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2007, he was at Danger Inc. helping write the operating system and services infrastructure for the T-Mobile Sidekick, one of the first smartphones to gain status with celebrities in the U.S. The phone also garnered a sizable following by those with vision impairments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was a mobile device that was accessible to the blind community in ways that other gadgets just weren\u0026rsquo;t at the time. 
We had full software development kits, and people were literally writing blind-enablement and disability-enablement apps for the Sidekick platform.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFast forward a decade, and Duvall started applying a similar people-first ethos to the software his company builds. His desire to help his clients become better informed and not settle for one-size-fits-all solutions is evidenced in how he approaches his work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/entrepreneurs\u0022\u003E[RELATED: Entrepreneurship at GT Computing]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Without custom software, your company might become irrelevant, especially if software is part of your strategic focus,\u0026rdquo; he says. \u0026ldquo;When you look at software engineering now with the ubiquity of cloud services and the fact that anybody can write software, the problem is that many people don\u0026rsquo;t often think about how to write quality software.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuvall\u0026rsquo;s advice to Georgia Tech students:\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Remember that software is built by a team of humans that has to be able to work with other teams of humans to align to a larger goal, something no one told me when I was in school and I wish someone had.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Also, don\u0026rsquo;t get stuck in one frame of reference with computer languages, which we have no shortage of. Think in terms of, \u0026lsquo;I am an engineer, not just a developer,\u0026rsquo; because once you think of your work as solving problems with code, you don\u0026#39;t care what code you\u0026#39;re using.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuvall recently made a financial gift to the James D. 
Foley GVU Center Endowment at Georgia Tech, which supports graduate students in computing-related disciplines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrough giving, he hopes to inspire incoming students and encourage them to believe that they can succeed despite any previous hardships or socioeconomic challenges.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuvall is living proof anyone can beat the odds. When he was readmitted to the institute, his professors held him accountable to his graduation plan.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I came back a completely different person, fully focused and determined to succeed, no matter what. My dad, who was a single parent, had driven trucks for a week at a time to send me to school. I couldn\u0026rsquo;t accept any less dedication than that in myself.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe had a 4.0 his first semester back while taking five CS courses. Duvall went on to win the President\u0026rsquo;s Undergraduate Research Award. He worked in the Pixi lab, directed by professor \u003Cstrong\u003EKeith Edwards\u003C\/strong\u003E, creating a zero-configuration router during the days when routers were complicated to configure.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAtlanta has been Duvall\u0026rsquo;s professional home ever since his days on the Tech campus lawn, and now with 21 employees at 7Factor, he\u0026rsquo;s looking at how to continue to grow organically and match his teams with the right customers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I think Atlanta is a great tech hub, and I think we have a lot of talent here. 
We have Georgia Tech, we have plenty of opportunities in front of us, and we need to keep the people here by providing the opportunities and jobs so we can continue to grow.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech alumnus and founder of 7Factor Software Jeremy Duvall recently made a financial gift to the James D. Foley GVU Center Endowment."}],"uid":"32045","created_gmt":"2020-07-20 14:43:13","changed_gmt":"2020-07-23 13:58:13","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-07-20T00:00:00-04:00","iso_date":"2020-07-20T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"637107":{"id":"637107","type":"image","title":"GT Computing Alum Jeremy Duvall","body":null,"created":"1595259004","gmt_created":"2020-07-20 15:30:04","changed":"1595259004","gmt_changed":"2020-07-20 15:30:04","alt":"Georgia Tech Alum Jeremy Duvall","file":{"fid":"242359","name":"Duvall5_thumbnail_Jeremy Duvall_headshot.jpg","image_path":"\/sites\/default\/files\/images\/Duvall5_thumbnail_Jeremy%20Duvall_headshot.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Duvall5_thumbnail_Jeremy%20Duvall_headshot.jpg","mime":"image\/jpeg","size":61224,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Duvall5_thumbnail_Jeremy%20Duvall_headshot.jpg?itok=UrvJ_zrJ"}}},"media_ids":["637107"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"66442","name":"MS HCI"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJoshua Preston, Communications 
Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:Jpreston@cc.gatech.edu?subject=Jeremy%20Duvall\u0022\u003EJpreston@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["Jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"636549":{"#nid":"636549","#data":{"type":"news","title":"C4G BLIS Update Improves Usability, Could Prove Useful in Fight Against Disease Outbreaks","body":[{"value":"\u003Cp\u003EAn update to a laboratory information system used in countries across Africa is improving usability and could prove critical in response to future disease outbreaks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2010, a group of researchers at \u003Ca href=\u0022http:\/\/gatech.edu\/\u0022\u003EGeorgia Tech\u003C\/a\u003E, the CDC, and Ministries of Health in several African countries launched an open-source laboratory management system as part of the \u003Ca href=\u0022https:\/\/ptc.gatech.edu\/computing-for-good-college-of-computing\u0022\u003ECollege of Computing\u0026rsquo;s Computing-for-Good\u003C\/a\u003E (C4G) initiative. Designed to be ultra-configurable to meet variable needs of labs across developing countries with minimal training for staff, it quickly grew to become one of C4G\u0026rsquo;s biggest success stories.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMore than 10 nations in sub-Saharan Africa adopted the program, called the \u003Ca href=\u0022http:\/\/blis.cc.gatech.edu\/\u0022\u003EBasic Laboratory Information System\u003C\/a\u003E (BLIS), giving areas with little or poor internet connectivity an easy-to-use system for many who had minimal computing experience. These countries, which had over 1 million patients at the time, were using paper-based systems to manage information on disease spread, local illnesses, and much more. 
As information and communications technologies have expanded in the area, however, many labs gained a standardized reporting system that could track prevalence rates of infections, slowing their spread.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut a lot can change in just 10 years. What was once designed for personal computing interfaces is now desired for a wide range of new platforms. Although laptops are still the device of choice for the majority of nurses \u0026ndash; 79.6 percent reported in a study of a Nigerian hospital \u0026ndash; smartphones and tablets have seen a steady increase. The coming years will include many more innovations that render even those obsolete.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs users in the global south aspire to embrace mobile computing in clinical settings, a flexible interface, adaptable to ever-changing applications, is needed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEnter: \u003Cstrong\u003EJung Wook Park\u003C\/strong\u003E and \u003Cstrong\u003EAditi Shah\u003C\/strong\u003E, a Ph.D. student in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) and former master\u0026rsquo;s student in the \u003Ca href=\u0022http:\/\/scs.gatech.edu\/\u0022\u003ESchool of Computer Science\u003C\/a\u003E (SCS), respectively. Along with SCS Professor \u003Cstrong\u003ESantosh Vempala\u003C\/strong\u003E and IC Principal Research Scientist \u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E, Park and Shah published research updating the current interface of C4G BLIS.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETheir updates focused on a handful of key areas, primarily mobile support. 
A responsive user interface framework supporting various screen sizes and resolutions was developed and evaluated by real users at hospitals in Africa currently using BLIS.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey compared user experience with the current interface on desktops and smartphones with a proposed interface on both and found that there was a significant improvement on both the desktop and smartphone.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When you bring in a new system, they may feel uncomfortable with it,\u0026rdquo; Park said. \u0026ldquo;If we didn\u0026rsquo;t do a great job, you might get the same score or lower at the beginning. Over time, we saw improvements of 32 and 34 percent on desktops and smartphones.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShah, now at Microsoft, offered plenty of help in the development of the system, and her experience with a visual impairment allowed her to provide perspective on accessibility, as well.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe implications of this research extend far beyond ease of use for nurses, however. Park identified a growing problem across the globe in health care: communication. As the current pandemic can illustrate, viruses and diseases can spread quickly across many different populations. It isn\u0026rsquo;t sufficient to have just local data to mount an appropriate response; teams around the world must be able to rapidly share information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA system like C4G BLIS, with its improved user interface that can be used across multiple platforms depending on the local needs of various communities, can help that communication.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you notice something locally and maybe other areas of the country or continent notice something, how do you know if it is a pandemic?\u0026rdquo; Park posed. \u0026ldquo;You need to be able to share that information to manage the spread. 
By turning these local systems into a standardized cloud-based system, we can improve communication.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlready, Vempala said, he has heard reports from many labs that have adapted the flexible system to keep track of COVID-19 data in their communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper is titled \u003Cem\u003ERedesigning a Basic Laboratory Information System for the Global South\u003C\/em\u003E, and was presented at the International Telecommunication Union Kaleidoscope conference, earning a Best Paper award.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A system that has helped bring digital record keeping to hospitals across Africa has received a needed update for new platforms like smartphones and tablets."}],"uid":"33939","created_gmt":"2020-06-25 20:22:10","changed_gmt":"2020-06-25 20:22:10","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-25T00:00:00-04:00","iso_date":"2020-06-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636548":{"id":"636548","type":"image","title":"Jung Wook Park and Aditi Shah","body":null,"created":"1593116182","gmt_created":"2020-06-25 20:16:22","changed":"1593116182","gmt_changed":"2020-06-25 20:16:22","alt":"Jung Wook Park and Aditi Shah","file":{"fid":"242182","name":"Shah and Park Image.png","image_path":"\/sites\/default\/files\/images\/Shah%20and%20Park%20Image.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Shah%20and%20Park%20Image.png","mime":"image\/png","size":1093782,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Shah%20and%20Park%20Image.png?itok=qfxEI99m"}}},"media_ids":["636548"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU 
Center"},{"id":"431631","name":"OMS"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"184890","name":"cc-research; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"636196":{"#nid":"636196","#data":{"type":"news","title":"ML@GT Faculty Members Will Discuss Projects Related to Covid-19 Relief During Virtual Panel","body":[{"value":"\u003Cp\u003EThe coronavirus (Covid-19) pandemic has wreaked havoc on the world, spurring researchers across disciplines into action to help humankind. Four researchers affiliated with the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E and one \u003Ca href=\u0022https:\/\/omscs.gatech.edu\/\u0022\u003EOnline Master of Science in Computer Science (OMSCS)\u003C\/a\u003E student examined different aspects of the virus\u0026rsquo;s impact. 
From creating forecasting models to studying the psychological impact of the disease, these researchers are helping people understand the virus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOn June 24, ML@GT faculty members \u003Cstrong\u003ESrijan Kumar\u003C\/strong\u003E (School of Computational Science and Engineering), \u003Cstrong\u003EAditya Prakash\u003C\/strong\u003E (School of Computational Science and Engineering), \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E (School of Interactive Computing), \u003Cstrong\u003ENicoleta Serban\u003C\/strong\u003E (H. Milton Stewart School of Industrial and Systems Engineering), and OMSCS student \u003Cstrong\u003EKenneth Miller\u003C\/strong\u003E will participate in a virtual panel discussing their work. The panel will be moderated by ML@GT executive director \u003Cstrong\u003EIrfan Essa\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPanelists will give individual presentations before participating in a general question-and-answer segment with audience members.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EKumar and De Choudhury will share details of their work regarding the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/hg\/item\/635397\u0022\u003Epsychological impact of Covid-19\u003C\/a\u003E. 
Kumar will also discuss his work examining \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/635858\/predicting-hate-crimes-targeting-asian-americans-amid-covid-19-outbreak\u0022\u003Ehate and counter-hate messages on Twitter against Asian Americans\u003C\/a\u003E during the pandemic.\u003C\/li\u003E\r\n\t\u003Cli\u003EPrakash is a member of the Centers for Disease Control and Prevention\u0026rsquo;s (CDC) forecasting team and will share the team\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/635849\/forecasting-covid-19-pandemic-united-states\u0022\u003Enew data-driven approach to disease forecasting\u003C\/a\u003E.\u003C\/li\u003E\r\n\t\u003Cli\u003ESerban\u0026rsquo;s presentation will focus on her work creating an \u003Ca href=\u0022https:\/\/www.georgiahealthnews.com\/2020\/05\/georgia-tech-model-predicts-spike-covid-cases-deaths\/\u0022\u003Eagent-based simulation forecasting model\u003C\/a\u003E. This model captures the progression of the disease in an individual and in households, schools, communities, and workplaces.\u003C\/li\u003E\r\n\t\u003Cli\u003EA lawyer by day and OMSCS student by night, Miller participated in a Kaggle challenge using natural language processing and machine learning to \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/635081\/omscs-student-uses-machine-learning-help-understand-covid-19\u0022\u003Ehelp doctors and scientists read the most important studies\u003C\/a\u003E related to Covid-19.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EThe panel will take place virtually via a Bluejeans Event at 11 a.m. on June 24 and is open to the public. 
\u003Ca href=\u0022https:\/\/primetime.bluejeans.com\/a2m\/register\/sfpbpsgg\u0022\u003ERegistration is required\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020."}],"uid":"34773","created_gmt":"2020-06-12 13:40:53","changed_gmt":"2020-06-15 19:52:10","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-12T00:00:00-04:00","iso_date":"2020-06-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636195":{"id":"636195","type":"image","title":"Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.","body":null,"created":"1591969094","gmt_created":"2020-06-12 13:38:14","changed":"1591969094","gmt_changed":"2020-06-12 13:38:14","alt":"Members of the ML@GT community will discuss their Covid-19 related research efforts in a panel discussion on June 24, 2020.","file":{"fid":"242073","name":"Using Machine Learning to Respond to Covid-19.png","image_path":"\/sites\/default\/files\/images\/Using%20Machine%20Learning%20to%20Respond%20to%20Covid-19.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Using%20Machine%20Learning%20to%20Respond%20to%20Covid-19.png","mime":"image\/png","size":504783,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Using%20Machine%20Learning%20to%20Respond%20to%20Covid-19.png?itok=HSZ2sXoG"}}},"media_ids":["636195"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive 
Computing"},{"id":"431631","name":"OMS"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"636173":{"#nid":"636173","#data":{"type":"news","title":"Research Conference Shows Social Challenges are Manifested, Magnified, and Mitigated Online at Pivotal Time for Nation","body":[{"value":"\u003Cp\u003EThe value of online mental health communities, how crisis events are described differently over time on social media, and refining how cyberbullying is detected and classified are major topics of research by Georgia Institute of Technology researchers at this week\u0026rsquo;s International Conference on Web and Social Media (ICWSM), taking place virtually. It was originally scheduled to be held in Atlanta near the Georgia Tech campus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOver 220 academics at the 14\u003Csup\u003Eth\u003C\/sup\u003E annual event are convening and discussing work that is especially relevant during a time of an ongoing global health crisis and social unrest that has taken root across the United States.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch in the conference proceedings include many topics directly addressing social ills and injustices that are magnified online as well as potential ways to better understand and mitigate them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeveral College of Computing faculty, current and former students, and postdoctoral researchers are part of the organizing committee. 
\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E (Interactive Computing) is serving as the general chair of the conference this year. Former Human-Centered Computing PhD student \u003Cstrong\u003EStevie Chancellor\u003C\/strong\u003E is workshop chair, former Computer Science PhD student \u003Cstrong\u003ETanushree Mitra\u003C\/strong\u003E is tutorials chair, current CS PhD student \u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E is web chair, and current postdoc \u003Cstrong\u003ETalayeh Aledavood\u003C\/strong\u003E is local\/social chair. CoC faculty \u003Cstrong\u003EDiyi Yang\u003C\/strong\u003E (Interactive Computing) and \u003Cstrong\u003ESrijan Kumar\u003C\/strong\u003E (Computational Science and Engineering) are data challenge chairs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the two keynotes at the conference is by IC faculty \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech has three papers in this year\u0026rsquo;s program:\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EA study in causal inference by CS PhD student \u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E that tests what leads to favorable psychosocial outcomes in mental health forums.\u003Cbr \/\u003E\r\n\t\u003Cem\u003ELink: \u003C\/em\u003E\u003Ca href=\u0022https:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7326\u0022\u003E\u003Cem\u003Ehttps:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7326\u003C\/em\u003E\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003EA paper by HCC PhD student \u003Cstrong\u003EIan Stewart\u003C\/strong\u003E, with advisors \u003Cstrong\u003EDiyi Yang\u003C\/strong\u003E and \u003Cstrong\u003EJacob Eisenstein\u003C\/strong\u003E, that intends to gather a sharper view of \u0026ldquo;collective attention\u0026rdquo; on social media. 
Looking at descriptive details for a crisis event, researchers find that the information needed to describe that event changes as time goes on.\u003Cbr \/\u003E\r\n\t\u003Cem\u003ELink: \u003C\/em\u003E\u003Ca href=\u0022https:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7331\u0022\u003E\u003Cem\u003Ehttps:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7331\u003C\/em\u003E\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003EA socially-inspired approach to detect cyberbullying online, by incoming PhD student \u003Cstrong\u003ECaleb Ziems\u003C\/strong\u003E. The paper proposes new criteria for cyberbullying (e.g. harmful intent) and finds that both text and social features help prediction. This paper has been recognized with an Honorable Mention Award, given to a total of eight papers this year.\u003Cbr \/\u003E\r\n\t\u003Cem\u003ELink: \u003C\/em\u003E\u003Ca href=\u0022https:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7345\u0022\u003E\u003Cem\u003Ehttps:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7345\u003C\/em\u003E\u003C\/a\u003E\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EFor details about more research and to read the organizing committee\u0026rsquo;s full statement on the commitment to Black Lives Matter, fighting structural racism, and promoting inclusion and equity, go to \u003Ca href=\u0022https:\/\/www.icwsm.org\/2020\/index.html\u0022\u003Ehttps:\/\/www.icwsm.org\/2020\/index.html\u003C\/a\u003E. 
In the wake of current events in the United States, the conference made 20 registration fee waivers available for Black scholars and individuals from other marginalized groups throughout the world, and provided scheduling flexibility to speakers and attendees participating in the Shutdown STEM walkout on June 10.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference is sponsored by the Association for the Advancement of Artificial Intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe value of online mental health communities, how crisis events are described differently over time on social media, and refining how cyberbullying is detected and classified are major topics of research by Georgia Institute of Technology researchers at this week\u0026rsquo;s International Conference on Web and Social Media (ICWSM 2020).\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"The value of online mental health communities, how crisis events are described differently over time on social media, and refining how cyberbullying is detected and classified are major topics of research from Georgia Tech at ICWSM 2020."}],"uid":"27592","created_gmt":"2020-06-11 15:20:55","changed_gmt":"2020-06-11 15:25:41","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-10T00:00:00-04:00","iso_date":"2020-06-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636174":{"id":"636174","type":"image","title":"International Conference on Web and Social Media (ICWSM 2020)","body":null,"created":"1591888971","gmt_created":"2020-06-11 15:22:51","changed":"1591888971","gmt_changed":"2020-06-11 15:22:51","alt":"","file":{"fid":"242065","name":"ICWSM 
2020.png","image_path":"\/sites\/default\/files\/images\/ICWSM%202020.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ICWSM%202020.png","mime":"image\/png","size":4680771,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ICWSM%202020.png?itok=_IyYw1Qt"}}},"media_ids":["636174"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu?subject=ICWSM%202020\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\nGVU Center and College of Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"636082":{"#nid":"636082","#data":{"type":"news","title":"Dellaert Awarded IEEE ICRA Milestone Award","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EFrank Dellaert\u003C\/strong\u003E, a professor in the\u0026nbsp;\u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, and affiliated with the\u0026nbsp;\u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E\u0026nbsp;and\u0026nbsp;\u003Ca href=\u0022https:\/\/gvu.gatech.edu\/\u0022\u003EGVU Center\u003C\/a\u003E, has been honored with the IEEE ICRA Milestone Award at the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.icra2020.org\/\u0022\u003E2020 IEEE International Conference on Robotics and Automation 
(ICRA)\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award recognizes the most influential ICRA paper published between 1998 and 2002 and selected\u0026nbsp;\u003Ca href=\u0022https:\/\/www.ri.cmu.edu\/pub_files\/pub1\/dellaert_frank_1999_2\/dellaert_frank_1999_2.pdf\u0022\u003E\u003Cem\u003EMonte Carlo Localization for Mobile Robots\u003C\/em\u003E\u003C\/a\u003E\u0026nbsp;as this year\u0026rsquo;s recipient. Dellaert conducted this work during his Ph.D. studies at Carnegie Mellon University with\u0026nbsp;\u003Cstrong\u003EDieter Fox, Wolfram Burgard\u003C\/strong\u003E, and\u0026nbsp;\u003Cstrong\u003ESebastian Thrun\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It is a great honor to be recognized, but receiving a \u0026rsquo;20 years on\u0026rsquo; milestone award also makes you feel old!\u0026rdquo; said Dellaert.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper was accepted to ICRA in 1999 and introduced the Monte Carlo Localization (MCL) method, also known as particle filter localization, which represents a probability density by maintaining a set of samples randomly drawn from it. This method is faster, more accurate, and less memory-intensive than earlier grid-based methods and allows a robot to be localized without knowledge of its starting location.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMCL is simple to apply to the robotics domain, leading to its popularity. It is now taught in every robotics 101 class around the world. Many mobile robots, including commercial efforts, rely on MCL for localizing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Simplicity is key for acceptance and you cannot predict which of your research will have the most impact. This paper was a result of me procrastinating on my Ph.D. thesis, which is a paper almost nobody read. 
It is an enormous honor that MCL has made a lasting impact on our field,\u0026rdquo; said Dellaert.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The award recognizes the most influential ICRA paper published between 1998-2002 and selected\u00a0Monte Carlo Localization for Mobile Robots\u00a0as this year\u2019s recipient. "}],"uid":"34773","created_gmt":"2020-06-09 15:09:11","changed_gmt":"2020-06-09 15:09:11","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-09T00:00:00-04:00","iso_date":"2020-06-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636081":{"id":"636081","type":"image","title":"Frank Dellaert, a professor in the School of Interactive Computing, and affiliated with the Machine Learning Center at Georgia Tech (ML@GT) and GVU Center, has been honored with the IEEE ICRA Milestone Award at the 2020 IEEE International Conference on Ro","body":null,"created":"1591715211","gmt_created":"2020-06-09 15:06:51","changed":"1591715211","gmt_changed":"2020-06-09 15:06:51","alt":"","file":{"fid":"242027","name":"frank-dellaert2.jpeg","image_path":"\/sites\/default\/files\/images\/frank-dellaert2.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/frank-dellaert2.jpeg","mime":"image\/jpeg","size":126074,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/frank-dellaert2.jpeg?itok=Ks7F6Fyh"}}},"media_ids":["636081"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"135","name":"Research"},{"id":"153","name":"Computer Science\/Information Technology and 
Security"},{"id":"152","name":"Robotics"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"635397":{"#nid":"635397","#data":{"type":"news","title":"NSF Grant to Fund Georgia Tech Research into Psychological Impact of COVID-19","body":[{"value":"\u003Cp\u003EArguably the most visible of all prescriptions to the COVID-19 pandemic this year have been guidelines or imposed restrictions commonly referred to as \u0026ldquo;social distancing.\u0026rdquo; Less physical contact, the thinking goes, means a lowered risk of viral transmission.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELike the virus itself, however, stress and anxiety stemming from overconsumption of news or other media can spread through social networks. As the mental health fallout becomes clearer, are some similar social media distancing recommendations needed to stem the flow through the online world?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA multidisciplinary team of researchers at Georgia Tech, Washington University-St. 
Louis, and the University of Wisconsin-Madison argue that these mental health implications of the pandemic are equally important, and \u003Ca href=\u0022https:\/\/www.nsf.gov\/awardsearch\/showAward?AWD_ID=2027689\u0022\u003Ea new grant from the National Science Foundation (NSF) has recently funded new research to that effect\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s not just the fear and anxiety that I might get infected or I might infect or know someone who is infected,\u0026rdquo; said \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E, an associate professor in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and the co-principal investigator on the project. \u0026ldquo;It\u0026rsquo;s all of these things around it that are furthering the psychological impact. It\u0026rsquo;s very different from other kinds of illnesses or pandemics because of the uncertainty of the crisis. We simply don\u0026rsquo;t know how long we are into it.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe grant is funded by the NSF\u0026rsquo;s Rapid Response Project program, which is intended for research that addresses an immediate need within society. 
It has provided $200,000 toward the yearlong project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research will combine investigations in two separate environments: the online world, where news, personal posts, videos, and other media are shared rampantly across social networks, and the offline real world, where the epidemiological data about the spread of the virus or economic data about the financial fallout can be measured.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor the former, they will use social media data from various popular social platforms \u0026ndash; Twitter, Reddit, and YouTube \u0026ndash; to measure the spread of information and how consumers of it express themselves in terms of anxiety or fear, or what they are saying about their own psychological wellbeing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;How often are people expressing anger or fear or blaming someone through their posts?\u0026rdquo; said \u003Cstrong\u003ESrijan Kumar\u003C\/strong\u003E, an assistant professor in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/cse.gatech.edu\/\u0022\u003ESchool of Computational Science and Engineering\u003C\/a\u003E and the other co-principal investigator. \u0026ldquo;We\u0026rsquo;ll develop new classifiers using natural language processing that will help us classify social posts into two categories: either anxiety-inducing or anxiety itself.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis is new territory, according to De Choudhury. Although there have been other pandemics such as the 1918 influenza epidemic, none of this magnitude have taken place during the digital\/social age. And while social media provides an important mechanism for staying informed and remaining in contact with friends and loved ones during the difficult social distancing measures, overexposure could result in negative mental health consequences.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There is probably a sweet spot,\u0026rdquo; De Choudhury said. 
\u0026ldquo;Just like we need physical distancing in the real world, we probably need to practice distancing from social media or online information to an extent to avoid consuming too much anxiety-inducing media, while also staying informed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If I say something, it doesn\u0026rsquo;t just affect me. It affects all the people who read my posts. If they share it or if they post something, then it affects all of their social neighbors. It can be an outward ripple that affects people. We want to measure that, how they spread through social networks.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey\u0026rsquo;ll compare that data with the other element: the offline world. Currently, people in New York City are likely more stressed and anxious in a different way than people in Georgia. New York has been the epicenter of the viral outbreak in the United States, meaning that much of the anxiety locally stems from the virus itself.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EWill I contract the virus? Will someone I know contract the virus? Can I go to the store for groceries? How much disinfecting is required when I return home?\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd then, you can tease out that geographical data. How are higher-income individuals stressed in comparison to lower-income? What about differences along racial lines? Data has shown higher mortality rates in African-Americans, for example, which leads to different fears than those in other communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn U.S. cities where there is also sufficient social media data, they will examine this offline data to see rates of infection, fatalities, when shelter-in-place was imposed, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe final piece will be what they will do with this information. 
The goal is to create tools for social platforms to provide coping techniques or guidelines for use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Maybe that might include encouraging you to limit the amount of time you spend on social media,\u0026rdquo; Kumar said. \u0026ldquo;Or, maybe you step out and do something with family members. Some kind of physical activity. Then we can begin to examine how people react to these messages. Do we see that their anxiety levels are coming down, or not?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In this time, we have a very unique lens to study this pandemic in a whole new light as opposed to other events of a global scale,\u0026rdquo; De Choudhury said. \u0026ldquo;There is no guarantee this won\u0026rsquo;t come back. And even if it doesn\u0026rsquo;t, something else will. Being able to have these tools built and available will better prepare us for the future.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more coverage of Georgia Tech\u0026rsquo;s response to the coronavirus pandemic, please visit our \u003Ca href=\u0022https:\/\/helpingstories.gatech.edu\/\u0022\u003EResponding to COVID-19 page\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A multidisciplinary team of researchers has received a grant from the NSF to study the mental health outcomes of COVID-19 through examination of social media activity and geographic epidemiological data."}],"uid":"33939","created_gmt":"2020-05-15 16:40:10","changed_gmt":"2020-06-04 13:09:47","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-05-15T00:00:00-04:00","iso_date":"2020-05-15T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"635396":{"id":"635396","type":"image","title":"Munmun De Choudhury and Srijan Kumar","body":null,"created":"1589560736","gmt_created":"2020-05-15 
16:38:56","changed":"1589560736","gmt_changed":"2020-05-15 16:38:56","alt":"Munmun De Choudhury and Srijan Kumar","file":{"fid":"241787","name":"NSF RAPID GRANT - Munmun and Srijan.png","image_path":"\/sites\/default\/files\/images\/NSF%20RAPID%20GRANT%20-%20Munmun%20and%20Srijan.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/NSF%20RAPID%20GRANT%20-%20Munmun%20and%20Srijan.png","mime":"image\/png","size":1487509,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/NSF%20RAPID%20GRANT%20-%20Munmun%20and%20Srijan.png?itok=8E8NSuCB"}}},"media_ids":["635396"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"184821","name":"cc-research; ic-hcc; ic-ai-ml; COVID-19"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"635751":{"#nid":"635751","#data":{"type":"news","title":"New Device Helps Parents Keep an Eye on\u00a0Children in Public Places","body":[{"value":"\u003Cp\u003EAs parents know, it is not uncommon for children to wander off in public places. 
This is particularly true for children on the autism spectrum or with other special needs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo address this challenge, a team of student entrepreneurs from Georgia Tech has created BuddyEye to help parents easily track and locate a lost child. The wristband uses a combination of Bluetooth and GPS technologies to alert parents when their child leaves a specified area.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuring the research phase of their project, team members found that 60 percent of parents with children on the autism spectrum reported that their child had gone missing for more than an hour at least once. This can be distressing for children and parents alike, especially in crowded public spaces like zoos, shopping malls, or amusement parks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We wanted to give these and other parents another set of eyes to keep track of their children and an efficient way to find them should they wander off,\u0026rdquo; said \u003Cstrong\u003ETillson Galloway\u003C\/strong\u003E, Invenovate co-founder and second-year computer science student.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe developers are including software that alerts a store or other location that a child using BuddyEye is lost. Once alerted, store employees can lock doors, make appropriate announcements, and take other steps to help find the child quickly.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;BuddyEye provides a safety net. It vibrates, immediately notifying the parent when their child moves beyond a designated range. 
It also lets the child know that they are out of range so they will hopefully head back toward their parent,\u0026rdquo; said Galloway.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile the rest of the team \u0026ndash; mechanical engineering students \u003Cstrong\u003EAlejandro Campos\u003C\/strong\u003E and \u003Cstrong\u003EMark Saad\u003C\/strong\u003E, and Andre Prieto, an industrial design student from Universidad de las Am\u0026eacute;ricas in Ecuador \u0026ndash; is primarily tasked with developing the hardware, Galloway is leading the charge on the application itself.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe says the BuddyEye platform is being developed using Facebook\u0026rsquo;s React Native, an open-source mobile application framework.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;What\u0026rsquo;s great about React Native is that it lets developers build apps for iOS, Android, and the web without having to separately code for each platform,\u0026rdquo; Galloway said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe students behind BuddyEye are currently participating in Georgia Tech\u0026rsquo;s annual \u003Ca href=\u0022https:\/\/create-x.gatech.edu\/launch\u0022\u003ECREATE-X Launch\u003C\/a\u003E program. Their startup company is known as Invenovate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlthough the CREATE-X Launch competition runs through the summer, BuddyEye has already earned a positive response.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team also recently entered its product in the \u003Ca href=\u0022https:\/\/www.scheller.gatech.edu\/centers-initiatives\/ile\/i2s\/index.html\u0022\u003EIdeas to Serve (I2S) competition sponsored by Georgia Tech\u0026rsquo;s Scheller College of Business\u003C\/a\u003E. The team won first place and $2,500 in the Solutions Discovery Track of the annual I2S competition. 
They were also winners of the Best Pitch Award, which earned them an additional $500.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A CREATE-X team is developing wearable technology to track children who wander off."}],"uid":"32045","created_gmt":"2020-05-28 14:39:26","changed_gmt":"2020-05-28 18:01:08","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-05-28T00:00:00-04:00","iso_date":"2020-05-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"635794":{"id":"635794","type":"image","title":"Team Invenovate - CREATE-X Launch 2020","body":null,"created":"1590688774","gmt_created":"2020-05-28 17:59:34","changed":"1590688796","gmt_changed":"2020-05-28 17:59:56","alt":"Three Georgia Tech students that comprise Team Invenovate","file":{"fid":"241906","name":"Invenovate-team-createX-launch-summer-2020.jpeg","image_path":"\/sites\/default\/files\/images\/Invenovate-team-createX-launch-summer-2020.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Invenovate-team-createX-launch-summer-2020.jpeg","mime":"image\/jpeg","size":207384,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Invenovate-team-createX-launch-summer-2020.jpeg?itok=HIWH4OsG"}}},"media_ids":["635794"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"137161","name":"CREATE-X"},{"id":"184953","name":"buddyeye"},{"id":"46361","name":"GT computing"},{"id":"3472","name":"entrepreneurship"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert 
Snedeker, Sr. Communications Manager\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=CREATE-X%20project%20BuddyEye\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"635593":{"#nid":"635593","#data":{"type":"news","title":"IC Students Support Innovation in India through \u0027MakerGhat\u0027","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EAzra Ismail\u003C\/strong\u003E was working with health workers in Delhi, India, when she had a realization. What she saw from locals in the community was that there was an intense desire for societal impact from many workers \u0026ndash; and the ideas to go with it \u0026ndash; but an absence of resources necessary to fully realize the innovation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The experience that these health workers had in these communities provided unique perspectives and ideas that produced the kinds of ideas that could be relevant,\u0026rdquo; said Ismail, now a Ph.D. student in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;But because they were the lowest rung on the health infrastructure and were low income or low social class, those ideas weren\u0026rsquo;t recognized and represented.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAround the same time, \u003Ca href=\u0022http:\/\/cc.gatech.edu\u0022\u003ECollege of Computing\u003C\/a\u003E alumnus \u003Cstrong\u003EAditya Vishwanath\u003C\/strong\u003E, now a doctoral student at Stanford University, had a similar realization. He was working with Asha Mumbai, a non-profit in a low-resourced slum in India\u0026rsquo;s biggest city, using virtual reality to see how students appropriated and made sense of it. 
Like Ismail, he recognized a group of students who had unique viewpoints and drive, but too few resources to realize them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKnowing how important it is to support innovation from those who understand the specific needs of a community, the two of them founded \u003Ca href=\u0022https:\/\/makerghat.org\/space\u0022\u003EMakerGhat,\u003C\/a\u003E a non-profit with the mission to take ideas from concept to creation and application where they are needed most: the communities they serve.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESituated in an impoverished neighborhood in Mumbai, MakerGhat is a community lab that local students, young and old, can join to receive education and resources to put their ideas into practice. Makers join through subscription or scholarships if they are unable to afford membership. In exchange, they receive access to support including an electronics room, a 3D printing and PC workstation, a science lab, a woodworking shop, and a design and workshop studio.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe space is intentionally unsophisticated. Enter the space, and you may find a mish-mash of supplies and painting on the walls, a far cry from the labs of the nearby Indian Institute of Technology-Bombay, one of the top technological universities in India.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We want people to be encouraged to try things and not afraid to break it,\u0026rdquo; Ismail said. \u0026ldquo;We don\u0026rsquo;t want something that people are afraid to use.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf a maker can\u0026rsquo;t find what they are looking for, they can turn to connections within the community to meet the need.
Heavier equipment, for example, might require a trip to the local smith for welding.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Students coming in have family members in these other industries, so it sets up an informal infrastructure where the students know where to go for a specific need,\u0026rdquo; Ismail said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe model has resulted in a number of tangible outputs. In Summer 2019, a handful of interns from Georgia Tech, Stanford, and Smith College were able to take advantage of the Denning Global Engagement Seed Fund to fund their travel to India. Interns were there not just to teach or run the lab, but to co-learn with locals. Collaborations between the technical expertise of the interns and the locally-significant knowledge of the makers resulted in a handful of innovations that addressed local needs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne collaboration resulted in a system that could compact plastic bottles to assist in a waste management challenge in Mumbai. Workers who collect waste locally and transport it to recycling plants to sell to companies or government institutions face challenges transporting plastic bottles, the most common waste item, which take up a lot of space.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother created a community mapping platform to help identify local resources. Makers and interns went into the community and conducted surveys to find needs specific to different geographies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A big part of this is engaging with the community to identify needs, current status quos, and how to approach the challenge,\u0026rdquo; Ismail said. \u0026ldquo;This happens in the schools too. What are the gaps that need to be addressed, and how can we help address them?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMakerGhat serves about 300 students weekly, ranging from young to old \u0026ndash; it is open to any age or background.
Many come from STEM fields, but others may be interested in math or art or fashion design.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s a melting pot,\u0026rdquo; Ismail said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe goal is to turn MakerGhat into an incubator. As the first class of students graduates from the program, they will move on to other sources of education or work. Ismail said that she and her collaborators \u0026ndash; who include Vishwanath, a team programmer, local leaders in finances and project resources, and a group of 10 or so volunteers \u0026ndash; want to help build companies from the ideas and innovations that formed at MakerGhat.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The mission is to actually transform these students and community members into entrepreneurs,\u0026rdquo; Ismail said. \u0026ldquo;We want to take these creations to the next level and help them scale beyond their own community.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat might mean launching new MakerGhat centers elsewhere. The goal is to make the original\u0026rsquo;s model open-source so that other communities can replicate it \u0026ndash; in India and beyond. While it may play out differently in each location depending on the community\u0026rsquo;s needs, the organizational structure would be the same.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There\u0026rsquo;s a misconception that great innovation only comes from these big tech companies or big universities,\u0026rdquo; Ismail said. \u0026ldquo;But we want to challenge that narrative. Many of the great ideas that can make significant impacts on society come from the people in these communities of need themselves.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther members of the Georgia Tech community have contributed to the project.
\u003Cstrong\u003ENeha Kumar\u003C\/strong\u003E, an assistant professor appointed jointly in the School of Interactive Computing and the Sam Nunn School of International Affairs, is an advisor. Students involved in a Makers-in-Residence program last summer were \u003Cstrong\u003ERitesh Bhatt\u003C\/strong\u003E, \u003Cstrong\u003ESolum Onwuchekwa\u003C\/strong\u003E, and \u003Cstrong\u003EJosiah Mangiameli\u003C\/strong\u003E. \u003Cstrong\u003EVishal Sharma\u003C\/strong\u003E, an incoming IC Ph.D. student, was also involved.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"MakerGhat is a local makerspace in India designed to cater specifically to low-resourced innovators."}],"uid":"33939","created_gmt":"2020-05-22 19:17:15","changed_gmt":"2020-05-22 19:17:15","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-05-22T00:00:00-04:00","iso_date":"2020-05-22T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"635592":{"id":"635592","type":"image","title":"MakerGhat","body":null,"created":"1590174252","gmt_created":"2020-05-22 19:04:12","changed":"1590174252","gmt_changed":"2020-05-22 19:04:12","alt":"Makers paint walls at MakerGhat in India.","file":{"fid":"241868","name":"MakerGhat.jpeg","image_path":"\/sites\/default\/files\/images\/MakerGhat.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/MakerGhat.jpeg","mime":"image\/jpeg","size":188915,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/MakerGhat.jpeg?itok=V5YPfbtN"}}},"media_ids":["635592"],"related_links":[{"url":"https:\/\/www.cc.gatech.edu\/content\/researchers-work-kids-mumbai-examine-classroom-potential-virtual-reality","title":"Researchers Work with Kids in Mumbai to Examine Classroom Potential of Virtual
Reality"},{"url":"https:\/\/www.cc.gatech.edu\/news\/605000\/vr-taking-students-where-once-only-ms-frizzle-and-magic-school-bus-could","title":"VR Taking Students Where Once Only Ms. Frizzle and the Magic School Bus Could"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"184890","name":"cc-research; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"634175":{"#nid":"634175","#data":{"type":"news","title":"Four Machine Learning Faculty Members Earn Promotions and Tenure","body":[{"value":"\u003Cp\u003EFour faculty members at the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech\u003C\/a\u003E have received promotions or been granted tenure.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJake Abernethy\u003C\/strong\u003E has been promoted to associate professor in the \u003Ca href=\u0022https:\/\/scs.gatech.edu\/\u0022\u003ESchool of Computer Science\u003C\/a\u003E and granted tenure. Abernethy\u0026rsquo;s research focus is machine learning, where he enjoys discovering connections between optimization, statistics, and economics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2011, he completed his Ph.D. 
at the University of California, Berkeley before becoming a Simons postdoctoral fellow for the following two years. After the water crisis in Flint, Mich., Abernethy worked on \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~jabernethy9\/flint\/\u0022\u003Edetecting lead contamination and infrastructure remediation\u003C\/a\u003E. Prior to studying and teaching machine learning, Abernethy performed comedy and juggling shows, opening for Sinbad and Dave Chappelle.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E has been promoted to associate professor in the\u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003E School of Interactive Computing\u003C\/a\u003E and granted tenure. De Choudhury is also affiliated with the \u003Ca href=\u0022http:\/\/gvu.gatech.edu\/\u0022\u003EGVU\u003C\/a\u003E Center and \u003Ca href=\u0022http:\/\/ipat.gatech.edu\/\u0022\u003EInstitute for People and Technology (IPaT)\u003C\/a\u003E and leads the \u003Ca href=\u0022http:\/\/socweb.cc.gatech.edu\/\u0022\u003ESocial Dynamics and Wellbeing Lab (SocWeb Lab.)\u003C\/a\u003E De Choudhury studies problems at the intersection of computer science and social media, building computational methods and artefacts to help understand human behaviors and psychological states and how they manifest online.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrior to joining Georgia Tech in 2014, De Choudhury was a postdoctoral researcher in the nexus group at Microsoft Research, Redmond. In 2011, she received her Ph.D. from Arizona State University, Tempe. After graduate school, De Choudhury spent time at Rutgers University and was a faculty associate with the Berkman Center for Internet and Society at Harvard University.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EYajun Mei\u003C\/strong\u003E has been promoted to professor in the \u003Ca href=\u0022https:\/\/www.isye.gatech.edu\/\u0022\u003EH. 
Milton Stewart School of Industrial and Systems Engineering\u003C\/a\u003E. Mei\u0026#39;s research interests include change-point problems and sequential analysis in mathematical statistics and sensor networks and information theory in engineering. Mei also examines longitudinal data analysis, random effects models, and clinical trials in biostatistics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMei received his Ph.D. in mathematics from the California Institute of Technology in 2003. He has also worked as a postdoc in biostatistics at the Fred Hutchinson Cancer Research Center. In 2010, Mei was awarded the National Science Foundation (NSF) CAREER Award and in 2008 was awarded Best Paper at FUSION. Mei was awarded the prestigious Abraham Wald Prize in Sequential Analysis in 2009.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAlex Endert \u003C\/strong\u003Ehas been promoted to associate professor and granted tenure in the School of Interactive Computing. Endert directs the \u003Ca href=\u0022https:\/\/gtvalab.github.io\/\u0022\u003EVisual Analytics Lab\u003C\/a\u003E where he and his students apply fundamental research to\u0026nbsp;domains including text analysis, intelligence analysis, cybersecurity, and decision-making, and explore novel user interaction techniques for visual analytics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEndert earned his Ph.D. from Virginia Tech in 2012, and in 2013 his work on Semantic Interaction was awarded the IEEE VGTC VPG Pioneers Group Doctoral Dissertation Award, and the Virginia Tech Computer Science Best Dissertation Award. In 2018, Endert received the NSF CAREER Award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEditors Note:\u0026nbsp;\u003Cstrong\u003EMolei Tao\u003C\/strong\u003E\u0026nbsp;has been promoted to associate professor with tenure in the School of Math. 
Tao is an applied and computational mathematician, designing algorithms for faster and more accurate computations and developing mathematical tools to analyze and design engineering systems or answer scientific questions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe earned his Ph.D. in control and dynamical systems with a minor in physics from the California Institute of Technology, where he also worked as a postdoctoral researcher. He is the 2011 recipient of the W.P. Carey Ph.D. Prize in Applied Mathematics and the recipient of a 2019 NSF CAREER Award.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Four faculty members at the Machine Learning Center at Georgia Tech have received promotions or been granted tenure."}],"uid":"34773","created_gmt":"2020-04-08 17:37:17","changed_gmt":"2020-05-11 18:57:34","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-04-08T00:00:00-04:00","iso_date":"2020-04-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"634173":{"id":"634173","type":"image","title":"Four ML@GT faculty members earn promotions and tenure","body":null,"created":"1586367276","gmt_created":"2020-04-08 17:34:36","changed":"1586367276","gmt_changed":"2020-04-08 17:34:36","alt":"Congratulations Alex, Jake, Munmun, and Yajun","file":{"fid":"241321","name":"Spring 2020 ML Promotions.png","image_path":"\/sites\/default\/files\/images\/Spring%202020%20ML%20Promotions.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Spring%202020%20ML%20Promotions.png","mime":"image\/png","size":440694,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Spring%202020%20ML%20Promotions.png?itok=frxxuWzs"}}},"media_ids":["634173"],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50875","name":"School of Computer
Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"635208":{"#nid":"635208","#data":{"type":"news","title":"Social Media and Wellbeing: Does Bias in Self-Reported Data Impact Research?","body":[{"value":"\u003Cp\u003EAlong with the development of each new technological platform comes a series of questions designed to understand its ultimate impact on users\u0026rsquo; wellbeing or performance. It\u0026rsquo;s like clockwork.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EDoes watching too much television rot your child\u0026rsquo;s brain? How much is too much when it comes to video games? Is our time spent on social media impacting our mental health?\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese are all important questions, but how they are asked matters to the ultimate conclusions we can draw. It is well-established that the most commonly used methods in this area of research \u0026ndash; user self-reports and survey questions \u0026ndash; are prone to error. Now, new research from collaborators at Georgia Tech, Facebook, and the University of Michigan has shed light on the nature of that error \u0026ndash; that is to say, whether users over- or underestimate their data, who and which questions are more prone to error, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EError in the data, said \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Ph.D.
student\u0026nbsp;\u003Cstrong\u003ESindhu Ernala\u003C\/strong\u003E, can impact the inferences drawn from the data itself.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We know survey questions have several well-documented biases,\u0026rdquo; Ernala said. \u0026ldquo;People may not remember correctly. They can\u0026rsquo;t keep up with their time. They remember recent things more accurately than those further in the past. All of this matters because error in measurement might impact the downstream inferences we make. Accurate assessments of social media use is critical because of the everyday impact it has on people\u0026rsquo;s lives.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIndeed, Ernala and her collaborators found that these biases held up in many surveys. In a paper accepted to the \u003Ca href=\u0022http:\/\/chi.gatech.edu\u0022\u003E2020 ACM Conference on Human Factors in Computing Systems\u003C\/a\u003E (CHI), they picked 10 of the most common survey questions in prior literature that investigate time spent on Facebook. The questions were asked in a variety of ways: open ended or multiple choice, the frequency of visits or the total time spent. They asked these 10 questions in a survey to 50,000 random users in 15 countries around the world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith self-reported data in hand, they compared it to the actual server logs at Facebook to see how it stacked up. Interestingly, people most often overestimated the time they spent on the platform and underestimated the number of times they visited.
Specifically, in the 18-24 demographic, a common age range for research done at universities, there was even more error in self-reports.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is important, because a lot of our research is done with these age samples,\u0026rdquo; Ernala said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith this information in mind, the researchers made a handful of recommendations in order to improve the data and, thus, the research around the data itself:\u003C\/p\u003E\r\n\r\n\u003Col\u003E\r\n\t\u003Cli\u003EAs a researcher, if you are investigating time spent, consider using time tracking applications as an alternative to self-report time spent measures. These applications include things like Apple\u0026rsquo;s screen time feature or Facebook\u0026rsquo;s \u0026ldquo;Your Time on Facebook.\u0026rdquo;\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003EIf researchers want to use surveys, which often makes sense, consider using the phrasing with the lowest error or multiple-choice questions.\u003C\/li\u003E\r\n\u003C\/ol\u003E\r\n\r\n\u003Cp\u003EThe researchers caution against using time spent self-reports directly, recommending instead that reports be interpreted as noisy estimates of where someone falls on a distribution. More important when determining wellbeing outcomes is\u0026nbsp;\u003Cem\u003Ehow\u003C\/em\u003E\u0026nbsp;users actually spend their time on the platform.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Social platforms change and user habits change over time,\u0026rdquo; Ernala said. \u0026ldquo;The questions now might not be the best questions five or 10 years from now.
This is fluid, and we need to continue to look at this to make sure our past and future research is well-informed.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe and her collaborators hope to contribute positively to this ongoing process by providing some validated measures that can be used across studies, while understanding that these methods may change over time as user habits transform.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Error in the data, said School of Interactive Computing Ph.D. student\u00a0Sindhu Ernala, can impact the inferences drawn from the data itself."}],"uid":"33939","created_gmt":"2020-05-08 08:36:27","changed_gmt":"2020-05-08 08:36:27","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-05-08T00:00:00-04:00","iso_date":"2020-05-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624519":{"id":"624519","type":"image","title":"Social Media Logos","body":null,"created":"1565805908","gmt_created":"2019-08-14 18:05:08","changed":"1565805908","gmt_changed":"2019-08-14 18:05:08","alt":"A keyboard featuring different social media logos","file":{"fid":"237806","name":"Social Media logos.jpg","image_path":"\/sites\/default\/files\/images\/Social%20Media%20logos.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Social%20Media%20logos.jpg","mime":"image\/jpeg","size":215846,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Social%20Media%20logos.jpg?itok=G7qWkSGs"}}},"media_ids":["624519"],"related_links":[{"url":"http:\/\/chi.gatech.edu","title":"CHI 2020 at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[{"id":"182508","name":"cc-research; ic-hcc; ic-social-computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"634899":{"#nid":"634899","#data":{"type":"news","title":"Machine Learning Method Amplifies \u2018Voice of the People\u2019 to Model Workplace Culture","body":[{"value":"\u003Cp\u003EHuman resources professionals and job seekers alike may soon be able to better understand a company\u0026rsquo;s unique organizational culture thanks to a new machine-learning approach.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDeveloped by Georgia Tech researchers, the approach is the first of its kind to computationally model organizational culture using publicly available anonymized data sources \u0026ndash; including Glassdoor user reviews.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese models are illustrated using heat maps that reveal positive and negative sentiment for a company and its business units across 41 dimensions of organizational culture.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe heat maps give a \u0026ldquo;cloud-contributed\u0026rdquo; sense of what a particular workplace culture is like and can provide actionable insights to HR teams, unit managers, and job seekers, according to the researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Right now, to get a measure of organizational culture, companies rely on internal surveys, which are difficult to scale. 
It\u0026rsquo;s also unlikely that they are getting true responses given factors like organizational bias or employee concerns about anonymity,\u0026rdquo; said \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/vedant-das-swain\/\u0022\u003E\u003Cstrong\u003EVedant Das Swain\u003C\/strong\u003E\u003C\/a\u003E, a second-year Ph.D. student studying \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/academics\/degree-programs\/phd\/human-centered-computing\u0022\u003Ehuman-computer interaction at Georgia Tech\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith a potential dataset of more than 35 million Glassdoor user reviews about 770,000+ companies, \u0026ldquo;straight off the bat, for a company\u0026rsquo;s HR unit, our approach will allow them to go beyond traditional surveys. They gain scalability in terms of the number of people they can represent in a model of their workplace culture,\u0026rdquo; Das Swain said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/587684\/likelihood-dieting-success-lies-within-your-tweets\u0022\u003E[RELATED:\u0026nbsp;Likelihood of Dieting Success Lies Within Your Tweets]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Existing measures are mostly done by companies themselves, which means they might not be transparent if it\u0026rsquo;s not in their interest to do so. This approach utilizes the voice of the people,\u0026rdquo; said \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/kous2v\/\u0022\u003E\u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E\u003C\/a\u003E, a fourth-year Ph.D. student studying social computing and computational social science.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe computational approach to modeling organizational culture has other advantages. Unlike traditional surveys, the method lets HR managers run reports virtually as often as needed, even retrospectively. 
This provides them with the ability to assess and understand how their company\u0026rsquo;s organizational culture changes over time. And, because the tool assesses workplace culture across business functions, team managers throughout the business can see where things are working well, and where they are not.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlong with helping business unit managers better understand what\u0026rsquo;s happening within their teams, this level of detail can also be a huge benefit for job seekers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A prospective employee can look at all of the cultural dimensions in the heat map and see that the people in a particular unit or division of a company speak favorably about competitiveness and then say, \u0026lsquo;you know what, I like a competitive atmosphere, maybe I\u0026rsquo;ll self-select myself into this opportunity,\u0026rsquo;\u0026rdquo; said Das Swain.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo create the 41 dimensions of organizational culture that serve as the framework for their research, Das Swain and Saha started with publicly available job descriptors from \u003Ca href=\u0022https:\/\/www.onetonline.org\/\u0022 target=\u0022_blank\u0022\u003EOccupational Information Network (O*NET)\u003C\/a\u003E, an occupational information database sponsored by the U.S. Department of Labor. The job descriptor language is rooted in occupational psychology. It also reflects current workplace norms and expectations for nearly 200 professional and technical occupations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith this database, the team developed 41 computer-readable language models.
Models like these are known as word vectors and can reveal often-unseen connections between seemingly unrelated words, compound words, and phrases, as well as word parts like prefixes and suffixes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor the team, these word vectors are the backbone of their approach and serve as a multi-faceted representation of organizational culture.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The 41 dimensions reflect what is important in a job role and in workplace behavior. We\u0026rsquo;re able to operationalize them, or put them to use, by converting them into machine-understandable language through the word vectors,\u0026rdquo; Saha said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo test their framework of organizational culture, the team analyzed more than 615,000 Glassdoor user reviews from 92 companies on the Fortune 500 list. In all, the dataset contained more than 10.7 million words from user reviews classified as \u0026lsquo;pro\u0026rsquo; and more than 17.1 million words from user reviews classified as \u0026lsquo;con\u0026rsquo;.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Once you start making things machine-interpretable, it can become so nuanced that it\u0026rsquo;s mumbo jumbo for humans and impossible to interpret.
We wanted to see if there was a way to operationalize this information to measure a company\u0026rsquo;s culture by simply looking at what people say in self-initiated pro or con reviews from Glassdoor,\u0026rdquo; said Das Swain, who is advised by Georgia Tech Regents\u0026#39; Professor \u003Ca href=\u0022http:\/\/ubicomp.cc.gatech.edu\/gregory-d-abowd\/\u0022\u003E\u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E\u003C\/a\u003E and Assistant Professor \u003Ca href=\u0022http:\/\/www.munmund.net\/\u0022\u003E\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;With our framework, we can make sense of that volume of data to accurately model a company\u0026rsquo;s workplace culture so that people can experience the norms, the beliefs of an organization without ever being there.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;People make the place. We\u0026rsquo;re leveraging what lots of people are saying about a place to get a cloud-contributed sense of what the workplace culture is within that particular company,\u0026rdquo; said Saha, who is advised by De Choudhury.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDas Swain and Saha are co-authors of \u003Cem\u003EModeling Organizational Culture with Workplace Experiences Shared on Glassdoor\u003C\/em\u003E, which \u003Ca href=\u0022https:\/\/chi2020.acm.org\/chi-2020-free-proceedings\/\u0022\u003Eappears in the CHI 2020 Proceedings\u003C\/a\u003E.\u0026nbsp; The work is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No.
2017-17042800007.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Students analyzed more than half a million Glassdoor user reviews to model organizational culture."}],"uid":"32045","created_gmt":"2020-04-30 16:16:24","changed_gmt":"2020-05-04 14:19:33","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-05-04T00:00:00-04:00","iso_date":"2020-05-04T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"634900":{"id":"634900","type":"image","title":"Workplace Culture Model","body":null,"created":"1588263499","gmt_created":"2020-04-30 16:18:19","changed":"1588263499","gmt_changed":"2020-04-30 16:18:19","alt":"A computational heat map showing organizational culture models for 3 companies ","file":{"fid":"241616","name":"CompanyCulture2 copy.png","image_path":"\/sites\/default\/files\/images\/CompanyCulture2%20copy.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/CompanyCulture2%20copy.png","mime":"image\/png","size":89915,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/CompanyCulture2%20copy.png?itok=B9_LhPVU"}}},"media_ids":["634900"],"related_links":[{"url":"https:\/\/youtu.be\/LoIccpmGCt0","title":"Modeling Organizational Culture Using Glassdoor"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"66442","name":"MS HCI"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"9167","name":"machine learning"},{"id":"184702","name":"glassdoor"},{"id":"184703","name":"word vector"},{"id":"184704","name":"word embedding"},{"id":"46361","name":"GT computing"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Sr. Communications Mgr.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=Modeling%20workplace%20culture\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"634833":{"#nid":"634833","#data":{"type":"news","title":"Institute Research Award Winners Named for 2020","body":[{"value":"\u003Cp\u003EEvery year, all of Georgia Tech\u0026rsquo;s outstanding faculty and staff are recognized at the Faculty and Staff Honors Luncheon. As a result of modified campus operations during the COVID-19 pandemic, the luncheon this spring has been canceled, but the work of the 2020 honorees continues, and the Office of the Executive Vice President for Research (EVPR) has proudly named the winners of six research awards.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I always look forward to the honors luncheon because it reminds us that Georgia Tech\u0026rsquo;s people are at the heart of our success,\u0026rdquo; said Chaouki Abdallah, Georgia Tech\u0026rsquo;s EVPR. \u0026ldquo;Although this year\u0026rsquo;s circumstances prevent us from gathering in person, I am pleased to be able to recognize these individuals\u0026rsquo; accomplishments virtually. 
Their collective talent, energy, and enthusiasm continue to make Georgia Tech an outstanding institution.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s 2020 Institute Research Award winners include:\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOutstanding Achievement in Research Enterprise Enhancement Award\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis year\u0026rsquo;s award for Outstanding Achievement in Research Enterprise Enhancement goes to \u003Cstrong\u003EChristine Conwell\u003C\/strong\u003E, a senior research scientist and managing director of the Center for Chemical Evolution in the School of Chemistry and Biochemistry. This award is given to a Georgia Tech staff member who consistently betters Georgia Tech\u0026rsquo;s research\u0026nbsp;program but is not a traditional researcher.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EConwell has led the daily operations of the Center for Chemical Evolution (CCE), a large-scale research center funded by the National Science Foundation and NASA, for more than nine years. As part of the CCE leadership team, she focuses on the Center\u0026rsquo;s mission of pursuing impactful and innovative science within the interdisciplinary research structure. Conwell also acts as the liaison between the CCE and both the NSF and NASA program officers. Her leadership and accomplishments have been recognized by the NSF and by NASA in several site visit reports.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EConwell was nominated by M.G. Finn, James A. Carlos Family Chair for Pediatric Technology and professor and chair in the School of Chemistry and Biochemistry.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Christine models the best and most effective aspects of our community culture,\u0026rdquo; Finn writes. 
\u0026ldquo;She excels at nurturing innovation through collaborative and interdisciplinary pursuits, is committed to excellence, and embraces the opportunity to challenge and enrich the next generation entering the STEM workforce.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOutstanding Achievement in Research Innovation Award\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe award for Outstanding Achievement in Research Innovation goes to \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E, chair of the School of Interactive Computing, Linda J. and Mark C. Smith Professor\u0026nbsp;and director of the Human-Automation Systems Lab in the School of Electrical and Computer Engineering. The award honors faculty whose\u0026nbsp;research results have had a demonstrable and sustained societal impact.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHoward researches and designs robotics and interaction technologies. Working closely with clinicians, therapists, and special education teachers, she has created mobile technology and robotics that can be used in a clinical setting or at a child\u0026rsquo;s home or school to support rehabilitation, learning, and development of autonomy. Her most recent work is on an NSF-funded initiative to research and design a new robot programming platform to engage deaf and hard-of-hearing children in computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHoward was nominated by Magnus Egerstedt, Steve W. Chaddick School Chair and professor in the School of Electrical and Computer Engineering.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Dr. Howard is at the top of her discipline and is considered a research leader,\u0026rdquo; Egerstedt says. 
\u0026ldquo;This is evident from the impact and quality of her work, backed by her quantity of peer-reviewed publications, and her technology transfer efforts and its corresponding impact on children with diverse learning needs.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOutstanding Doctoral Thesis Advisor Award\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMuhannad Bakir\u003C\/strong\u003E, the Daniel Curtis Fielder Professor of Discrete Aspects in the School of Electrical and Computer Engineering, is this year\u0026rsquo;s recipient of the Outstanding Doctoral Thesis Advisor award. This award recognizes the achievements of a faculty member\u0026#39;s doctoral students who completed all degree requirements.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the past five years, Bakir has graduated 14 Ph.D. students. He and his students have won 32 awards for their research, including multiple best paper awards from the Institute of Electrical and Electronics Engineers. Many of Bakir\u0026rsquo;s students have received prestigious fellowships from Intel, IBM, Semiconductor Research Corporation, and federal agencies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Graduate school is not just about doing great research and publishing papers,\u0026rdquo; Bakir writes of his teaching philosophy. \u0026ldquo;We help students discover their technical passions and career paths that leverage [those] passions, so they remain happy pursuing what they love post-graduation.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBakir was nominated by Magnus Egerstedt, Steve W. Chaddick School Chair and professor in the School of Electrical and Computer Engineering. Egerstedt says that Bakir \u0026ldquo;strives to build a group culture that is collaborative, focused, transparent, professional, respectful, and diverse.\u0026rdquo; Egerstedt added, \u0026ldquo;Seeing students flourish and grow gives Dr. 
Bakir immense joy and gratitude.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOutstanding Faculty Research Author Award\u0026nbsp;\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis year\u0026rsquo;s Outstanding Faculty Research Author Award goes to \u003Cstrong\u003ECheng Zhu\u003C\/strong\u003E, Regents Professor in the Wallace H. Coulter Department of Biomedical Engineering and J. Erskine Love Jr. Chair in the College of Engineering. The award recognizes faculty who most contributed\u0026nbsp;to highly impactful publications describing the results of research conducted at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EZhu is best known for his discoveries in the field of mechanobiology, an emerging field at the intersection of biology, engineering, and physics. He employs biomechanical approaches to study how cells sense, respond, adapt, function, and develop in their changing mechanical environment. His work has significantly influenced the fields of immunology, hemostasis (stopping blood flow from an injured blood vessel), and thrombosis (blood clots in a vessel).\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESusan S. Margulies, Wallace H. Coulter Department Chair and professor in the Wallace H. Coulter Department of Biomedical Engineering, nominated Zhu for the award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Professor Cheng Zhu is a world leader in molecular mechanobiology,\u0026rdquo; Margulies says. \u0026ldquo;His discoveries help us better understand and treat infections, cancer, and cardiovascular diseases.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOutstanding Achievement in Research Program Development Award\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Outstanding Achievement in Research Program Development Award is given annually to the research team that creates a new thought leadership platform for significantly expanding Georgia Tech\u0026rsquo;s research portfolio. 
This year\u0026rsquo;s recipients are \u003Cstrong\u003EKrishnendu Roy and Johnna S. Temenoff\u003C\/strong\u003E, director and deputy director, respectively, of the NSF Engineering Research Center for Cell Manufacturing Technologies (CMaT). Roy also holds the Robert A. Milton Chair and is the director of the Marcus Center for Cell-Therapy Characterization and Manufacturing and the Center for ImmunoEngineering. Temenoff holds the Carol Ann and David D. Flanagan Professorship II and is also co-director of the Regenerative Engineering and Medical Center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECMaT is the world\u0026rsquo;s first and only center focused on developing new tools, technologies, and processes for scalable, quality-driven biomanufacturing of cell therapies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENew cell therapies, especially stem cell and immune cell therapies, have the potential to revolutionize treatments of unsolved and chronic medical conditions. In the past, manufacturing failures, financial challenges, and lower-than-expected sales have hampered the transition of new cell therapies from clinical trials to the open market. The biomanufacturing community needs new production tools and technologies; robust supply-chain, storage, and distribution logistics; and a well-trained cell-manufacturing workforce. These are the challenges that CMaT, under the leadership of Roy and Temenoff, is designed to meet.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERoy and Temenoff were nominated by Susan S. Margulies, Wallace H. Coulter Department Chair and professor in the Wallace H. Coulter Department of Biomedical Engineering.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Professors Roy and Temenoff have helped bring together a highly diverse local and national team . . . to solve the critical challenges facing cell manufacturing,\u0026rdquo; Margulies writes. 
\u0026ldquo;Such a multidisciplinary, comprehensive approach could be emulated to solve other grand challenges.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOutstanding Achievement in Early Career Research\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJames Dahlman\u003C\/strong\u003E, assistant professor in the Wallace H. Coulter Department of Biomedical Engineering, is being recognized with the award for Outstanding Achievement in Early Career Research. The award is given annually to a faculty member, within eight years of his or her initial appointment, who has made significant discoveries or advancements in his or her research, visibly influencing society or one or more scholarly communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDahlman\u0026rsquo;s work is in the area of testing nanoparticles used to deliver RNA-based gene therapies to diseased cells. Previously, each nanoparticle had to be tested individually in living animals to see whether it could deliver, for example, liver therapy to liver cells.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDahlman developed a way to encode each candidate nanoparticle with an identifying DNA sequence called a barcode. With his barcode, 300 nanoparticles could be tested at once in a living animal and successful candidates later identified through gene sequencing. This discovery considerably speeds up research on potentially lifesaving RNA-based drugs. Using traditional methods, it took Dahlman five years to find one non-liver nanoparticle; within the past 18 months, his lab has found approximately eight.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESusan S. Margulies, Wallace H. Coulter Department Chair and professor in the Wallace H. Coulter Department of Biomedical Engineering, nominated Dahlman for the award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;At the age of 33, James transformed the field of RNA therapies,\u0026rdquo; Margulies says. 
\u0026ldquo;He is internationally known within the nanomedicine community as someone whose work is repeatable, robust, and transformational.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAll awardees, including those listed above, will be featured on the \u003Ca href=\u0022https:\/\/specialevents.gatech.edu\/events\/faculty-staff-honors\u0022 target=\u0022_blank\u0022\u003EFaculty and Staff Honors website\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe Office of the Executive Vice President for Research announces the winners of the 2020 Institute Research Awards.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"The Office of the Executive Vice President for Research announces the winners of the 2020 Institute Research Awards."}],"uid":"27165","created_gmt":"2020-04-29 12:17:40","changed_gmt":"2020-04-30 19:04:06","author":"Susie Ivy","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-04-29T00:00:00-04:00","iso_date":"2020-04-29T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"634909":{"id":"634909","type":"image","title":"Georgia Tech Campus Aerial ","body":null,"created":"1588272456","gmt_created":"2020-04-30 18:47:36","changed":"1588272456","gmt_changed":"2020-04-30 18:47:36","alt":"Georgia Tech Campus Aerial ","file":{"fid":"241618","name":"GT Campus Aerial.jpg","image_path":"\/sites\/default\/files\/images\/GT%20Campus%20Aerial.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/GT%20Campus%20Aerial.jpg","mime":"image\/jpeg","size":186897,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/GT%20Campus%20Aerial.jpg?itok=lCqjBvjn"}}},"media_ids":["634909"],"related_links":[{"url":"https:\/\/specialevents.gatech.edu\/events\/faculty-staff-honors","title":"Read more about faculty and staff 
honors"}],"groups":[{"id":"60109","name":"Executive Vice President for Research (EVPR)"},{"id":"85951","name":"School of Chemistry and Biochemistry"},{"id":"1278","name":"College of Sciences"},{"id":"1255","name":"School of Electrical and Computer Engineering"},{"id":"1237","name":"College of Engineering"},{"id":"1254","name":"Wallace H. Coulter Dept. of Biomedical Engineering"},{"id":"50876","name":"School of Interactive Computing"},{"id":"47223","name":"College of Computing"}],"categories":[{"id":"129","name":"Institute and Campus"}],"keywords":[{"id":"276","name":"Awards"}],"core_research_areas":[],"news_room_topics":[{"id":"71871","name":"Campus and Community"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:Evproffice@gatech.edu\u0022\u003EThe Office of the Executive Vice President for Research\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["Evproffice@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"634469":{"#nid":"634469","#data":{"type":"news","title":"IC Ph.D. Students Named 2020 Members of NSF Graduate Research Fellowship Program","body":[{"value":"\u003Cp\u003EA pair of \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E students was selected as 2020 members of the \u003Ca href=\u0022https:\/\/www.nsfgrfp.org\/\u0022\u003ENational Science Foundation Graduate Research Fellowship Program\u003C\/a\u003E (NSF GRFP).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFirst-year Ph.D. 
students \u003Cstrong\u003EDaniel Bolya\u003C\/strong\u003E (advised by \u003Cstrong\u003EJudy Hoffman\u003C\/strong\u003E) and \u003Cstrong\u003EJoanne Truong\u003C\/strong\u003E (advised by \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E and \u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E) were recognized by the program, which supports graduate students pursuing research-based Master\u0026rsquo;s and doctoral degrees at United States institutions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBolya\u0026rsquo;s work is in machine learning and computer vision. Recent work at Georgia Tech has focused on error profiling in instance segmentation and object detection models. His method, building upon previous work at MIT, is unique in that it captures all possible sources of error in a model, while properly weighing the importance of each. He plans to continue pursuing faster methods of instance segmentation that he can make accessible. Current methods are not practical for many applications due to limits in speed, accuracy, and data efficiency. His research addresses this challenge.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is not just about computer vision,\u0026rdquo; he said in his research statement. \u0026ldquo;Improving instance segmentation would impact the tech we use every day.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELike his work at MIT, called YOLACT, he plans to fully release the project open source once it is ready.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETruong\u0026rsquo;s long-term research goal is to develop robots that can see, talk, reason, and act in complex human environments. 
Specifically, she will focus on a method called \u0026ldquo;sim2robot transfer,\u0026rdquo; which develops efficient domain adaptation techniques to enable pre-training of AI agents in simulators while ensuring that the learned skills generalize to a real robotic platform.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The overall goals of my research plan are to, one, break down the possible errors in simulation-to-reality transfer that result in a reality gap, and, two, close the loop between simulation and reality by using data collected on a real robot to finetune and optimize parameters in simulation,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe worked on the first goal last fall, achieving optimization in simulator settings for sim2real predictivity. Currently, she is working on the second goal, developing domain adaptation techniques to enable low-shot adaptation between simulation and reality.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The NSF GRFP provides financial support for three years, comprised of a $34,000 stipend per 12-month fellowship year, as well as a direct payment of $12,000 to Georgia Tech toward the cost of education for each of the three years of fellowship funding."}],"uid":"33939","created_gmt":"2020-04-16 20:02:46","changed_gmt":"2020-04-16 20:02:46","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-04-15T00:00:00-04:00","iso_date":"2020-04-15T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"634467":{"id":"634467","type":"image","title":"Daniel Bolya and Joanne Truong","body":null,"created":"1587067147","gmt_created":"2020-04-16 19:59:07","changed":"1587067147","gmt_changed":"2020-04-16 19:59:07","alt":"Daniel Bolya and Joanne Truong","file":{"fid":"241446","name":"Joanne and 
Daniel.png","image_path":"\/sites\/default\/files\/images\/Joanne%20and%20Daniel.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Joanne%20and%20Daniel.png","mime":"image\/png","size":1176810,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Joanne%20and%20Daniel.png?itok=SNMuNKXd"}}},"media_ids":["634467"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"634055":{"#nid":"634055","#data":{"type":"news","title":"Looking for Activities at Home? Try These Interactive Tools from IC Researchers","body":[{"value":"\u003Cp\u003EThe world is on lockdown right now, and we\u0026rsquo;re all searching for new ways to occupy our time inside. With only so many times you can re-watch The Office (oh, who are we kidding \u0026ndash; maybe just one more time through\u0026hellip;), we thought it would be fun to share some of the interactive tools from our own researchers\u0026rsquo; workshops.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBelow, you\u0026rsquo;ll find just a couple of the tools you can interact with online, giving you opportunities from learning how to code to creating art. 
But this is only just a start \u0026ndash; we\u0026rsquo;d love to hear from you.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf you\u0026rsquo;re a Georgia Tech student or faculty member, submit your interactive tools to communications officer David Mitchell at \u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E. We\u0026rsquo;ll add to the list, share with our audience, and help everyone find some enjoyment during a difficult time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECreate Your Own Generative Art Pieces \u0026ndash; \u003C\/strong\u003Esubmitted by Devi Parikh\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELooking for a new piece of art for your wall? With this tool, you can flex your creative muscles. Choose a style, adjust the values, colors, and properties, and generate a piece that would fit in nicely in your home.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work demonstrates a broader area of research into machine learning and creativity. The first piece of AI-generated art to go to auction sold for $432,500 in 2018.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELINK: \u003C\/strong\u003E\u003Ca href=\u0022https:\/\/cc.gatech.edu\/~parikh\/art.html\u0022\u003Ehttps:\/\/cc.gatech.edu\/~parikh\/art.html\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EInteract with Visual Chatbot \u003C\/strong\u003E\u0026ndash; submitted by Devi Parikh\u003C\/p\u003E\r\n\r\n\u003Cp\u003EParikh\u0026rsquo;s lab is doing research in an area called visual question answering. Developed in 2017, this demo allows you to upload an image and have a conversation with a chatbot about it. Pick out an image you\u0026rsquo;ve taken or just grab one from the web and ask questions to see just how quickly and accurately this AI can perform the task. 
This research is key to developing agents that can reason about specific tasks in the real world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELINK: \u003C\/strong\u003E\u003Ca href=\u0022http:\/\/demo-visualdialog.cloudcv.org\/\u0022\u003Ehttp:\/\/demo-visualdialog.cloudcv.org\/\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELearn to Code Using EarSketch and TunePad\u003C\/strong\u003E \u0026ndash; submitted by Brian Magerko\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHave you been dying to learn how to code? There\u0026rsquo;s no time like the present. Without the benefit of a classroom setting to learn all the ins and outs, you might find a usable tool like EarSketch beneficial. EarSketch uses music to guide the learner. With sounds from the EarSketch library or your own uploads, along with Python or JavaScript to code, you can produce quality music online.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELike EarSketch, TunePad \u0026ndash; developed in collaboration with Northwestern University \u0026ndash; is a tool for creating music using the Python programming language. No knowledge in music or coding is required to get started. Get those musical juices flowing, and start creating.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELINK: \u003C\/strong\u003E\u003Ca href=\u0022http:\/\/earsketch.gatech.edu\/\u0022\u003Eearsketch.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELearn About Grasping Tasks Using this Online Tool \u003C\/strong\u003E\u0026ndash; submitted by Samarth Brahmbhatt\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis tool allows people to interactively explore how we grasp household objects. So, why is this important? Grasping is a key capability in the development of household robotics. In order to train robots how to grab and use items in the house, we need to identify the most efficient approach. 
Explore this tool, which includes items from an apple to a doorknob to a video game controller.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELINK: \u003C\/strong\u003E\u003Ca href=\u0022https:\/\/contactdb.cc.gatech.edu\/contactdb_explorer.html\u0022\u003Ehttps:\/\/contactdb.cc.gatech.edu\/contactdb_explorer.html\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"These are just a couple of the tools you can interact with online, giving you opportunities from learning how to code to creating art."}],"uid":"33939","created_gmt":"2020-04-04 00:00:47","changed_gmt":"2020-04-04 00:00:47","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-04-03T00:00:00-04:00","iso_date":"2020-04-03T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"444971":{"id":"444971","type":"image","title":"EarSketch","body":null,"created":"1449256205","gmt_created":"2015-12-04 19:10:05","changed":"1475895184","gmt_changed":"2016-10-08 02:53:04","alt":"EarSketch","file":{"fid":"203156","name":"static1.squarespace.png","image_path":"\/sites\/default\/files\/images\/static1.squarespace_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/static1.squarespace_0.png","mime":"image\/png","size":411122,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/static1.squarespace_0.png?itok=lWrzDShH"}}},"media_ids":["444971"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182940","name":"cc-research; ic-ai-ml; ic-robotics; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"633985":{"#nid":"633985","#data":{"type":"news","title":"Pitch Perfect: GT Computing Undergrads Provide Automated Training Upgrade for Softball Team","body":[{"value":"\u003Cp\u003EThere\u0026rsquo;s a classic story that former Atlanta Braves pitching coach Leo Mazzone used to share about Hall-of-Famer Greg Maddux, one of the smartest hurlers of all time. Although the exact details have changed in retelling over time, it goes something like this:\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMaddux, a meticulous documenter of pitch sequences and batter results throughout his career, once explained to Mazzone in between innings that the leadoff batter in the following frame would pop out to third base on the fourth pitch of the at-bat. He\u0026rsquo;d start him with a fastball, change speeds for strike two, waste a pitch outside, and then induce the popup on a one-ball, two-strike count. Sure enough, a few minutes later, Maddux did exactly as he\u0026rsquo;d said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are a couple of lessons here: One, Maddux was a wizard. 
Many pitchers over time have tried to replicate his impeccable approach to the game, but few have ever succeeded at that level; two, pitch sequence matters \u0026ndash; perhaps more than how overpowering your fastball is or how sharp the break is on your curve.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECapitalizing on this intuition, a group of undergraduate students at \u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E are working with the softball team to provide an automated upgrade to players\u0026rsquo; training. Using the wealth of statistics kept by the team \u0026ndash; pitch-by-pitch data for balls, strikes, types of pitches thrown, and results \u0026ndash; they have trained an algorithm that can select the best pitch to throw in any given situation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe tool is used by the coaches and pitchers for game planning purposes, generating daily reports after every game and practice to help inform coaches of trends in sequences and results.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In baseball and softball nowadays, data analytics has become such an incredibly important part of the game,\u0026rdquo; said \u003Cstrong\u003EJack Bennett\u003C\/strong\u003E, a third-year \u003Ca href=\u0022http:\/\/isye.gatech.edu\u0022\u003EIndustrial Engineering\u003C\/a\u003E student. \u0026ldquo;Anything that can get them data to go into games more prepared. Technology is at the forefront of this.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey began using the approach during the 2019 season. 
Bennett and partners \u003Cstrong\u003EZach Panzarino\u003C\/strong\u003E (third-year \u003Ca href=\u0022http:\/\/cc.gatech.edu\u0022\u003EComputer Science\u003C\/a\u003E) and Ron Kushkuley (third-year \u003Ca href=\u0022http:\/\/coe.gatech.edu\u0022\u003EComputer Engineering\u003C\/a\u003E) had demonstrated a similar capability at last year\u0026rsquo;s Sports Innovation Hackathon using data for Atlanta Braves pitcher Mike Foltynewicz, finishing in third place. \u003Cstrong\u003EDoug Allvine\u003C\/strong\u003E, assistant athletics director for innovation at Georgia Tech, put the team in touch with softball coach \u003Cstrong\u003EAileen Morales\u003C\/strong\u003E. Morales was interested, and the students were able to begin testing the approach.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt works like this: The softball team keeps track of its own data \u0026ndash; not just player statistics, but pitch selections and results for every pitcher in every game throughout the season. That\u0026rsquo;s a lot of data and can offer a lot of information. What happened when Pitcher X threw a 3-2 changeup to a lefthanded batter?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut it goes a little deeper than that. Panzarino, Bennett, and Kushkuley found that the pitch sequence is what matters most. That follows the standard strategic thinking \u0026ndash; a slider away can be more effective if set up by an inside fastball on the previous pitch, for example. What the algorithm does, however, is consider the order each pitch is thrown in the at-bat and provide a score for which pitch will be most effective based on past data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We leverage sequences, the count, outs, everything,\u0026rdquo; Panzarino said. \u0026ldquo;Looking at the current state and the previous pitches, it will score all the potential future routes a pitcher can choose. 
We give them reports before each game so that they can prepare, and then we look at success or failure after the game.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter a test run a year ago, the students have honed the technology and are working with the team again this year. Qualitatively speaking, they said they noticed results throughout the year.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When we first gave them our analysis, it would recommend certain stuff in certain situations,\u0026rdquo; Bennett said. \u0026ldquo;Maybe it would say a changeup should be thrown more in this situation. Then, when we\u0026rsquo;d get postgame data later, we\u0026rsquo;d see that more changeups were being thrown and were continuing to be effective.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;When I first saw what they were developing, I was beyond impressed,\u0026rdquo; Morales said. \u0026ldquo;We are very meticulous with collecting data in our program and trying to find ways to learn more about what is and what is not working for our athletes. It\u0026rsquo;s remarkable to see how they can take the data we had and leverage it in a way that allowed us to fine-tune our training.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERecently, at the 2020 Sports Innovation Hackathon, the group developed a similar solution for baseball. 
They finished as runner-up in the competition and hope to connect further with the Georgia Tech baseball team in the future.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Tons of theory has been written on how pitchers should approach sequencing in games, but this is a model that can show you the data about how well that works,\u0026rdquo; Panzarino said.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A group of undergraduate students at Georgia Tech are working with the softball team to provide an automated upgrade to players\u2019 training."}],"uid":"33939","created_gmt":"2020-04-01 17:57:21","changed_gmt":"2020-04-01 17:57:21","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-04-01T00:00:00-04:00","iso_date":"2020-04-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"520851":{"id":"520851","type":"image","title":"Softball","body":null,"created":"1459789200","gmt_created":"2016-04-04 17:00:00","changed":"1475895289","gmt_changed":"2016-10-08 02:54:49","alt":"Softball","file":{"fid":"206045","name":"softball.png","image_path":"\/sites\/default\/files\/images\/softball_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/softball_0.png","mime":"image\/png","size":301482,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/softball_0.png?itok=plm8gn5h"}}},"media_ids":["520851"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"633834":{"#nid":"633834","#data":{"type":"news","title":"Passing the Torch: Georgia Tech Roboticists Lead Future Generation of Women in the Field","body":[{"value":"\u003Cp\u003EThere\u0026rsquo;s a piece of advice \u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E Ph.D. student \u003Cstrong\u003EDe\u0026rsquo;Aira Bryant\u003C\/strong\u003E recalls most often when it comes to her adviser, \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Chair \u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;You\u0026rsquo;ve got to start somewhere,\u0026rdquo; said Bryant, a robotics student in the school. \u0026ldquo;I feel like whenever I\u0026rsquo;m going through my research, the way I approach it is I have these grand ideas, and I have to break it down to this and this and that. I\u0026rsquo;m the type who\u0026rsquo;s normally working on four or five things at the same time. Dr. Howard always tells me: \u0026lsquo;Okay, slow down. We have to start somewhere. We have to start somewhere so we have something to move toward.\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s an appropriate metaphor for Bryant, who began her career in computer science with no previous experience as an undergraduate student at the University of South Carolina. 
It also applies to all the other women in the field who, like Bryant, rely heavily on those who come before them and pass the torch to those who come after.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUnlike many robotics students, who tell stories of being introduced to the field through Lego Mindstorms kits that let them build and program their own robots, Bryant found the concepts of computer science and robotics to be the furthest things from her mind. They were completely foreign ideas that she had never given a second thought during her time in middle school and high school.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That wasn\u0026rsquo;t me,\u0026rdquo; Bryant said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInstead, she happened to take an Intro to Java class in her first year. There, she met her first collegiate mentor, \u003Cstrong\u003EKarina Liles\u003C\/strong\u003E. Liles was a graduate student who worked in a robotics lab and, after the first semester, invited Bryant to come work with her as an undergraduate assistant. Bryant saw it as a part-time job and a place she could have her own desk. She wasn\u0026rsquo;t thinking about it as much more than that.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I had no idea what her research was,\u0026rdquo; Bryant said. \u0026ldquo;I knew it was a robotics lab, so that was cool. And she was in education for low-resource communities. I came from a school that didn\u0026rsquo;t offer computer science at all, so I found that appealing.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOnce Bryant was introduced to the research process of asking and answering new questions, she was hooked: collecting data, programming and testing robots, then seeing children interact with them face-to-face.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It made all the difference,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt was a start, but it was still a new world. 
Neither of her parents had earned four-year degrees, and her dad had passed away when she was in middle school. When she told her mom and grandmother that she was interested in computer science, she was met with some hesitancy. But, while neither had experience in technology, they had raised her to be inquisitive and to seek out mentorship. That\u0026rsquo;s exactly what Bryant did.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrown into the deep end, she relied on Liles and a handful of other women she came across at South Carolina or at conferences like \u003Ca href=\u0022https:\/\/humanrobotinteraction.org\/\u0022\u003EHuman Robot Interaction\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/ghc.anitab.org\/\u0022\u003EGrace Hopper\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was drawn to women in the field, because the nurturing and the support from people who are also in an underrepresented group \u0026ndash; whether it\u0026rsquo;s gender or race or whatever \u0026ndash; they can talk to you about those specific challenges that you might come across,\u0026rdquo; Bryant said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEventually, that led her to Howard.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter her junior year in Columbia, Bryant applied to a program called \u003Ca href=\u0022https:\/\/cra.org\/cra-wp\/dreu\/\u0022\u003EDistributed Research Experiences for Undergraduates\u003C\/a\u003E (DREU). DREU matches minority undergraduate students with mentors who have signed up to host undergrads in their labs over the summer. Although students can be matched with anyone in the United States, Bryant\u0026rsquo;s mentor happened to be Howard.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I was so excited,\u0026rdquo; said Bryant, who knew of Howard\u0026rsquo;s research through her own work at South Carolina. 
The work she was doing with social robots for kids with autism aligned with Howard\u0026rsquo;s, and it wasn\u0026rsquo;t uncommon for the Georgia Tech professor\u0026rsquo;s name to be cited in one of their papers. \u0026ldquo;There was a student matched with a mentor in Hawaii, and everyone thought that was the luckiest one. I was like, \u0026lsquo;No, I\u0026rsquo;m pretty sure I got the best deal.\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBryant worked on a project in Howard\u0026rsquo;s lab that summer with three other undergrads. Howard was immediately impressed with Bryant because of her unique programming ability.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I remember needing someone to program the robot, and she was just like, \u0026lsquo;Oh, I can do it,\u0026rsquo;\u0026rdquo; Howard said. \u0026ldquo;She impressed me right away, and when it was time for her to choose a graduate program I knew she\u0026rsquo;d fit perfectly in our lab.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETheir work together now is impacting individuals with disabilities, making technology work for everybody, including those with motor, visual, or hearing impairments. They are investigating robot gendering and its impact on human trust and working toward inclusivity with programs like \u003Ca href=\u0022http:\/\/ai-4-all.org\/\u0022\u003EAI4All\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBryant is using the inspiration that Howard has provided her and feels a responsibility to continue that for the next generation of women roboticists. She is humbled by people who now look up to her the way she looked up to Howard, and she was left speechless by a young student who featured her in a school project for Black History Month.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen she goes to Grace Hopper, Bryant loves meeting the undergrads and passing on her advice about academics and the challenges women face in the field. 
She also watches to make sure they are asking questions, calling on those who look like they have a question but are afraid to ask.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I remember being that person in the room,\u0026rdquo; she said. \u0026ldquo;Women don\u0026rsquo;t just need representation, they need a voice. I want to be their champion, connect them to the right people.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd her biggest advice to them might sound familiar.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Just start,\u0026rdquo; she said. \u0026ldquo;You\u0026rsquo;ve got to start somewhere.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Ph.D. Student De\u0027Aira Bryant uses the leadership of adviser Ayanna Howard to help guide her and future generations of women in robotics."}],"uid":"33939","created_gmt":"2020-03-25 19:19:03","changed_gmt":"2020-03-25 19:19:03","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-03-25T00:00:00-04:00","iso_date":"2020-03-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622962":{"id":"622962","type":"image","title":"De\u0027Aira Bryant","body":null,"created":"1562099179","gmt_created":"2019-07-02 20:26:19","changed":"1562099179","gmt_changed":"2019-07-02 
20:26:19","alt":"","file":{"fid":"237242","name":"unadjustednonraw_thumb_29ba.jpg","image_path":"\/sites\/default\/files\/images\/unadjustednonraw_thumb_29ba.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/unadjustednonraw_thumb_29ba.jpg","mime":"image\/jpeg","size":250014,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/unadjustednonraw_thumb_29ba.jpg?itok=umnmWiYV"}}},"media_ids":["622962"],"related_links":[{"url":"https:\/\/www.cc.gatech.edu\/news\/628437\/startup-zyrobotics-creates-more-opportunities-impact","title":"Startup Zyrobotics Creates More Opportunities for Impact"},{"url":"https:\/\/www.youtube.com\/watch?v=JBg7nZXb1Vo","title":"Ph.D. Student Seeks to Help Children Through Robotics"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182940","name":"cc-research; ic-ai-ml; ic-robotics; ic-hcc"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"632297":{"#nid":"632297","#data":{"type":"news","title":"GT Computing Student Plans to Study Connections Between Law and Machine Learning as a J.P. Morgan Ph.D. 
Fellow","body":[{"value":"\u003Cp\u003ERecognized for his exceptional talent for using artificial intelligence (AI) to solve real-world problems, \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003Emachine learning\u003C\/a\u003E\u0026nbsp;Ph.D. student \u003Cstrong\u003EAshwin Vijayakumar \u003C\/strong\u003Ehas been named a 2020 recipient of the prestigious \u003Ca href=\u0022https:\/\/www.jpmorgan.com\/global\/technology\/artificial-intelligence\/awards\u0022\u003EJ.P. Morgan Ph.D. Fellowship\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVijayakumar typically studies how to develop machine-learning solutions for assistive technology and focuses on improving reasoning systems, modeling human preferences, and producing diverse outputs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHowever, with the funds from the fellowship, Vijayakumar plans to pivot and explore the relationship between machine learning and law.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;With machine learning still being relatively new, there are a lot of questions surrounding its impact legally. I want to dive into that and hopefully discover some connections that will make a broad impact. I wouldn\u0026rsquo;t be able to explore this new area without J.P. Morgan\u0026rsquo;s support,\u0026rdquo; said Vijayakumar, who is advised by \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E associate professor \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs a leading financial institution, J.P. Morgan is keen on collaborating with academia on ways to use AI to create better solutions, better protect their customers, and create better products. The awards are a part of the company\u0026rsquo;s $10 billion-plus annual investment in technology and innovation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our goal is to recognize and enable the next generation of leading AI researchers. 
We want to create an environment where researchers can inspire change and make a lasting impact in our communities and across our industry,\u0026rdquo; said \u003Cstrong\u003EManuela Veloso\u003C\/strong\u003E, head of J.P. Morgan AI Research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe fellowship begins in fall 2020, granting Vijayakumar $100,000 to cover tuition, a stipend, and travel expenses for technical conferences.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Machine learning Ph.D. student Ashwin Vijayakumar has been named a 2020 recipient of the prestigious J.P. Morgan Ph.D. Fellowship."}],"uid":"34773","created_gmt":"2020-02-11 15:18:40","changed_gmt":"2020-02-14 15:17:28","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-02-14T00:00:00-05:00","iso_date":"2020-02-14T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"632296":{"id":"632296","type":"image","title":"Ashwin Vijayakumar is a 2020 recipient of the J.P. Morgan Ph.D. 
Fellowship","body":null,"created":"1581434102","gmt_created":"2020-02-11 15:15:02","changed":"1581434102","gmt_changed":"2020-02-11 15:15:02","alt":"Ashwin Vijayakumar","file":{"fid":"240575","name":"img.jpg","image_path":"\/sites\/default\/files\/images\/img.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/img.jpg","mime":"image\/jpeg","size":721128,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/img.jpg?itok=BFnngDS2"}}},"media_ids":["632296"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"368","name":"Fellowship"},{"id":"169385","name":"Student award"},{"id":"183914","name":"JP Morgan"},{"id":"4175","name":"finance"},{"id":"4121","name":"law"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"632082":{"#nid":"632082","#data":{"type":"news","title":"Changing the Conversation: Georgia Tech Researchers Provide New Approach to Automated Story Generation","body":[{"value":"\u003Cp\u003EIt\u0026rsquo;s a situation familiar to anyone who\u0026rsquo;s ever communicated with a voice assistant on a smart device. You pose a request: \u0026ldquo;Hey Voice Assistant, tell me a story about Georgia Tech.\u0026rdquo; More often than not, you get a related response \u0026ndash; \u0026ldquo;Georgia Tech is located in Atlanta, Georgia. 
Would you like me to provide you with directions?\u0026rdquo; \u0026ndash; but one with slightly unnatural language and only limited information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite the enormous strides made in artificial intelligence to develop systems that can answer simple questions and requests, the kinds of natural conversational language humans have with each other when giving more complex directions or telling stories has thus far been out of reach.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch from \u003Ca href=\u0022http:\/\/gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, however, provides a novel approach that improves the combination of automated story generation with natural language. The development is an important step in providing AI assistants the capability to more naturally converse with humans.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Let\u0026rsquo;s think of a future version of Siri or Alexa, where you have a complex task that\u0026rsquo;s not just \u0026lsquo;Look this thing up on the internet,\u0026rsquo; or \u0026lsquo;Tell me what the weather is outside,\u0026rsquo;\u0026rdquo; said Mark Riedl, an associate professor at Georgia Tech and the faculty lead on the research. \u0026ldquo;Maybe you want to plan your day or a birthday party. Think of the response like a little story, a narrative that conveys the requested information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s a missing capability in AI \u0026ndash; they just don\u0026rsquo;t understand us or communicate with us in the same ways that we understand each other.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERiedl and his team approached the challenge by viewing the exchange of information as stories \u0026ndash; a series of events, one after the other, that leads to some conclusion. 
Past research on the topic used patterns in language to identify how stories are constructed \u0026ndash; namely that a verb generally changes the action and conveys a new event in a story.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;By boiling down these stories drawn from the internet to essential verbs and actions, we can extract patterns from stories better,\u0026rdquo; Riedl said. \u0026ldquo;There are a lot of ways to talk about marriage, but at the end of the day someone is marrying someone else.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis paper, the third in the series, took the next step: If you take away all the words to identify the patterns in a story, you need to be able to put them back in naturally and intelligently in a way that humans are accustomed to. Put simply, it\u0026rsquo;s like building an outline and then filling in the details.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe system works by building the outline through a neural network trained on sequencing events. With the help of story examples drawn from the internet, it applies machine learning to produce a series of events, one leading to the most likely next outcome. That outline guides a second neural network that applies natural language \u0026ndash; grammar, syntax, spelling, everything else you need to make the story intelligible \u0026ndash; to produce more elaborate sentences.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you\u0026rsquo;re asking for directions for how a birthday party should go, you don\u0026rsquo;t want just \u0026lsquo;Jill eats cake; Jill opens presents,\u0026rsquo;\u0026rdquo; Riedl said. \u0026ldquo;You want something more akin to the stories we share as humans. It\u0026rsquo;s actually more difficult for us to process information when it\u0026rsquo;s delivered in a way we\u0026rsquo;re not accustomed to.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers found that an ensemble approach works the best. 
A series of five algorithms, each with different capabilities in accuracy and natural language generation, produces the best stories. Because no one algorithm is uniformly better at every aspect of the task, each candidate sentence is run through all five to find the version with the highest confidence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One technique might provide bland sentences, but is accurate with the actual content,\u0026rdquo; Riedl said. \u0026ldquo;Another might be very good at putting in a narrative flourish, but they fail more often. You want that nicer sentence, but you also want it to be able to catch mistakes in the content.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe ensemble approach scored significantly higher in human studies than the individual algorithms alone. Human trust in their AI and robot assistants, Riedl said, was key to adoption in the future.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The key is that you want to place that trust in your machine counterpart, but it has to earn that trust on correctness and accuracy,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper is titled \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1909.03480\u0022\u003E\u003Cem\u003EStory Realization: Expanding Plot Events into Sentences\u003C\/em\u003E\u003C\/a\u003E, and will be presented at the \u003Ca href=\u0022https:\/\/aaai.org\/Conferences\/AAAI-20\/\u0022\u003E34\u003Csup\u003Eth\u003C\/sup\u003E AAAI Conference on Artificial Intelligence\u003C\/a\u003E on Feb. 7-12 in New York City. 
The research is funded under a grant from \u003Ca href=\u0022https:\/\/www.darpa.mil\/\u0022\u003EDARPA\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Research from Georgia Tech\u2019s School of Interactive Computing provides a novel approach that improves the combination of automated story generation with natural language."}],"uid":"33939","created_gmt":"2020-02-04 15:56:44","changed_gmt":"2020-02-07 18:44:39","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-02-04T00:00:00-05:00","iso_date":"2020-02-04T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"632081":{"id":"632081","type":"image","title":"Amazon Alexa","body":null,"created":"1580831771","gmt_created":"2020-02-04 15:56:11","changed":"1580831771","gmt_changed":"2020-02-04 15:56:11","alt":"","file":{"fid":"240496","name":"alexa-alexa-talking-amazon-cortana-717235.jpg","image_path":"\/sites\/default\/files\/images\/alexa-alexa-talking-amazon-cortana-717235.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/alexa-alexa-talking-amazon-cortana-717235.jpg","mime":"image\/jpeg","size":52083,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/alexa-alexa-talking-amazon-cortana-717235.jpg?itok=CD6Eyss8"}}},"media_ids":["632081"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1317","name":"News Briefs"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71881","name":"Science and 
Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"632102":{"#nid":"632102","#data":{"type":"news","title":"Jill Watson Team Reaches Semifinals in IBM AI XPrize Competition","body":[{"value":"\u003Cp\u003EAlgorithms that help answer the stream of questions college students have each semester might be welcome by any instructor who can offload FAQs to such an artificially intelligent teaching assistant (TA).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJill Watson \u0026ndash; Georgia Tech\u0026rsquo;s AI designed explicitly for this purpose \u0026ndash; turned four years old this January, with the AI\u0026rsquo;s birthday coinciding with the announcement of the\u0026nbsp;\u003Ca href=\u0022https:\/\/ai.xprize.org\/prizes\/artificial-intelligence\/teams\u0022 target=\u0022_blank\u0022\u003E10 semifinalists for IBM\u0026rsquo;s AI XPrize competition\u003C\/a\u003E. 
Georgia Tech\u0026rsquo;s\u0026nbsp;\u003Ca href=\u0022https:\/\/ai.xprize.org\/prizes\/artificial-intelligence\/teams\/emprize\u0022 target=\u0022_blank\u0022\u003EemPrize team\u003C\/a\u003E, led by Professor of Interactive Computing\u0026nbsp;\u003Cstrong\u003EAshok Goel\u0026nbsp;\u003C\/strong\u003Eand utilizing Jill Watson as the key technology, was named as one of the semifinalists.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe competition started in 2016, the year of Jill\u0026rsquo;s arrival in a graduate computer science\u0026nbsp;course at Georgia Tech, and has \u0026ldquo;sought to accelerate the adoption of AI technologies and spark creative, innovative, and audacious demonstrations of the technology that are truly scalable to solve societal grand challenges.\u0026rdquo; After nearly four calendar years, XPrize will name a winner in April.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs part of the GT emPrize team\u0026rsquo;s work, the Jill Watson TA not only answers student questions about course requirements but can answer questions about another AI named\u0026nbsp;\u003Ca href=\u0022http:\/\/vera.cc.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EVERA\u003C\/a\u003E, or the Virtual Ecological Research Assistant.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJill helps users learn how to use VERA, a system which enables students in GT\u0026rsquo;s Intro to Biology course (and online science seekers) to create their own ecological models from\u0026nbsp;a\u0026nbsp;web browser. 
Unlike the Jill Watson TA, which is currently used only by GT students, VERA is open to anyone with an internet connection.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother part of emPrize\u0026nbsp;is the Jill Social Agent, whose lead designer,\u0026nbsp;\u003Cstrong\u003EIda Camacho\u003C\/strong\u003E, is a recent\u0026nbsp;alumna of Georgia Tech\u0026rsquo;s Online Master of Science in Computer Science program (OMSCS) and understands the pressures and uncertainties of online learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe Jill Social Agent in essence gives students just starting online courses a chance at \u0026ldquo;speed friending\u0026rdquo;. If online students feel they have more peer support and connections from the start, this might\u0026nbsp;translate into success in the course. Hear from Camacho on the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.spreaker.com\/user\/10751784\/tu-ep10-jill-social-ai-online-learninG\u0022 target=\u0022_blank\u0022\u003ETech Unbound podcast with GVU Center\u003C\/a\u003E\u0026nbsp;as she reveals some of her AI\u0026rsquo;s design and the educational experience that informed her work on emPrize.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELearn more at\u0026nbsp;\u003Ca href=\u0022http:\/\/emprize.gatech.edu\/\u0022\u003Ehttp:\/\/emprize.gatech.edu\/\u003C\/a\u003E\u0026nbsp;or explore a\u0026nbsp;\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/JillWatsonTurns4\/Dashboard?:display_count=y\u0026amp;:origin=viz_share_link:showVizHome=no\u0022 target=\u0022_blank\u0022\u003Etimeline of Jill\u0026#39;s evolution\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech\u0027s emPrize team was one of 10 semifinalists for the IBM Watson XPRIZE competition, which carries a prize of $5 million."}],"uid":"33939","created_gmt":"2020-02-04 20:13:55","changed_gmt":"2020-02-04 20:13:55","author":"David 
Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-02-04T00:00:00-05:00","iso_date":"2020-02-04T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"631547":{"id":"631547","type":"image","title":"Timeline: Jill Watson AI at 4","body":null,"created":"1579883925","gmt_created":"2020-01-24 16:38:45","changed":"1580406385","gmt_changed":"2020-01-30 17:46:25","alt":"Timeline: Jill Watson AI at 4yo","file":{"fid":"240330","name":"Jill Timeline at 4yo.png","image_path":"\/sites\/default\/files\/images\/Jill%20Timeline%20at%204yo.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Jill%20Timeline%20at%204yo.png","mime":"image\/png","size":743294,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Jill%20Timeline%20at%204yo.png?itok=omFER7Lg"}}},"media_ids":["631547"],"groups":[{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182525","name":"cc-research; ic-hcc; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJosh Preston\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearch Communications Manager, GVU Center and College of Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"631545":{"#nid":"631545","#data":{"type":"news","title":"Jill Watson, an AI Pioneer in Education, Turns 4","body":[{"value":"\u003Cp\u003EGeorgia Tech\u0026rsquo;s most well-known artificially intelligent teaching assistant, Jill Watson, turns four years old this January. 
The brainchild of \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E, a professor in the School of Interactive Computing, and launched at the start of 2016, the virtual TA was introduced into one of the courses for the then-fledgling Online Master of Science in Computer Science (OMSCS) program, now one of Georgia Tech\u0026rsquo;s largest graduate degree programs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudents and faculty would be forgiven for thinking Jill Watson is a single teaching assistant. In fact, each course that utilizes the Jill TA has its own custom \u0026ldquo;knowledge base\u0026rdquo; that the AI leverages to answer basic student questions 24\/7.\u003C\/p\u003E\r\n\r\n\u003Ch5\u003E\u003Ca href=\u0022https:\/\/public.tableau.com\/views\/JillWatsonTurns4\/Dashboard?:display_count=y\u0026amp;:origin=viz_share_link:showVizHome=no\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EExplore the Timeline of Jill\u0026rsquo;s Growth\u003C\/strong\u003E\u003C\/a\u003E\u003C\/h5\u003E\r\n\r\n\u003Cp\u003EIn addition, a new AI, the \u003Cstrong\u003EJill Social Agent\u003C\/strong\u003E, was designed and launched in 2019 to quickly connect students and get them working together. The agent was developed in part as a response to the high attrition rates that plague online learning in general.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe lead architect for the Jill Social Agent, \u003Cstrong\u003EIda Camacho\u003C\/strong\u003E, OMSCS \u0026rsquo;19, discusses\u0026nbsp;the\u0026nbsp;AI\u0026nbsp;on an episode of the\u0026nbsp;\u003Ca href=\u0022https:\/\/gvu.gatech.edu\/tech-unbound-podcast\u0022\u003ETech Unbound Podcast\u003C\/a\u003E from the GVU Center. 
It\u0026rsquo;s a fascinating inside look at Camacho\u0026rsquo;s approach to building social structures for online education and her own journey as an OMSCS student.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther major milestones for the Jill TA in 2019:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EIntroduced in a residential classroom for the first time.\u003C\/li\u003E\r\n\t\u003Cli\u003EDeployed in its first non-CS course (Intro to Biology).\u003C\/li\u003E\r\n\t\u003Cli\u003ECustomized to train users on the \u003Ca href=\u0022http:\/\/vera.cc.gatech.edu\/\u0022\u003EVERA AI\u003C\/a\u003E, an ecology modeling system.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EThe new decade promises more educational advances made possible by the Jill Watson AI framework. Learn more at \u003Ca href=\u0022http:\/\/emprize.gatech.edu\/\u0022\u003Eemprize.gatech.edu\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech\u0026rsquo;s most well-known artificially intelligent teaching assistant, Jill Watson, turns four years old this January.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech\u2019s most well-known artificially intelligent teaching assistant, Jill Watson, turns four years old this January."}],"uid":"27592","created_gmt":"2020-01-24 16:30:24","changed_gmt":"2020-01-24 17:23:58","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-01-24T00:00:00-05:00","iso_date":"2020-01-24T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"631547":{"id":"631547","type":"image","title":"Timeline: Jill Watson AI at 4","body":null,"created":"1579883925","gmt_created":"2020-01-24 16:38:45","changed":"1580406385","gmt_changed":"2020-01-30 17:46:25","alt":"Timeline: Jill Watson AI at 
4yo","file":{"fid":"240330","name":"Jill Timeline at 4yo.png","image_path":"\/sites\/default\/files\/images\/Jill%20Timeline%20at%204yo.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Jill%20Timeline%20at%204yo.png","mime":"image\/png","size":743294,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Jill%20Timeline%20at%204yo.png?itok=omFER7Lg"}}},"media_ids":["631547"],"related_links":[{"url":"http:\/\/gvu.gatech.edu\/news\/ai-agent-breaks-down-social-barriers-online-education","title":"A Closer Look at the Jill Social Agent"},{"url":"http:\/\/emprize.gatech.edu\/","title":"Georgia Tech Finalist in IBM AI XPrize Competition"},{"url":"https:\/\/www.spreaker.com\/user\/10751784\/tu-ep10-jill-social-ai-online-learninG","title":"Tech Unbound EP10: Online Education Gets a Social Boost with Artificial Intelligence"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Cem\u003EGVU Center and College of Computing\u003C\/em\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"630839":{"#nid":"630839","#data":{"type":"news","title":"Award Recognizes Professor\u0027s Impact on the Evolution of Online Learning","body":[{"value":"\u003Cp\u003EA longtime College of Computing 
professor is among the recently announced winners of the 2020 Regents\u0026rsquo; Awards for Teaching Excellence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAshok Goel\u003C\/strong\u003E will receive a Regents\u0026rsquo; Award for the Scholarship of Teaching and Learning from the University System of Georgia Board of Regents. The Regents\u0026rsquo; Award for Goel recognizes his groundbreaking contributions to the evolution of online learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGoel and the other Regents\u0026rsquo; Award winners, one of whom is also from Georgia Tech, are being honored as part of the Board of Regents\u0026rsquo; upcoming Scholarship Gala, which is set for Feb. 21.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWidely known as the creators of Jill Watson \u0026ndash; \u003Ca href=\u0022https:\/\/www.news.gatech.edu\/features\/jill-watsons-terrific-twos\u0022\u003Ethe world\u0026rsquo;s first artificially intelligent (AI) teaching assistant\u003C\/a\u003E \u0026ndash; Goel and his Design and Intelligence Lab team continue to build upon the Jill platform to create next-generation AI tools that increase engagement, help retention, and improve learning outcomes for online teachers and learners.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/youtu.be\/WbCguICyfTA\u0022 target=\u0022_blank\u0022\u003E[RELATED: A Teaching Assistant Named Jill Watson]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;re moving well beyond answering questions about a particular class and getting closer to developing AI technologies that can scale globally, work in tandem with other AIs, and truly be transformative for a broad spectrum of online learners,\u0026rdquo; said \u003Ca href=\u0022http:\/\/dilab.gatech.edu\/ashok-k-goel\/\u0022\u003EGoel\u003C\/a\u003E, a professor in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive 
Computing\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;m honored to receive a Regents\u0026rsquo; Award and proud to accept it on behalf of the \u003Ca href=\u0022http:\/\/dilab.gatech.edu\/\u0022\u003EDesign and Intelligence Lab\u003C\/a\u003E.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUnder the \u003Ca href=\u0022http:\/\/emprize.gatech.edu\/\u0022\u003Eteam name Emprize\u003C\/a\u003E, Goel and his lab team are currently semifinalists in the $5 million IBM Watson Artificial Intelligence XPrize Initiative. Their entry bundles Jill Watson with other AI-learning agents developed by the team and currently deployed in online and \u003Ca href=\u0022https:\/\/b.gatech.edu\/33x9qZ2\u0022\u003Eresidential courses\u003C\/a\u003E. XPrize finalists will be announced in conjunction with an Association for the Advancement of Artificial Intelligence conference in New York City next month.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"GT Computing Professor Ashok Goel has been named as a 2020 Regents\u0027 Award winner."}],"uid":"32045","created_gmt":"2020-01-09 17:47:09","changed_gmt":"2020-01-09 18:00:45","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-01-09T00:00:00-05:00","iso_date":"2020-01-09T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"583441":{"id":"583441","type":"image","title":"Ashok Goel TEDxSanFrancisco","body":null,"created":"1478111111","gmt_created":"2016-11-02 18:25:11","changed":"1478111111","gmt_changed":"2016-11-02 18:25:11","alt":"","file":{"fid":"222419","name":"Ashok 
Screenshot.png","image_path":"\/sites\/default\/files\/images\/Ashok%20Screenshot.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Ashok%20Screenshot.png","mime":"image\/png","size":248585,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Ashok%20Screenshot.png?itok=K-RsrOsT"}}},"media_ids":["583441"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182670","name":"goel"},{"id":"46361","name":"GT computing"},{"id":"183501","name":"Emprize"},{"id":"169183","name":"Jill Watson"},{"id":"174671","name":"xprize"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Sr. Communications Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=Regents\u0027%20Award\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"630479":{"#nid":"630479","#data":{"type":"news","title":"ML@GT Adds Six New Associate Directors to Leadership Team","body":[{"value":"\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E continues to diversify and expand its leadership team. 
Starting in January, the leadership team will add \u003Cstrong\u003EDeven Desai, Polo Chau, Mark Davenport, Yao Xie, Mark Riedl, \u003C\/strong\u003Eand \u003Cstrong\u003EGeorge Lan\u003C\/strong\u003E as associate directors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDesai, an associate professor in the \u003Ca href=\u0022https:\/\/www.scheller.gatech.edu\/directory\/faculty\/desai\/index.html\u0022\u003EScheller College of Business\u003C\/a\u003E, will be the center\u0026rsquo;s first associate director for Legal, Policy, Ethics, and Machine Learning. Not a technologist by training, Desai will draw from his experience working at Princeton\u0026#39;s Center for Information Technology Policy and at Google as Academic Research Counsel to help policy makers, legal scholars, and technologists work better together. This includes helping each party understand how a given technology works and what issues it might raise.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am excited to be part of ML@GT because of the opportunity to be part of a world-class group of thinkers and to connect our work to the world. I believe there is a need to bridge the worlds of technology and law, policy, and ethics,\u0026rdquo; said Desai. \u0026ldquo;ML@GT is poised to increase not only machine learning insights and breakthroughs but also the way in which machine learning is built and used to serve society. I am honored and thrilled to be part of building that future.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EXie, an associate professor in the \u003Ca href=\u0022https:\/\/www.isye.gatech.edu\/\u0022\u003EH. Milton Stewart School of Industrial and Systems Engineering (ISyE)\u003C\/a\u003E, is the first woman to join the leadership team. 
She will serve as the associate director for machine learning and data science, where she will build synergy between ongoing research and education efforts in data science and machine learning as Georgia Tech develops a leading program in these areas.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am particularly excited to work with the broader community of students and faculty on campus who are interested or involved with machine learning and data science and foster their participation,\u0026rdquo; said Xie.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELan, also an associate professor in ISyE, has been appointed as the associate director for machine learning and statistics. In this role, Lan will promote research at the intersection of optimization, statistics, and machine learning, along with its applications in engineering. He will also help facilitate communication for students coming from different home colleges or schools across campus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am excited to be joining the team with active and dynamic academic leaders. I look forward to working with them to address a diverse set of challenges that ML@GT faces, e.g., being adaptive to the priorities and criteria of our affiliated faculty members and students across different academic units,\u0026rdquo; said Lan.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs the associate director for machine learning and artificial intelligence, Riedl, an associate professor in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, will coordinate ML@GT\u0026rsquo;s strategy with respect to the broader field of artificial intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Artificial intelligence and machine learning have the potential to radically change virtually every aspect of our lives. With thought and care, these technologies can be a force for good. 
Georgia Tech is well-positioned to be a major voice in how technology and policy shape the future,\u0026rdquo; said Riedl.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith more corporations integrating machine learning and artificial intelligence into their businesses, the center\u0026rsquo;s need for managing those relationships has increased significantly. Chau, an associate professor in the \u003Ca href=\u0022https:\/\/cse.gatech.edu\/\u0022\u003ESchool of Computational Science and Engineering\u003C\/a\u003E, will lead those relationships as the associate director for corporate relations for machine learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I enjoy bringing people together, connecting industry with Georgia Tech researchers, bridging disciplines and innovating at their intersections. I\u0026rsquo;m excited to begin my new role as it will be a great way to help Georgia Tech further expand its national and global footprint,\u0026rdquo; said Chau.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs the associate director for community and students, Davenport is charged with creating a tight-knit community among faculty and students. Davenport, an associate professor in the \u003Ca href=\u0022https:\/\/www.ece.gatech.edu\/\u0022\u003ESchool of Electrical and Computer Engineering\u003C\/a\u003E, will work closely with the center staff to coordinate events and other opportunities to increase discussion and collaboration between research units.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe six new members will join \u003Ca href=\u0022http:\/\/ml.gatech.edu\/leadership\u0022\u003Eexisting leadership members\u003C\/a\u003E \u003Cstrong\u003EIrfan Essa, Justin Romberg, Zsolt Kira, \u003C\/strong\u003Eand \u003Cstrong\u003ELe Song. 
\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Ch4\u003EAbout the Machine Learning Center at Georgia Tech\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EThe Machine Learning Center at Georgia Tech is an interdisciplinary research center bringing together more than 190 faculty members and 60 machine learning Ph.D. students from across the institute for meaningful collaboration and innovation in machine learning and artificial intelligence. Students and faculty are experts in areas including, but not limited to, computer vision, natural language processing, robotics, deep learning, ethics and fairness, computational finance, information security, and logistics and manufacturing. For more information, visit \u003Ca href=\u0022http:\/\/www.ml.gatech.edu\u0022\u003Ewww.ml.gatech.edu\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Machine Learning Center at Georgia Tech enters the new year with an expanded leadership team. 
"}],"uid":"34773","created_gmt":"2020-01-03 21:55:17","changed_gmt":"2020-01-06 13:00:55","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-01-06T00:00:00-05:00","iso_date":"2020-01-06T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"630495":{"id":"630495","type":"image","title":"ML@GT adds six new associate directors to the leadership team from across the institute.","body":null,"created":"1578314978","gmt_created":"2020-01-06 12:49:38","changed":"1578315834","gmt_changed":"2020-01-06 13:03:54","alt":"ML@GT adds six new associate directors to the leadership team","file":{"fid":"240039","name":"ML_AssociateDirectors.png","image_path":"\/sites\/default\/files\/images\/ML_AssociateDirectors.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ML_AssociateDirectors.png","mime":"image\/png","size":804221,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ML_AssociateDirectors.png?itok=iMIhhF4U"}},"630498":{"id":"630498","type":"image","title":"Deven Desai, Associate Director for Legal, Policy, Ethics, and Machine Learning","body":null,"created":"1578315260","gmt_created":"2020-01-06 12:54:20","changed":"1578315260","gmt_changed":"2020-01-06 12:54:20","alt":"Deven Desai","file":{"fid":"240042","name":"desai_deven_profile.jpg","image_path":"\/sites\/default\/files\/images\/desai_deven_profile_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/desai_deven_profile_0.jpg","mime":"image\/jpeg","size":73508,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/desai_deven_profile_0.jpg?itok=LyBZKrKM"}},"630501":{"id":"630501","type":"image","title":"Yao Xie, Associate Director for Machine Learning and Data Science ","body":null,"created":"1578315482","gmt_created":"2020-01-06 12:58:02","changed":"1578315482","gmt_changed":"2020-01-06 
12:58:02","alt":"Yao Xie","file":{"fid":"240045","name":"yao_xie_2013_3.jpg","image_path":"\/sites\/default\/files\/images\/yao_xie_2013_3.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/yao_xie_2013_3.jpg","mime":"image\/jpeg","size":112071,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/yao_xie_2013_3.jpg?itok=YF1suppd"}},"630499":{"id":"630499","type":"image","title":"George Lan, Associate Director for Machine Learning and Statistics","body":null,"created":"1578315328","gmt_created":"2020-01-06 12:55:28","changed":"1578315328","gmt_changed":"2020-01-06 12:55:28","alt":"George Lan","file":{"fid":"240043","name":"gl_2.jpg","image_path":"\/sites\/default\/files\/images\/gl_2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/gl_2.jpg","mime":"image\/jpeg","size":63569,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/gl_2.jpg?itok=H7Kg9FBb"}},"630496":{"id":"630496","type":"image","title":"Mark Riedl, Associate Director for Machine Learning and Artificial Intelligence","body":null,"created":"1578315077","gmt_created":"2020-01-06 12:51:17","changed":"1578315077","gmt_changed":"2020-01-06 12:51:17","alt":"Mark Riedl","file":{"fid":"240040","name":"mark_riedl_007.jpg","image_path":"\/sites\/default\/files\/images\/mark_riedl_007.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/mark_riedl_007.jpg","mime":"image\/jpeg","size":213042,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/mark_riedl_007.jpg?itok=cvjhVAME"}},"630500":{"id":"630500","type":"image","title":"Polo Chau, Associate Director for Corporate Relations for Machine Learning","body":null,"created":"1578315397","gmt_created":"2020-01-06 12:56:37","changed":"1578315397","gmt_changed":"2020-01-06 12:56:37","alt":"Polo 
Chau","file":{"fid":"240044","name":"polo_chau_550x688_01_2.jpg","image_path":"\/sites\/default\/files\/images\/polo_chau_550x688_01_2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/polo_chau_550x688_01_2.jpg","mime":"image\/jpeg","size":222467,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/polo_chau_550x688_01_2.jpg?itok=A5nX3-qd"}},"630497":{"id":"630497","type":"image","title":"Mark Davenport, Associate Director for Community and Students","body":null,"created":"1578315143","gmt_created":"2020-01-06 12:52:23","changed":"1578315143","gmt_changed":"2020-01-06 12:52:23","alt":"Mark Davenport","file":{"fid":"240041","name":"davenport-square.jpg","image_path":"\/sites\/default\/files\/images\/davenport-square.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/davenport-square.jpg","mime":"image\/jpeg","size":31569,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/davenport-square.jpg?itok=z5EYby4U"}}},"media_ids":["630495","630498","630501","630499","630496","630500","630497"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"47223","name":"College of Computing"},{"id":"37041","name":"Computational Science and Engineering"},{"id":"1299","name":"GVU Center"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"129","name":"Institute and Campus"},{"id":"134","name":"Student and Faculty"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"629259":{"#nid":"629259","#data":{"type":"news","title":"Georgia Tech Researchers Explore New Ways to Give Navigation Directions to Robots","body":[{"value":"\u003Cp\u003ERobots can navigate buildings, but how do they know where to go? While some robots can follow pre-programmed routes, or be controlled by setting waypoints on a map, these methods are inflexible and can be unnatural to use. Researchers at Georgia Tech believe the best way to give robots navigation instructions is by talking to them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Giving natural language instructions to a robot is a fundamental research problem on the critical path to developing more flexible domestic robots that can work with people,\u0026rdquo; said \u003Cstrong\u003EPeter Anderson\u003C\/strong\u003E, a research scientist at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn a \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1907.02022.pdf\u0022\u003Erecent paper\u003C\/a\u003E, Georgia Tech has introduced a new way for robots to reason about navigation instructions in an unknown environment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team created a semantic map representation that updates each time the robot moves or sees something new. To reason about navigation instructions using this map, the lab found a way to leverage an algorithm used in classical robotics and apply it to artificial intelligence. The algorithm, called Bayesian state estimation, usually tracks the location of a robot from sensor measurements like lidar and wheel odometry. 
By manipulating the algorithm, Georgia Tech says its robots can use it to model language instruction inputs instead.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper got its title, \u0026quot;Chasing Ghosts: Instruction Following as Bayesian State Tracking,\u0026quot; because rather than tracking a robot from sensor measurements, the team is tracking the likely trajectory taken by an ideal agent or human demonstrator in response to the instructions. In this approach, the sensor measurements are the instructions themselves. This algorithm allows the agent to \u0026ldquo;reason\u0026rdquo; about all the different trajectories it could take and the probability of each trajectory when completing a task. By using an explicit map, researchers can easily inspect the model to see where the agent thinks the goal is and where it is likely to move next.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurrently, the robots move in simulated reconstructions of buildings, and communication is through written text, though some applications and off-the-shelf speech-to-text systems could work in conjunction with the existing system, according to researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Spoken language would definitely be more natural in many situations, so we might in the future investigate models that go directly from speech to robot actions,\u0026rdquo; said Anderson.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnderson particularly likes to think about this work with regard to telepresence robots, though it could be applied to any robot.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Telepresence robots are a great idea, but they are not as popular as they could be. Maybe we need smarter, more natural robots that just go where you tell them to go and look at what you ask them to look at,\u0026rdquo; said Anderson.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThink about all of the time that is lost commuting to work and walking to meetings. 
Imagine how climate change might benefit if people needed to travel less for business. Anderson hopes this work will allow people to focus more on their meetings and conversations, and perhaps help address climate change, rather than micromanaging a robot or jetting off around the world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work will be presented in December at the \u003Ca href=\u0022https:\/\/neurips.cc\/\u0022\u003EThirty-third Conference on Neural Information Processing Systems (NeurIPS)\u003C\/a\u003E 2019 in Vancouver, British Columbia.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The latest work from Georgia Tech researchers finds a way to give better directions to robots."}],"uid":"34773","created_gmt":"2019-11-22 16:00:25","changed_gmt":"2019-12-06 14:41:00","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-11-22T00:00:00-05:00","iso_date":"2019-11-22T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"629258":{"id":"629258","type":"image","title":"Anderson and his co-authors will present this work at the Thirty-third Conference on Neural Information Processing Systems (NeurIPS) 2019 in Vancouver, British Columbia.","body":null,"created":"1574438198","gmt_created":"2019-11-22 15:56:38","changed":"1574438198","gmt_changed":"2019-11-22 15:56:38","alt":"Map of robot moving through building","file":{"fid":"239649","name":"Screen Shot 2019-11-08 at 10.53.41 
AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-11-08%20at%2010.53.41%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-11-08%20at%2010.53.41%20AM.png","mime":"image\/png","size":2435837,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-11-08%20at%2010.53.41%20AM.png?itok=X1EqTTVJ"}}},"media_ids":["629258"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"629306":{"#nid":"629306","#data":{"type":"news","title":"ML@GT Displays Diverse Research Interests at NeurIPS","body":[{"value":"\u003Cp\u003EWith 30\u0026nbsp;papers to present, the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E will make a strong showing at this year\u0026rsquo;s Neural Information Processing Systems (NeurIPS) conference, Dec. 8-14 in Vancouver, British Columbia.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference fosters the exchange of research on the theoretical, technological, biological, and mathematical aspects of neural information processing systems. 
ML@GT research spans all of the categories, including work on \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1908.07896.pdf\u0022\u003Eneural data\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/b.gatech.edu\/2NS3Bz9\u0022\u003Efairness in machine learning algorithms\u003C\/a\u003E, and \u003Ca href=\u0022http:\/\/bit.ly\/2NEH1Lr\u0022\u003Eteaching artificial intelligence to work in changing environments\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;NeurIPS continues to be an exciting conference to attend because of the diverse research that is being presented each year. It is one of the most sought-after and anticipated conferences every year, and it\u0026rsquo;s good to see ML@GT have a good variety of papers being accepted,\u0026rdquo; said \u003Cstrong\u003ETuo Zhao\u003C\/strong\u003E, an assistant professor in the \u003Ca href=\u0022https:\/\/www.isye.gatech.edu\/\u0022\u003EH. Milton Stewart School of Industrial and Systems Engineering (ISyE)\u003C\/a\u003E. Zhao has three accepted papers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENeurIPS also continues to be a hotspot for major technology companies like Google, Microsoft, and Facebook to recruit new talent.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo see a full list and recaps of ML@GT\u0026rsquo;s accepted papers, \u003Ca href=\u0022http:\/\/bit.ly\/2WTlnGo\u0022\u003Eclick here\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech will present 30 papers at one of the hottest conferences in artificial intelligence."}],"uid":"34773","created_gmt":"2019-11-25 13:50:19","changed_gmt":"2019-11-25 13:50:19","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-11-25T00:00:00-05:00","iso_date":"2019-11-25T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"628944":{"id":"628944","type":"image","title":"Georgia Tech will 
present 30 papers at the Thirty-third Conference on Neural Information Processing Systems","body":null,"created":"1573672076","gmt_created":"2019-11-13 19:07:56","changed":"1573672217","gmt_changed":"2019-11-13 19:10:17","alt":"NeurIPS 2019","file":{"fid":"239533","name":"NeurIPS 2019_Twitter.png","image_path":"\/sites\/default\/files\/images\/NeurIPS%202019_Twitter_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/NeurIPS%202019_Twitter_0.png","mime":"image\/png","size":764596,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/NeurIPS%202019_Twitter_0.png?itok=fHpwKoXh"}}},"media_ids":["628944"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"629228":{"#nid":"629228","#data":{"type":"news","title":"GT Experts Bring Diverse Perspectives on the Challenges and Importance of Algorithmic Fairness","body":[{"value":"\u003Cp\u003EAcademics and industry experts are still not entirely on the same page when it comes to researching fairness and bias in machine learning, even though the results impact people in huge ways, such as if they can receive an organ transplant, are recognized by autonomous vehicles, or 
advance in the hiring process.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo help aid discussion around this hot topic, \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003Ethe Machine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E hosted a seminar and panel discussion about the work that its faculty members are doing in these areas. On Nov. 6, four faculty members affiliated with ML@GT presented their recent research focused on different aspects of fairness and bias.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPanelist and \u003Ca href=\u0022https:\/\/www.isye.gatech.edu\/\u0022\u003EH. Milton Stewart School of Industrial and Systems Engineering (ISyE)\u003C\/a\u003E assistant professor \u003Cstrong\u003ERachel Cummings\u003C\/strong\u003E encouraged attendees to make fairness a priority.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This field is still so new but also so important. We need more people doing fairness research,\u0026rdquo; said Cummings.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECummings\u0026rsquo; presentation focused on privacy, data, and algorithmic fairness. Her colleagues \u003Cstrong\u003ESwati Gupta\u003C\/strong\u003E, an assistant professor in ISyE, and \u003Cstrong\u003EJudy Hoffman\u003C\/strong\u003E, an assistant professor in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E, discussed the mathematics of bias and fairness and analyzing fairness in computer vision systems, respectively.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This work encourages us to get back to the basics of what we are doing and why we are doing it. 
Looking into how these algorithms actually affect people is huge, and we should all be thinking about the impact our work can have on all kinds of people,\u0026rdquo; said Gupta.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe session was moderated by \u003Cstrong\u003EDeven Desai\u003C\/strong\u003E, an associate professor in the \u003Ca href=\u0022https:\/\/www.scheller.gatech.edu\/index.html\u0022\u003EScheller College of Business\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A goal of ML@GT\u0026rsquo;s is to develop the next generation of AI pioneers who are creating new technology that is both socially and ethically responsible, and events like these are a great way to continue to have that conversation with our students,\u0026rdquo; said Desai.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe center plans to continue hosting events like this on a variety of topics.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWatch a recording of the talk at \u003Ca href=\u0022https:\/\/smartech.gatech.edu\/handle\/1853\/62034\u0022\u003Ehttps:\/\/smartech.gatech.edu\/handle\/1853\/62034\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"ML@GT faculty members hosted a conversation with students on fairness and bias in machine learning."}],"uid":"34773","created_gmt":"2019-11-21 19:48:18","changed_gmt":"2019-11-22 14:32:42","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-11-21T00:00:00-05:00","iso_date":"2019-11-21T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"629225":{"id":"629225","type":"image","title":"ML@GT faculty members Deven Desai, Swati Gupta, Judy Hoffman, and Rachel Cummings hosted a panel discussion with students on fairness and bias in machine learning.","body":null,"created":"1574365151","gmt_created":"2019-11-21 19:39:11","changed":"1574365151","gmt_changed":"2019-11-21 
19:39:11","alt":"Machine Learning at Georgia Tech faculty members","file":{"fid":"239634","name":"IMG_5275.jpg","image_path":"\/sites\/default\/files\/images\/IMG_5275.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_5275.jpg","mime":"image\/jpeg","size":729122,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_5275.jpg?itok=WKrnJrs-"}},"629227":{"id":"629227","type":"image","title":"Judy Hoffman presented on her research regarding analyzing fairness in computer vision systems.","body":null,"created":"1574365654","gmt_created":"2019-11-21 19:47:34","changed":"1574365654","gmt_changed":"2019-11-21 19:47:34","alt":"Judy Hoffman","file":{"fid":"239636","name":"IMG_5248.jpg","image_path":"\/sites\/default\/files\/images\/IMG_5248.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_5248.jpg","mime":"image\/jpeg","size":1042313,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_5248.jpg?itok=Cbw74XSn"}},"629226":{"id":"629226","type":"image","title":"Swati Gupta discussed the mathematics of bias and fairness during her presentation.","body":null,"created":"1574365586","gmt_created":"2019-11-21 19:46:26","changed":"1574365586","gmt_changed":"2019-11-21 19:46:26","alt":"Swati Gupta","file":{"fid":"239635","name":"IMG_5243.jpg","image_path":"\/sites\/default\/files\/images\/IMG_5243.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_5243.jpg","mime":"image\/jpeg","size":746746,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_5243.jpg?itok=0uOi-alC"}}},"media_ids":["629225","629227","629226"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"},{"id":"47223","name":"College of Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"628399":{"#nid":"628399","#data":{"type":"news","title":"Newly Endowed Chair Underscores Value of Computational Journalism","body":[{"value":"\u003Cp\u003EThe creator of Google News has endowed a new faculty chair position in computational journalism at Georgia Tech\u0026rsquo;s College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EKrishna Bharat\u003C\/strong\u003E \u0026ndash; Georgia Tech alumnus (Ph.D. CS 1996) and Distinguished Research Scientist at Google \u0026ndash;\u0026nbsp;announced his donation during a reception\u0026nbsp;at the College, held Oct. 31.\u0026nbsp;President \u003Cstrong\u003E\u0026Aacute;ngel Cabrera\u003C\/strong\u003E\u0026nbsp;and Dean of Computing \u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E attended the reception, along with members of the College\u0026#39;s Advisory Board, faculty, staff, and students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Given the relative maturity of the field and the promise ahead, I felt this was the right time to create an endowed chair in computational journalism. This is a way for me to give back to Georgia Tech, and also to help the College of Computing expand its research portfolio, and show leadership in this important area,\u0026rdquo; said Bharat, who also earned his Master of Science in Computer Science at Georgia Tech in 1993.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Cstrong\u003EKrishna A. 
Bharat Chair in Computational Journalism\u003C\/strong\u003E recognizes the College\u0026rsquo;s contributions to the field, which Georgia Tech is credited with creating in 2006. A nationwide search to fill the newly endowed chair position will begin in the coming months.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Computing is now such a part of so many other fields that we cannot innovate, we cannot research, and we cannot teach in isolation. Journalism is central to informing and engaging citizens in a free society. It is a field that has both changed and been changed by computing,\u0026rdquo; said Isbell, who also holds the John P. Imlay Jr. Chair in Computing.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The College is deeply thankful to Krishna Bharat and his wife \u003Cstrong\u003EKavita Thirumalai\u003C\/strong\u003E for supporting our efforts in computational journalism, and enabling our faculty and students to further impact this most important area.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EQ\u0026amp;A with Google\u0026nbsp;Distinguished Research Scientist Krishna Bharat\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EBharat sat down this week for a few questions about computational journalism and his own journey from Georgia Tech to the top of the tech world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGT Computing:\u003C\/strong\u003E What was your motivation for creating this endowed chair? Why do you want to associate your name with this research?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EKB:\u003C\/strong\u003E When I was a Ph.D. student at the College of Computing in the early 90s, I built a piece of software to aggregate online news and construct a personalized newspaper. That project broke new ground technically, and it taught me a lot about applying computing to the processing and analysis of news. 
It also launched me on a successful career path that led to Google and Google News.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the 25 years since then, that intersection between computing and news that I happened to explore has grown tremendously. Online news has exploded, and computing is used in every part of the news lifecycle and ecosystem. This field is now well established in practice, in a variety of institutions in the news space, but in academia somewhat less so.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGiven the relative maturity of the field and the promise ahead, I felt this was the right time to create an endowed chair in computational journalism. This is a way for me to give back to Georgia Tech, and also to help the College expand its research portfolio and show leadership in this important area.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGT Computing:\u003C\/strong\u003E Why do you think computational journalism is a field worth supporting at this point in time? Where do you think the field is going?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EKB:\u003C\/strong\u003E We are now poised at the beginning of a new AI revolution where machines can understand content and synthesize derivatives in powerful and useful ways. This is going to open up opportunities for research and innovation that will transform journalism going forward. Machine intelligence will help journalists scale their efforts to source stories and build compelling new narrative experiences based on data and in-depth reporting. Technology in newsrooms, platforms, and consumer apps will evolve how news content is packaged, distributed, monetized, personalized, and optimized for consumer satisfaction \u0026ndash; and publisher revenue.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile technology holds a lot of promise, it can also create challenges for publishers and society -- ranging from information overload and filter bubbles to disinformation and manipulation by outside actors. 
Publishers face business model challenges in a crowded and continuously evolving ecosystem. There is both a need to study these phenomena and craft solutions that will, in part, depend on computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlso, using computing as a social science tool, we can address and answer fundamental questions about the quality, diversity, and utility of the news that consumers get. We can assess its impact on our society and democracy by analyzing data at scale.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile a lot of the innovation is happening in technology companies and publishing houses, academia still has a big role to play. Many of the technologies that matter are pioneered in universities. Academia can focus broadly on the public good and take a long-term view of technology. They can innovate both on the practice and analysis of news, and bring expertise to bear from many disciplines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGT Computing:\u003C\/strong\u003E How did you get interested in computational journalism? Did working on Google News spark your interest?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EKB:\u003C\/strong\u003E I have always had a deep interest in the news. When I grew up in India we used to get multiple newspapers at home as well as local and foreign magazines. We would also listen to the news on the radio and television. The divergence in reporting between different channels and their complementary nature has always been fascinating to me.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI got a chance to apply computing to news when I was doing my Ph.D. at Georgia Tech. After this, though, I joined Google to work on web search. However, when the September 11th attacks happened, I became super interested in computing with the news again. My explorations led to Google News, a platform to crawl, index, and aggregate news to support search and automated headlines on Google. 
Our mission was to help users understand the news better by presenting them with a diverse set of articles on a given news story. That product has since grown internationally and provides consumer access to relevant news and traffic to publishers worldwide.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis certainly advanced my interest in computational journalism. I also co-teach a course on this topic at Stanford, where we have students work in interdisciplinary groups to apply computing to journalism challenges.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGT Computing:\u003C\/strong\u003E You mentioned your work at Stanford University and you also have a connection to Columbia University\u0026rsquo;s computational journalism program. What do you think Georgia Tech brings to the table that other schools might be missing?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EKB:\u003C\/strong\u003E I have a personal connection to Georgia Tech. I got my Ph.D. here and was inspired to work on projects that transformed my career.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBeyond this, however, one thing I found unique about Georgia Tech is its emphasis on interdisciplinary research and the collaborations it builds with an eye to the future. The GVU Center, which I was a part of, is a great example of this. (Professor Emeritus) Jim Foley brought together researchers in HCI, Animation, Robotics, Visualization, VR, Psychology, Industrial Engineering, Ubiquitous Computing, etc. to work together on projects that were trying to look ahead and invent how computing could integrate into our lives. We were one of the earliest groups to work on the WWW, starting in the first year of its existence. 
The center was extremely successful and the Institute has replicated this model many times since.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGT Computing:\u003C\/strong\u003E What advice would you give to students who are interested in learning and working at the intersection of computing and journalism?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EKB:\u003C\/strong\u003E One thing I have found is that since news touches everyone, every computer scientist has some intuition on how to apply computing to the news. What they often need is exposure to real data and use cases to test out their ideas and develop new ones.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFortunately, there are many avenues for that -- especially in an institute that encourages collaboration with practitioners. Working in interdisciplinary teams is important, teaming up with journalists or people with experience with the news. Also, I find that almost every subfield within computing seems to have relevance to computational journalism and the ability to contribute valuable technologies.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDeep learning, in particular, has direct applicability. As the AI technology used to understand text, audio, and video improves, you can think of ways to apply it to tasks of interest in the news lifecycle. There is a lot of low hanging fruit there. The Computation+Journalism symposium features interesting research that can serve as inspiration.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGT Computing:\u003C\/strong\u003E Finally, you did both an MS and a Ph.D. at GT Computing. Can you give an example of how your experience at Tech helped you build a successful career? What have you learned that you think could benefit current students?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EKB:\u003C\/strong\u003E One of the things I think proved really useful during my Ph.D. was the opportunity to meet with practitioners both in their environment and in ours. 
We would travel to Silicon Valley to meet with companies like Sun Microsystems and SGI and understand the problems they cared about. We also had a constant stream of visitors to our lab from industry, government and other universities to see demos of our prototypes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis constant feedback loop helped ground our research in reality and the needs of the real world. We had to articulate our vision and clarify how we were planning to make a difference to the state of the art.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen we were being naive in our approach or had blind spots, they would tell us and we would learn from it and course correct. I think that kind of feedback loop is worth seeking out. It helps you orient your research correctly and prepares you for a successful career when you graduate.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The creator of Google News, a GT Computing alumnus, is funding a new endowed faculty position in the College."}],"uid":"32045","created_gmt":"2019-10-31 15:06:58","changed_gmt":"2019-11-01 13:50:37","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-11-01T00:00:00-04:00","iso_date":"2019-11-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"628472":{"id":"628472","type":"image","title":"Krishna Bharat GT Computing Alum endowed chair announcement_casual","body":null,"created":"1572615603","gmt_created":"2019-11-01 13:40:03","changed":"1572615603","gmt_changed":"2019-11-01 13:40:03","alt":"Krishna Bharat GT Computing Alum, endowed chair announcement","file":{"fid":"239353","name":"Krishna 
Bharat_MG_8337.jpg","image_path":"\/sites\/default\/files\/images\/Krishna%20Bharat_MG_8337.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Krishna%20Bharat_MG_8337.jpg","mime":"image\/jpeg","size":266298,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Krishna%20Bharat_MG_8337.jpg?itok=ZQNXPV5l"}},"628475":{"id":"628475","type":"image","title":"GT Computing endowed chair announcement group","body":null,"created":"1572616027","gmt_created":"2019-11-01 13:47:07","changed":"1572616027","gmt_changed":"2019-11-01 13:47:07","alt":"Computing Professor Emeritus James Foley, Google Distinguished Research Scientist Krishna Bharat, President\u00a0\u00c1ngel Cabrera, and Dean of Computing Charles Isbell.","file":{"fid":"239357","name":"Krishna Bharat_MG_9868.jpg","image_path":"\/sites\/default\/files\/images\/Krishna%20Bharat_MG_9868.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Krishna%20Bharat_MG_9868.jpg","mime":"image\/jpeg","size":293421,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Krishna%20Bharat_MG_9868.jpg?itok=5x9fghSD"}},"628474":{"id":"628474","type":"image","title":"President Cabrera remarks GT Computing endowed chair annoucement","body":null,"created":"1572615758","gmt_created":"2019-11-01 13:42:38","changed":"1572615758","gmt_changed":"2019-11-01 13:42:38","alt":"President Cabrera ","file":{"fid":"239356","name":"Krishna Bharat_MG_8414.jpg","image_path":"\/sites\/default\/files\/images\/Krishna%20Bharat_MG_8414.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Krishna%20Bharat_MG_8414.jpg","mime":"image\/jpeg","size":243357,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Krishna%20Bharat_MG_8414.jpg?itok=rtbdYPiY"}}},"media_ids":["628472","628475","628474"],"groups":[{"id":"47223","name":"College of 
Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAnn Claycombe, Communications Director\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:ann.claycombe@cc.gatech.edu?subject=Bharat%20endowment\u0022\u003Eann.claycombe@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["ann.claycombe@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"628444":{"#nid":"628444","#data":{"type":"news","title":"Keep Forgetting Your Password? Try This Novel Virtual Authentication Technique","body":[{"value":"\u003Ch3\u003E\u003Cem\u003EFirst-person Virtual Maze Offers More Memorable, Harder-to Break Passwords for Infrequent Authentication\u003C\/em\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EWe\u0026rsquo;ve all been there. For the first time in months, you\u0026rsquo;ve been logged out of your social media account and need to log back in. The problem is it\u0026rsquo;s been so long since your last log in that you don\u0026rsquo;t remember your password. You try every combination of baby and pet name, sister\u0026rsquo;s birthday, childhood street address \u0026ndash; nothing works, and now you\u0026rsquo;re locked out.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIf only there was a better way to remember these passwords after extended periods of disuse.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELuckily, researchers at \u003Ca href=\u0022http:\/\/gatech.edu\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech\u003C\/a\u003E have come up with a novel solution to this longstanding problem, applying an old memory technique to new technology to offer users a more effective authentication method. 
Known as \u0026lsquo;the Memory Palace,\u0026rsquo; the new tool is a three-dimensional virtual labyrinth navigated in the first-person perspective.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn cases of infrequent authentication, the Memory Palace works in place of an account\u0026rsquo;s password. Users create their own personal path with multiple left or right turns through a maze that must then be recreated to log in to their account. If the user makes it through the maze, similar to the one found in the old Windows three-dimensional labyrinth screensaver, they gain access.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudies evaluating the technique showed that visual-spatial secrets were most memorable if navigated in the three-dimensional first-person perspective. They also showed that, in comparison to Android\u0026rsquo;s 9-dot pattern lock, the Memory Palace was significantly more memorable after one week, was harder to break through shoulder surfing (capturing passwords by looking over someone\u0026rsquo;s shoulders), and was not significantly slower to enter.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=I02XDR7Mg0\u0022\u003EVIDEO: Explore \u0026#39;The Memory Palace\u0026#39;\u003C\/a\u003E\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Humans have evolved with remarkably persistent and fast-imprinting spatial memories, owing in no small part to our nomadic history,\u0026rdquo; said \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Assistant Professor \u003Cstrong\u003ESauvik Das\u003C\/strong\u003E, the lead researcher on the project. \u0026ldquo;Many people can, for example, clearly visualize and mentally walk through their childhood homes, even if they haven\u0026rsquo;t stepped foot in it for decades. 
They may only need to be shown once or twice how to drive to a new part of a familiar city.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our key insight was simple: Why not co-opt this incredibly strong spatial memory system for infrequent authentication?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis visual-spatial authentication is based upon an old memory technique of the same name, also called the \u0026ldquo;method of loci.\u0026rdquo; That approach combines visualizations with spatial memory, familiar information about one\u0026rsquo;s environment, to quickly and efficiently recall information. World Memory champions have applied this technique in competition for years, associating vivid images along a specific path with digits, letters, or playing cards they are required to memorize. In fact, the technique dates all the way back to the ancient Greeks and Romans.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen developing their program, researchers focused on a few keys to their method. In addition to security against common attacks like random guessing or shoulder surfing, they needed the authentication secret to be memorable without much practice or reinforcement, and they needed it to be deployable to the public.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Users are unlikely to accept a solution that requires significant upfront training or effort,\u0026rdquo; said Das, an expert in a field dubbed social cybersecurity that examines social norms that impact the adoption or rejection of security techniques. \u0026ldquo;Also, the solution should be cost-effective and not require specialized hardware. Many authentication solutions have been proposed, but most fail to be widely adopted for these reasons.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExisting solutions fall short of these requirements. Biometrics, like a thumbprint or facial recognition, require specialized hardware that can be expensive for infrequent use cases. 
PINs and graphical passwords have problems in long-term memorability without frequent reinforcement, or are otherwise vulnerable to shoulder surfing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The Memory Palace addresses each of these concerns with a proven memory technique that can hold up over time but is not easily stolen,\u0026rdquo; Das said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDas provided a handful of potential instances of infrequent authentication. Perhaps a session persists for a long period of time, like social media accounts, or a user must log in on a different device than normal, like a Netflix account on a web browser versus a smart TV. Other situations include occasionally accessed resources, like a conference room secured with a smart lock, or a fallback authentication method where a secondary secret is needed to recover access to an account whose primary secret has been compromised.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo deploy to the public, an app could implement the Memory Palace as a means of authenticating users. Alternatively, an operating system like Android could implement it as a means of authenticating into a device and automatically handle authenticating into any existing apps on the device.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work was presented in a paper titled \u003Cem\u003E\u003Ca href=\u0022https:\/\/sauvikdas.com\/uploads\/paper\/pdf\/22\/file.pdf\u0022 target=\u0022_blank\u0022\u003EThe Memory Palace: Exploring Visual-Spatial Paths for Strong, Memorable, Infrequent Authentication\u003C\/a\u003E\u003C\/em\u003E (Sauvik Das, David Lu, Taehoon Lee, Joanne Lo, Jason I. Hong), at the \u003Ca href=\u0022https:\/\/uist.acm.org\/uist2019\/\u0022 target=\u0022_blank\u0022\u003EACM Symposium on User Interface Software and Technology\u003C\/a\u003E (UIST 2019), which was held\u0026nbsp;Oct. 
20-23 in New Orleans.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"This first-person virtual maze offers more memorable, harder-to-break passwords for infrequent authentication."}],"uid":"33939","created_gmt":"2019-10-31 18:40:16","changed_gmt":"2019-10-31 18:40:16","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-31T00:00:00-04:00","iso_date":"2019-10-31T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"628443":{"id":"628443","type":"image","title":"The Memory Palace","body":null,"created":"1572547175","gmt_created":"2019-10-31 18:39:35","changed":"1572547175","gmt_changed":"2019-10-31 18:39:35","alt":"The Memory Palace - A person navigates a virtual maze on a smartphone","file":{"fid":"239338","name":"Screen Shot 2019-10-31 at 2.38.07 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-10-31%20at%202.38.07%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-10-31%20at%202.38.07%20PM.png","mime":"image\/png","size":434428,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-10-31%20at%202.38.07%20PM.png?itok=utXQSFpn"}}},"media_ids":["628443"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182941","name":"cc-research; ic-cybersecurity; ic-hcc"}],"core_research_areas":[{"id":"145171","name":"Cybersecurity"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid 
Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"628437":{"#nid":"628437","#data":{"type":"news","title":"Opportunities for Impact: Startup Zyrobotics Helped Ayanna Howard Reach More People","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E was not thinking about starting a business. Working as a professor in \u003Ca href=\u0022http:\/\/gatech.edu\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech\u003C\/a\u003E\u0026rsquo;s \u003Ca href=\u0022http:\/\/ece.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003ESchool of Electrical and Computer Engineering\u003C\/a\u003E (ECE) in 2013, her focus was on her research into assistive robotics and therapy gaming applications for children, not launching a startup outside of her lab.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDoing research in an environment like Georgia Tech\u0026rsquo;s, however, where entrepreneurship and risk-taking are not only encouraged but required, has a way of making even the clearest of plans veer off in unforeseen directions. Thus, out of her lab came \u003Ca href=\u0022http:\/\/zyrobotics.com\/\u0022 target=\u0022_blank\u0022\u003EZyrobotics\u003C\/a\u003E, a technology company that develops educational technologies for children with differing abilities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor the past six years, Zyrobotics has developed personalized technologies that stimulate social, cognitive, and motor skill development using fun and educational applications. Now, there are five products: three hardware and two software. The software comprises about 15 different programs in math, robotics, and coding education.
There have been over 600,000 downloads and about 80 distributors using or distributing the products in clinics and school systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As researchers, we\u0026rsquo;re not only concerned with development,\u0026rdquo; said Howard, now the Chair of Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC). \u0026ldquo;We want to know the impact. What Zyrobotics has done is allowed the research we were doing in the lab to touch so many more people than we otherwise would have done.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003EA proof of concept\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EIt started as the work of one of her graduate students in ECE. \u003Cstrong\u003EHae Won Park\u003C\/strong\u003E was finishing up her Ph.D. when she came to a bit of a crossroads. Trying to decide whether to pursue a career in academia or to, perhaps, go into industry, she looked to Howard for some guidance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There was an opportunity with the \u003Ca href=\u0022https:\/\/www.nsf.gov\/\u0022 target=\u0022_blank\u0022\u003ENational Science Foundation\u003C\/a\u003E \u003Ca href=\u0022https:\/\/www.nsf.gov\/news\/special_reports\/i-corps\/\u0022 target=\u0022_blank\u0022\u003EI-Corps grant\u003C\/a\u003E where you have to write a proposal, put your ideas down, defend to a program manager, et cetera,\u0026rdquo; Howard said. \u0026ldquo;It seemed like a good program that would allow her to experience all of these aspects in a low-risk way. If it didn\u0026rsquo;t work out, oh well.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPark\u0026rsquo;s research examined methods for utilizing touchscreen interfaces for accessible human-robot interaction. 
It was a project called \u003Ca href=\u0022http:\/\/tabaccess.com\/\u0022 target=\u0022_blank\u0022\u003ETabAccess\u003C\/a\u003E, an assistive technology that provides alternative switch inputs to control smartphones and tablets for users with motor impairments.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=z3q5C2yTxU8\u0022 target=\u0022_blank\u0022\u003EVIDEO: How does TabAccess work?\u003C\/a\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EThroughout the course of customer discovery, where Park and Howard spoke with various professionals and potential users, Howard said she realized just how big a difference the technology could make outside of the lab.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A year later, it was enough of a concept,\u0026rdquo; Howard said. \u0026ldquo;It looked like we could design something that made sense. The company was founded, and then it went off and did its own thing.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003EA broader impact\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EIt was the impact that led Howard to push forward on the project as a startup. At the time, she had been doing robotics educational STEM camps focused on children with special needs. Students, who had primarily visual and motor impairments, were taught how to code robots.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe camps were successful, but the touch points, as Howard called them, were relatively few.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The touch points were just the kids I was able to recruit along with my clinical collaborator,\u0026rdquo; she said. \u0026ldquo;My touch point was: If I show up, I touched. If I didn\u0026rsquo;t, there was nothing going on.
Whereas, in customer discovery, you weren\u0026rsquo;t necessarily speaking with the people you were impacting \u0026ndash; the kids \u0026ndash; but you were speaking with the teachers who interact with kids.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESuddenly, the impact in her mind shifted from the 1-to-1 relationship of STEM camps to 1-to-100, 1-to-1,000, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;My workshop on a good day had maybe 10 kids,\u0026rdquo; she said. \u0026ldquo;I did these in a good year maybe twice. So, maybe like 20 kids in a year. You can\u0026rsquo;t possibly do what we\u0026rsquo;re doing now without Zyrobotics.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u0026#39;For students, (entrepreneurship) is a no-brainer\u0026#39;\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EHoward said it\u0026rsquo;s this mindset that sets Georgia Tech apart.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Students have always thought about the impact of what they\u0026rsquo;re doing,\u0026rdquo; she said. \u0026ldquo;Socially-responsible engineering. That\u0026rsquo;s always been the core mission. Being an entrepreneur has this aspect of knowing the exact problems you want to attack, versus maybe going into industry and working on someone else\u0026rsquo;s.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s important, she said, that academics continue to have a place in the technological market. If it is left to major tech conglomerates, we are up against groupthink and a reluctance to take necessary risks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;For students, it\u0026rsquo;s a no-brainer to engage in some entrepreneurial pursuit,\u0026rdquo; she said. \u0026ldquo;That mindset of thinking about problems and impacts allows you to view it differently.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We need to be willing to make mistakes. The probability is that your startup will fail. But students understand that and still do it.
We need to get rid of that fear of failure, or else we\u0026rsquo;ll never make significant change.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"For the past six years, Zyrobotics has developed personalized technologies that stimulate social, cognitive, and motor skill development using fun and educational applications."}],"uid":"33939","created_gmt":"2019-10-31 18:18:45","changed_gmt":"2019-10-31 18:18:45","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-31T00:00:00-04:00","iso_date":"2019-10-31T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"628356":{"id":"628356","type":"image","title":"Ayanna Howard\u0027s Zyrobotics","body":null,"created":"1572456199","gmt_created":"2019-10-30 17:23:19","changed":"1572456199","gmt_changed":"2019-10-30 17:23:19","alt":"Ayanna Howard\u0027s Zyrobotics","file":{"fid":"239303","name":"ayanna_zyrobotics.png","image_path":"\/sites\/default\/files\/images\/ayanna_zyrobotics.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/ayanna_zyrobotics.png","mime":"image\/png","size":150863,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/ayanna_zyrobotics.png?itok=BOCKp8ID"}}},"media_ids":["628356"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182940","name":"cc-research; ic-ai-ml; ic-robotics; ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"627489":{"#nid":"627489","#data":{"type":"news","title":"In Memoriam: Scholarship Honors Alumnus Sanat Moningi","body":[{"value":"\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cp\u003EWhen \u003Cstrong\u003ESanat Moningi\u003C\/strong\u003E died in 2018 at the age of 24, his friends and family were not the only ones who felt like the world lost a unique spirit.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEveryone he worked with, helped, or even spoke to knew that there was never going to be another Sanat.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe was one of those people that you couldn\u0026rsquo;t describe with a word or two. His qualities were unlike most. His actions, thoughts, and words made an impact on this world that many cannot do.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe had three separate memorial services: one held by his family in West Virginia, where he grew up, one at Georgia Tech, where he went to college, and one in San Francisco, where he moved to work afterward. There were many different gatherings and events to honor Sanat.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That paints a true picture of how many people he impacted,\u0026rdquo; said \u003Cstrong\u003ERyan Merklen\u003C\/strong\u003E, who knew Sanat from their time together in Chi Phi. 
\u0026ldquo;He always asked what he could do to help those around him.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn fact, everyone who knew Sanat says the same things about him. The words brilliant, caring, trustworthy, reliable, hilarious, and beautiful were used to describe him. One crucial thing that set Sanat apart was that he always, from his earliest childhood, knew exactly what he wanted to do in the world: help others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo honor the caring person he was, his family is endowing the Sanat Moningi Memorial Scholarship, which is being offered for the first time this fall. The scholarship, worth $4,000, will support a student who shares Sanat\u0026rsquo;s commitment to hard work and to serving their community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/gatech.co1.qualtrics.com\/jfe\/form\/SV_6WPCIhCtYndWNyR\u0022 target=\u0022_blank\u0022\u003E[APPLY: Sanat Moningi Memorial Scholarship Application Deadline is Nov. 3]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We think Sanat would be proud of us for doing this for Georgia Tech students,\u0026rdquo; said his sister, \u003Cstrong\u003EShalini Moningi\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This scholarship represents who he was.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EGrowing Up Generous\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003ESanat\u0026rsquo;s burning curiosity and selfless qualities were evident at a young age. He offered to build a helper robot for his mother, Dr. Prasuna Jami, so that she could see more patients and spend more time with her children.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Mom, I don\u0026rsquo;t want to work like you, all day and all night,\u0026rdquo; she remembers him saying.
\u0026ldquo;I want to change the world.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEven at school, Sanat was noticed for his selflessness. In 2011, he attended the American Legion\u0026rsquo;s Mountaineer Boys\u0026rsquo; State program and won the Frank Taylor, Jr. Award for his enthusiastic interest in the law and for displaying high moral character with honor, respect, and integrity.\u003C\/p\u003E\r\n\r\n\u003Cblockquote\u003E\r\n\u003Cp\u003E\u0026quot;San Francisco was blessed to have Sanat for the time we did. He contributed hundreds of hours of volunteer time applying his skills to help others. His contributions will surely impact others for years to come.\u0026quot; - \u003Cstrong\u003EJoy Bonaguro, City of San Francisco chief data officer\u003C\/strong\u003E\u003C\/p\u003E\r\n\u003C\/blockquote\u003E\r\n\r\n\u003Cp\u003EWhen his sister was having a tough time adjusting to college, Sanat decided to cheer her up.\u003C\/p\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cp\u003E\u0026ldquo;He planned a complete surprise birthday party with our family and friends,\u0026rdquo; Shalini Moningi said. \u0026ldquo;I still don\u0026rsquo;t know how \u0026mdash; I mean, he was in eighth grade, he didn\u0026rsquo;t even have a car.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It really meant a lot to me,\u0026rdquo; she said. \u0026ldquo;He was a little boy genius, but he was also a lot more. He really cared for people.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESanat always made sure to make everyone as happy as they could ever be.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I remember when I was about 6 or 7, at a family gathering everyone was having fun and all the kids were older than me so they left me out,\u0026rdquo; his cousin Meenal explained. 
\u0026ldquo;As I was sitting in the corner bored, Sanat comes over to me. He starts making jokes and playing with me. Though he was 6 years older than me, he made sure I was having the best time I could have.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EThe Tech Years\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EIt was obvious that Sanat blossomed at Georgia Tech, both socially and academically. He was the top student in his class and was named the Outstanding Freshman in Computing after his first year. In 2014, he won the ConocoPhillips Innovation Challenge before graduating with honors in 2015. The awards Sanat earned throughout his life are just symbols of the person he was.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter his first semester, he joined Chi Phi. His parents were suspicious of fraternities at first, but Sanat\u0026rsquo;s enthusiasm and success changed their minds.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;m very impressed with how much support his friends gave him,\u0026rdquo; Dr. Jami said. In return, Sanat gave a lot of his time and talents as the fraternity\u0026rsquo;s philanthropy chair. In his senior year, he won a national award from Chi Phi for his leadership and altruism.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;He set a new standard,\u0026rdquo; said Merklen. \u0026ldquo;He connected us to the Boys and Girls Club, to Habitat for Humanity. He encouraged us to always try to place ourselves in our community.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESanat\u0026rsquo;s drive to help also took him into less conventional channels. He spent time tutoring a local high school student in the basics of computing. One Thanksgiving, he and a friend were grabbing dinner when they ran into a homeless man. They brought him back to their dorm and shared their food.
This speaks to the caring man Sanat was and the love he had for others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;He was very empathetically aware,\u0026rdquo; Merklen said. \u0026ldquo;It bothered him when he saw someone he couldn\u0026rsquo;t help.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EThe Real World\u003C\/strong\u003E\u003C\/h4\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cp\u003EAfter graduation, Sanat Moningi moved to San Francisco for a job with Salesforce, where he was quickly promoted to the position of product owner. He moved in with another Salesforce employee, Ryan Flood, and their shared house became the center of a vibrant social scene.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We had a lot of barbecues,\u0026rdquo; Flood said. \u0026ldquo;Sanat would invite anyone and everyone.\u0026rdquo; Once, a friend showed up at a barbecue in a suit, having come straight from a work function. The next thing Flood saw was that Sanat had changed into a suit to make his friend feel welcome.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe found time to do good as a member of \u003Ca href=\u0022https:\/\/codeforsanfrancisco.org\u0022 target=\u0022_blank\u0022\u003ECode for San Francisco\u003C\/a\u003E, a nonprofit that finds ways to use technology to improve life in the city. Sanat co-founded the nonprofit\u0026rsquo;s Data Science Working Group, which worked on issues including energy efficiency and housing approvals.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;A couple of years into that, we decided we wanted to do something in the government and politics space full-time,\u0026rdquo; said Catherine Zhang, a fellow working group member.
The two went on to found \u003Ca href=\u0022https:\/\/voterly.com\u0022 target=\u0022_blank\u0022\u003EVoterly\u003C\/a\u003E, a nonprofit that provided data services to local political campaigns.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThey were a couple of months into their new venture when Sanat died accidentally and unexpectedly on April 21, 2018. More than a year later, his parents still hear from people who were touched by Sanat\u0026rsquo;s kindness.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;He was so intelligent, so successful,\u0026rdquo; said his sister Shalini. \u0026ldquo;But the best word to describe him would be caring.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESanat had been honing his data science skills since college, using his knowledge and talent to help the homeless throughout the nation. As part of his nonprofit work, he created data science working groups in San Francisco to better serve the community, as well as projects addressing the environment and the world at large.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EMoving Forward\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003ESanat loved Georgia Tech, and Sanat loved to help other people. He truly made a difference, and to his family, endowing a scholarship in his memory to honor his work, dedication, and love just seemed right.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/gatech.co1.qualtrics.com\/jfe\/form\/SV_6WPCIhCtYndWNyR\u0022 target=\u0022_blank\u0022\u003EThe Sanat Moningi Memorial Scholarship\u003C\/a\u003E is for students with at least a 3.0 GPA and a drive to use technology to improve society and help others.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Sanat wanted to use his intelligence and technical skills to do good for society,\u0026rdquo; his mother said.
\u0026ldquo;We are looking for someone with a passion to create who also wants to serve society in a creative way.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJami said that applicants should know a few other things about Sanat.\u003C\/p\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cdiv\u003E\r\n\u003Cp\u003E\u0026ldquo;He was selfless, and he never cared about publicity,\u0026rdquo; she said. \u0026ldquo;He was goofy sometimes, and other times he was hilarious. He saw problems and solved them on a large scale. He wanted to make a lasting impact in this world and that is just what he did.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;He wanted to use his intelligence to do some good.\u0026rdquo;\u003C\/p\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n\u003C\/div\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"College of Computing announces the Sanat Moningi Memorial Scholarship. 
The scholarship, worth\u00a0$4,000, will support a student who shares Sanat\u2019s commitment to hard work and community."}],"uid":"34540","created_gmt":"2019-10-11 14:20:29","changed_gmt":"2019-10-22 00:52:15","author":"Kristen Perez","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-17T00:00:00-04:00","iso_date":"2019-10-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"627490":{"id":"627490","type":"image","title":"Sanat Moningi Headshot","body":null,"created":"1570803823","gmt_created":"2019-10-11 14:23:43","changed":"1570803823","gmt_changed":"2019-10-11 14:23:43","alt":"Sanat Moningi stands outside in a blue blazer with a tie smiling.","file":{"fid":"238914","name":"SanatMoningi.jpg","image_path":"\/sites\/default\/files\/images\/SanatMoningi.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/SanatMoningi.jpg","mime":"image\/jpeg","size":215728,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/SanatMoningi.jpg?itok=udHovrZO"}}},"media_ids":["627490"],"related_links":[{"url":"https:\/\/gatech.co1.qualtrics.com\/jfe\/form\/SV_6WPCIhCtYndWNyR","title":"Sanat Monongi Scholarship Application"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182639","name":"sanat"},{"id":"182640","name":"memorial fund"},{"id":"654","name":"College of Computing"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAnn Claycombe\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Director\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["ann.claycombe@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"627758":{"#nid":"627758","#data":{"type":"news","title":"Students Cultivate Community at Grace Hopper","body":[{"value":"\u003Cp\u003EInfinite possibility is the general theme of the \u003Ca href=\u0022https:\/\/ghc.anitab.org\/\u0022 target=\u0022_blank\u0022\u003EGrace Hopper Celebration\u003C\/a\u003E (GHC), and College of Computing women were at the center of it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEighty students represented the College, including fourth-year computer science student \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/jhillika\/\u0022 target=\u0022_blank\u0022\u003EJhillika Kumar\u003C\/a\u003E\u003C\/strong\u003E, who was given the \u003Ca href=\u0022https:\/\/anitab.org\/awards-grants\/abie-awards\/student-of-vision-abie-award\/\u0022 target=\u0022_blank\u0022\u003EStudent of Vision Abie Award\u003C\/a\u003E and delivered part of the opening keynote.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKumar helped kick off the three-day conference with her speech on improving employment opportunities for people with autism through her startup, \u003Ca href=\u0022https:\/\/www.mentra.me\/\u0022 target=\u0022_blank\u0022\u003EAxisAbility\u003C\/a\u003E.
The project has shown her how computing can reframe common misconceptions and empower individuals.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Together we can turn disability into a world filled with infinite possibility,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E[\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/625706\/startup-connects-job-seekers-autism-new-opportunities\u0022 target=\u0022_blank\u0022\u003ERELATED: Startup Connects Job Seekers with Autism to New Opportunities\u003C\/a\u003E]\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference has become the largest gathering of women in technology with a \u003Ca href=\u0022https:\/\/www.nbcnews.com\/tech\/tech-news\/turning-tables-women-tech-job-interviews-have-questions-their-own-n1064336\u0022 target=\u0022_blank\u0022\u003Erecord-breaking\u003C\/a\u003E 26,000 attendees at this year\u0026rsquo;s event in Orlando, Florida. Organized by the \u003Ca href=\u0022https:\/\/anitab.org\/\u0022 target=\u0022_blank\u0022\u003EAnita Borg Institute for Women and Technology\u003C\/a\u003E and the \u003Ca href=\u0022https:\/\/www.acm.org\/\u0022 target=\u0022_blank\u0022\u003EAssociation for Computing Machinery\u003C\/a\u003E, GHC brings together industry leaders, pioneering academics, and students for keynotes, technical and career development panels, mentoring sessions, and a career fair.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith scholarships provided by the College, 36 undergraduate students, 18 master\u0026rsquo;s students, 2 Ph.D. 
students, and 24 \u003Ca href=\u0022http:\/\/www.omscs.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EOnline Master of Science in Computer Science (OMSCS)\u003C\/a\u003E students attended.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFaculty also joined, including \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) Chair \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/people\/ayanna-howard\u0022 target=\u0022_blank\u0022\u003EAyanna Howard\u003C\/a\u003E \u003C\/strong\u003Eand IC Senior Research Scientist \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/carrie-bruce\u0022 target=\u0022_blank\u0022\u003ECarrie Bruce\u003C\/a\u003E\u003C\/strong\u003E. Howard led mentoring sessions and moderated a panel on design inclusion in artificial intelligence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECollege staff from \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/student-life\/gt-computing-community\/oec-office\u0022 target=\u0022_blank\u0022\u003Ethe Office of Outreach, Enrollment and Community\u003C\/a\u003E hosted a recruiting booth at the career fair as a gold-level sponsor of the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlthough many students come to find internship and job opportunities, one of the conference\u0026rsquo;s strongest attractions for attendees is the community it provides.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/niazpour.weebly.com\/\u0022 target=\u0022_blank\u0022\u003ENiaz Pour\u003C\/a\u003E\u003C\/strong\u003E, a third-year computational media student, believed seeing different examples of what women do in the field could influence her career in user experience (UX) design.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I really wanted to see the diversity of women in computing,\u0026rdquo; she said. \u0026ldquo;I don\u0026rsquo;t want to work somewhere that\u0026rsquo;s only designers. 
I\u0026rsquo;m a Persian in UX design, so I wanted to find someone on a similar path, and I met two women on the first day.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGHC was also a networking opportunity for many students, such as OMSCS student \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/shijie-shi\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EShijie Shi\u003C\/strong\u003E\u003C\/a\u003E. As a financial analyst for the World Bank, she doesn\u0026rsquo;t get to interact with many women in computing on a day-to-day basis.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I wanted to find a community because working in finance, sometimes I forget I have another part of me that does computing,\u0026rdquo; she said. \u0026ldquo;It\u0026rsquo;s very inspiring to see there are computer scientists making amazing things.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Eighty students represented the College, including fourth-year computer science student Jhillika Kumar, who was given the Student of Vision Abie Award and delivered part of the opening keynote."}],"uid":"34541","created_gmt":"2019-10-17 22:05:06","changed_gmt":"2019-10-17 22:08:43","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-17T00:00:00-04:00","iso_date":"2019-10-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"627759":{"id":"627759","type":"image","title":"GHC Group Photo 2019","body":null,"created":"1571350097","gmt_created":"2019-10-17 22:08:17","changed":"1571350097","gmt_changed":"2019-10-17 22:08:17","alt":"Student group
photo","file":{"fid":"239036","name":"GHC.jpg","image_path":"\/sites\/default\/files\/images\/GHC.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/GHC.jpg","mime":"image\/jpeg","size":740457,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/GHC.jpg?itok=_XWHgk2G"}}},"media_ids":["627759"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:tess.malone@cc.gatech.edu\u0022\u003Etess.malone@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["tess.malone@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"627578":{"#nid":"627578","#data":{"type":"news","title":"Jill Watson Now Fielding Questions on New AI-enabled Research Tool","body":[{"value":"\u003Cp\u003EA new artificially intelligent (AI) research tool that harnesses the power of the Smithsonian Institution\u0026rsquo;s massive\u0026nbsp;\u003Ca href=\u0022https:\/\/eol.org\u0022 target=\u0022_blank\u0022\u003EEncyclopedia of Life\u003C\/a\u003E\u0026nbsp;(EOL) ecological database debuted this semester at Georgia Tech.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/vera.cc.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003Evirtual ecological research assistant, known as VERA\u003C\/a\u003E, was developed at Georgia Tech and enables students to perform virtual experiments to explain existing ecological systems or to predict possible outcomes based on variables they input into the 
tool.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003EGetting to Know VERA\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;People using VERA have access to the EOL and can test a hypothesis using countless organisms, make as many changes to variables as they want, and study the effects on any ecosystem through real-time modeling,\u0026rdquo; said\u0026nbsp;\u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/sungeun-an-89730063\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ESungeun An\u003C\/strong\u003E, human-centered computing Ph.D. student\u003C\/a\u003E and lead developer of the AI system.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is a unique opportunity that doesn\u0026rsquo;t exist anywhere else.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlthough the EOL has extensive data entries for more than two million species, An says that VERA has an intuitive user interface and design that is relatively easy to use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Students don\u0026rsquo;t need extensive scientific knowledge or programming and math skills to use VERA. 
They can build a conceptual model with simple visual cues on the computer screen, such as dragging elements or selecting input options,\u0026rdquo; said An.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003ECombining the Strength\u0026nbsp;of Two AIs\u003C\/strong\u003E\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EHowever, to get the most out of VERA, An says there can be a learning curve.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo flatten the\u0026nbsp;curve and help students optimize their experience with VERA, An and her fellow researchers turned to\u0026nbsp;Jill Watson, the \u003Ca href=\u0022https:\/\/www.wsj.com\/articles\/if-your-teacher-sounds-like-a-robot-you-might-be-on-to-something-1462546621\u0022 target=\u0022_blank\u0022\u003Efamed AI-enabled virtual teaching assistant (TA) that premiered in 2016\u003C\/a\u003E\u0026nbsp;supporting Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/www.omscs.gatech.edu\u0022 target=\u0022_blank\u0022\u003Eonline Master of Science in Computer Science (OMSCS) program\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJill Watson\u0026nbsp;answers student questions about VERA via the collaborative messaging app, Slack. 
These range from technical questions about the tool \u0026ndash; \u0026ldquo;How do I add a new project?\u0026rdquo; \u0026ndash; to subject matter questions \u0026ndash; \u0026ldquo;What is consumption rate?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Leveraging the Jill Watson virtual TA and VERA together is a powerful demonstration of how to scale technology to serve more populations and provide access to the world\u0026rsquo;s scientific knowledge,\u0026rdquo; said\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/ashok-goel\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EAshok Goel\u003C\/strong\u003E, professor of Interactive Computing\u003C\/a\u003E and director of the \u003Ca href=\u0022http:\/\/dilab.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EDesign \u0026amp; Intelligence Lab, which created both AI agents\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECombining the strength of the two AI agents, said Goel, is part of \u003Ca href=\u0022https:\/\/emprize.gatech.edu\u0022 target=\u0022_blank\u0022\u003Ean intentional approach to rethinking instructional design for online learning\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;VERA is a significant advancement for artificial intelligence in science education and meant to be used anywhere by anyone interested in science exploration, so making it as accessible as possible is key to the system\u0026rsquo;s adoption,\u0026rdquo; Goel said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudents and others using VERA \u0026ndash; it\u0026rsquo;s also publicly available\u0026nbsp;and linked on the Smithsonian\u0026rsquo;s EOL homepage \u0026ndash; can learn more through\u0026nbsp;a \u003Ca href=\u0022https:\/\/www.youtube.com\/playlist?list=PLwXogtSxXaLCP4AXU_VFUP92TVmotGLMv\u0022 target=\u0022_blank\u0022\u003Evideo series produced by Georgia Tech\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe videos demonstrate VERA\u0026rsquo;s capabilities 
using kudzu growth in the southeastern United States as an example. The videos are co-hosted by\u0026nbsp;\u003Ca href=\u0022http:\/\/www.emilygweigelphd.com\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EEmily Weigel\u003C\/strong\u003E, School of Biological Sciences\u003C\/a\u003E instructor for the biology course using VERA, and \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/fac\/Spencer.Rugaber\/\u0022 target=\u0022_blank\u0022\u003ECollege of Computing faculty member \u003Cstrong\u003ESpencer Rugaber\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVERA research is funded by a grant from the National Science Foundation, #NSF-1636848.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information about Georgia Tech\u0026#39;s emPRIZE, contact\u0026nbsp;\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu?subject=Jill%20Watson%20Helping%20With%20Questions%20on%20New%20Research%20AI\u0022\u003EJoshua Preston, research communications manager\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A new AI-enabled research tool powered by the Smithsonian debuted in an undergraduate biology class at Georgia Tech this semester."}],"uid":"32045","created_gmt":"2019-10-14 19:31:05","changed_gmt":"2019-10-15 23:47:10","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-14T00:00:00-04:00","iso_date":"2019-10-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"627580":{"id":"627580","type":"image","title":"Jill Watson 2019 AI Teaching Assistant","body":null,"created":"1571083583","gmt_created":"2019-10-14 20:06:23","changed":"1571083583","gmt_changed":"2019-10-14 20:06:23","alt":"Stock image of personified female AI looking at reflection in 
mirror","file":{"fid":"238946","name":"093086626-technology-and-science-abstrac.jpeg","image_path":"\/sites\/default\/files\/images\/093086626-technology-and-science-abstrac.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/093086626-technology-and-science-abstrac.jpeg","mime":"image\/jpeg","size":638699,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/093086626-technology-and-science-abstrac.jpeg?itok=rnZrpPTP"}},"627584":{"id":"627584","type":"image","title":"Sungeun An - Ph.D. Human-Centered Computing Student","body":null,"created":"1571086076","gmt_created":"2019-10-14 20:47:56","changed":"1571086076","gmt_changed":"2019-10-14 20:47:56","alt":"Sungeun An, Georgia Tech human-centered computing PhD student","file":{"fid":"238949","name":"Sungeun An_human-centered-computingPhD.-student-2019.jpg","image_path":"\/sites\/default\/files\/images\/Sungeun%20An_human-centered-computingPhD.-student-2019.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Sungeun%20An_human-centered-computingPhD.-student-2019.jpg","mime":"image\/jpeg","size":51969,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Sungeun%20An_human-centered-computingPhD.-student-2019.jpg?itok=5zUyAGmu"}}},"media_ids":["627580","627584"],"related_links":[{"url":"https:\/\/emprize.gatech.edu","title":"Georgia Tech\u2019s emPRIZE: AI-Powered Learning. Anytime. 
Anywhere."}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"66442","name":"MS HCI"},{"id":"50876","name":"School of Interactive Computing"},{"id":"1299","name":"GVU Center"}],"categories":[],"keywords":[{"id":"2556","name":"artificial intelligence"},{"id":"9167","name":"machine learning"},{"id":"182669","name":"VERA"},{"id":"169183","name":"Jill Watson"},{"id":"182670","name":"goel"},{"id":"168873","name":"Smithsonian"},{"id":"182671","name":"encyclopedia of life"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Communications Manager\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=Jill%20Watson%20Answering%20Questions%20on%20Research%20AI%20Tool\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJoshua Preston, Research Communications Manager\u003Cbr \/\u003E\r\n\u003Ca href=\u0022mailto: jpreston@cc.gatech.edu\u0022\u003Ejpreston@cc.gatech.edu\u003C\/a\u003E\u003Cbr \/\u003E\r\n\u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"627425":{"#nid":"627425","#data":{"type":"news","title":"Premier Computer Vision Conference Accepts 10 Georgia Tech Papers","body":[{"value":"\u003Cp\u003EFrom helping chair umpires make better line calls in professional tennis to teaching robots to \u0026ldquo;see\u0026rdquo;, the field of computer vision continues to expand and become a part of people\u0026rsquo;s everyday lives. 
A subfield of artificial intelligence, computer vision teaches computers to understand and interpret the visual world through photos or videos.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/iccv2019.thecvf.com\/\u0022\u003EInternational Conference on Computer Vision (ICCV)\u003C\/a\u003E takes place from Oct. 27 to Nov. 2 and brings together researchers from Georgia Tech and around the world to discuss recent breakthroughs and research in the field. Researchers in the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E have ten accepted papers in the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003ESchool of Interactive Computing (IC)\u003C\/a\u003E and ML@GT associate professor \u003Cstrong\u003EDevi Parikh\u003C\/strong\u003E leads with seven research papers. Her work spans from \u003Ca href=\u0022https:\/\/www.voguebusiness.com\/technology\/facebook-ai-fashion-styling\u0022\u003Eusing artificial intelligence (AI) to help people make more stylish outfit choices\u003C\/a\u003E to \u003Ca href=\u0022http:\/\/bit.ly\/2ndC6qv\u0022\u003Eembodied visual recognition\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIC assistant professor \u003Cstrong\u003EJudy Hoffman \u003C\/strong\u003Eand professor \u003Cstrong\u003EJames Rehg\u003C\/strong\u003E are 2019 area chairs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As the computer vision field continues to expand and create novel ideas, conferences like ICCV become increasingly important. There was a lot of impressive work submitted to the conference this year. 
With computer vision being one of ML@GT\u0026rsquo;s strongest areas, I\u0026rsquo;m thrilled to see the center\u0026rsquo;s presence in this premier conference,\u0026rdquo; said Hoffman.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther work from Georgia Tech includes papers on \u003Ca href=\u0022https:\/\/mlatgt.blog\/2019\/09\/10\/overcoming-large-scale-annotation-requirements-for-understanding-videos-in-the-wild\/\u0022\u003Elessening the need for additional annotation in videos\u003C\/a\u003E, making vision and language models more grounded, and \u003Ca href=\u0022http:\/\/bit.ly\/2ndC6qv\u0022\u003Eagents learning to move to better perceive objects.\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026quot;Having a paper accepted, especially as an oral presentation, especially in a top conference gives me lots of confidence and encouragement for my Ph.D. research. I can\u0026#39;t wait to attend ICCV to share my work, talk with other talented people, and learn other interesting topics in both academic and industrial areas,\u0026quot; said \u003Cstrong\u003EMin-Hung Chen\u003C\/strong\u003E, a sixth-year electrical and computer engineering Ph.D. 
student.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOrganized by IEEE, ICCV is one of the premier international computer vision conferences and will take place at the COEX Convention Center in Seoul, South Korea.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information on ML@GT\u0026rsquo;s involvement with the conference, visit \u003Ca href=\u0022http:\/\/bit.ly\/339BYaS\u0022\u003Ehttp:\/\/bit.ly\/339BYaS\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Machine Learning Center will make a splash at the International Conference on Computer Vision later this month."}],"uid":"34773","created_gmt":"2019-10-09 19:54:48","changed_gmt":"2019-10-10 12:11:43","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-10T00:00:00-04:00","iso_date":"2019-10-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"627424":{"id":"627424","type":"image","title":"Seoul, South Korea","body":null,"created":"1570650742","gmt_created":"2019-10-09 19:52:22","changed":"1570650742","gmt_changed":"2019-10-09 19:52:22","alt":"","file":{"fid":"238886","name":"sunyu-kim-HjsWTyyVDgg-unsplash.jpg","image_path":"\/sites\/default\/files\/images\/sunyu-kim-HjsWTyyVDgg-unsplash.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/sunyu-kim-HjsWTyyVDgg-unsplash.jpg","mime":"image\/jpeg","size":317658,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/sunyu-kim-HjsWTyyVDgg-unsplash.jpg?itok=00vn_fSV"}}},"media_ids":["627424"],"groups":[{"id":"576481","name":"ML@GT"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["allie.mcfadden@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"627023":{"#nid":"627023","#data":{"type":"news","title":"New $1.2 Million NSF Grant Aims to Improve Treatment for PTSD Patients","body":[{"value":"\u003Cp\u003EPost-traumatic stress disorder (PTSD), particularly among veterans returning from combat zones or other troubling situations, is a devastating mental condition with tremendous individual and societal costs. About 12 percent of Gulf War veterans and 15 percent of Vietnam veterans suffer from PTSD according to a 2019 article in \u003Cem\u003EU.S. News and World Report\u003C\/em\u003E. While recovery is possible, it requires intensive therapeutic engagement that less than 50 percent of affected veterans actually seek out.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.nsf.gov\/awardsearch\/showAward?AWD_ID=1915504\u0026amp;HistoricalAwards=false\u0022 target=\u0022_blank\u0022\u003EA new four-year, $1.2 million grant\u003C\/a\u003E from the \u003Ca href=\u0022http:\/\/nsf.gov\u0022 target=\u0022_blank\u0022\u003ENational Science Foundation\u003C\/a\u003E to a team of researchers from Georgia Tech, Emory University, and the University of Rochester will help bridge this gap by funding the development of a computational assessment toolkit for PTSD patients and clinicians, called PE Collective Sensing System (PECSS). 
PECSS, which will sit atop the PE Coach App developed by the Veterans Health Administration and the Department of Defense, will aim to improve current treatment practices and increase the number of veterans who seek treatment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;PECSS will allow clinicians to use automated predictions to deliver better therapeutic treatment and individualized feedback, and patients to better understand the progress they are making and how to improve their exposure exercises,\u0026rdquo; said \u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E, a Senior Research Scientist in \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech\u0026rsquo;s School of Interactive Computing\u003C\/a\u003E and the principal investigator on the project.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Ca href=\u0022https:\/\/podcasts.apple.com\/us\/podcast\/is-technology-game-changer-for-care-ptsd-patients-rosa\/id1435564422?i=1000451292353\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003E[THE INTERACTION HOUR PODCAST: IS TECHNOLOGY A GAME CHANGER FOR CARE OF PTSD PATIENTS?, FEATURING DR. ROSA ARRIAGA]\u003C\/strong\u003E\u003C\/a\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ECurrently, the most common and empirically-supported treatment for PTSD is \u0026ldquo;prolonged exposure\u0026rdquo; (PE) therapy. The treatment consists of imaginal exposure, where patients imagine themselves and narrate their traumatic event, and in-vivo exposure to real-world stimuli in safe but challenging environments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere are, however, challenges in data collection and extraction, which is often subjective and narrow. 
This project will address those challenges by developing a novel, user-tailored sensing system that can record and transfer information from exercises, continuously monitoring patients and clinicians.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Clinicians are in urgent need of methods, tools, and data to efficiently track, assess, and respond to mental health needs throughout the treatment process,\u0026rdquo; Arriaga said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project will involve insights from experts in multiple fields \u0026ndash; ubiquitous computing, human-computer interaction, applied machine learning, psychology, and more. When complete, the system will be deployed at the \u003Ca href=\u0022https:\/\/www.emoryhealthcare.org\/centers-programs\/veterans-program\/index.html\u0022 target=\u0022_blank\u0022\u003EEmory Healthcare Veterans Program\u003C\/a\u003E, a nationally-renowned initiative that treats members of the military suffering from PTSD.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The Trauma and Anxiety Recovery Program that includes the Emory Veterans Program has been on the cutting edge in using technology to advance the care of people suffering with anxiety since it was founded by Dr. \u003Cstrong\u003EBarbara Rothbaum\u003C\/strong\u003E over 25 years ago,\u0026rdquo; said \u003Cstrong\u003ESheila Rauch\u003C\/strong\u003E, an associate professor in Emory\u0026rsquo;s Department of Psychiatry and Behavioral Sciences and a co-principal investigator on the project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As a team of international experts in PTSD treatment, we integrate technology to speed response to treatment and help patients to visualize the changes as they respond to care. 
Our aim is to use this real-time data to fine-tune practice for the individual patient and learn across patients how we can improve care.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Mental health clinicians and their patients are in urgent need of 21\u003Csup\u003Est\u003C\/sup\u003E-century methods, tools, and objective data to optimize therapy,\u0026rdquo; added Emory Assistant Professor \u003Cstrong\u003EAndrew Sherrill\u003C\/strong\u003E, another co-principal investigator. \u0026ldquo;This partnership will bring together innovators in HCI and evidence-based psychotherapy to transform mental health care for PTSD patients.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis grant is provided under the \u003Ca href=\u0022https:\/\/www.nsf.gov\/funding\/pgm_summ.jsp?pims_id=504739\u0022 target=\u0022_blank\u0022\u003ENSF Smart and Connected Health Funding Program\u003C\/a\u003E in its \u003Ca href=\u0022https:\/\/www.nsf.gov\/div\/index.jsp?div=IIS\u0022 target=\u0022_blank\u0022\u003EDivision of Information and Intelligent Systems\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The grant -- which includes Georgia Tech, Emory, and the University of Rochester -- will fund the development of a computational assessment toolkit for patients and clinicians."}],"uid":"33939","created_gmt":"2019-10-02 16:44:21","changed_gmt":"2019-10-04 12:56:17","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-02T00:00:00-04:00","iso_date":"2019-10-02T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"627021":{"id":"627021","type":"image","title":"Veteran battling PTSD","body":null,"created":"1570033840","gmt_created":"2019-10-02 16:30:40","changed":"1570033840","gmt_changed":"2019-10-02 16:30:40","alt":"Veteran battling PTSD with head in 
hands","file":{"fid":"238743","name":"Battling_PTSD_(4949341330).jpg","image_path":"\/sites\/default\/files\/images\/Battling_PTSD_%284949341330%29.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Battling_PTSD_%284949341330%29.jpg","mime":"image\/jpeg","size":370770,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Battling_PTSD_%284949341330%29.jpg?itok=zsgS4tt0"}}},"media_ids":["627021"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/academics\/human-centered-computing-phd-program","title":"Human-Centered Computing at Georgia Tech"},{"url":"https:\/\/podcasts.apple.com\/us\/podcast\/is-technology-game-changer-for-care-ptsd-patients-rosa\/id1435564422?i=1000451292353","title":"The Interaction Hour: Is Technology a Game Changer for Care of PTSD Patients?"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181216","name":"cc-research"},{"id":"181214","name":"ic-hcc"},{"id":"182582","name":"ic-ai-ml"},{"id":"181949","name":"PTSD"},{"id":"55581","name":"military veterans"},{"id":"10681","name":"veterans"},{"id":"11178","name":"Rosa Arriaga"},{"id":"166848","name":"School of Interactive Computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71891","name":"Health and Medicine"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"626926":{"#nid":"626926","#data":{"type":"news","title":"Cleaning Up the Community: Shagun Jhaver Explores Impact of Content Moderation Practices on Social Media","body":[{"value":"\u003Cp\u003EOnline communities like Reddit or Twitter act like town halls, where opinions are shared and everyone, in theory, has a voice. Only, it doesn\u0026rsquo;t always work like that. What was once optimistically viewed as a solution to public discourse, offering promises of open and logical discussions where anyone with a keyboard and an internet connection could speak their piece, has instead become a bit of a Wild West. Message boards have degraded into sources of harassment, misinformation, radicalization, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENow, the largely techno-utopian view has been adjusted, and moderation of content has become the norm. The question is: how can you moderate, while also maintaining the promise of free speech? Also, how can you avoid discouraging posters whose content was moderated or removed while encouraging them to remain a part of the public discourse?\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese are just a few of the questions being posed and pursued by \u003Cstrong\u003EShagun Jhaver\u003C\/strong\u003E, a Ph.D. 
student in \u003Ca href=\u0022http:\/\/gatech.edu\u0022 target=\u0022_blank\u0022\u003EGeorgia Tech\u003C\/a\u003E\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC), whose papers at the upcoming \u003Ca href=\u0022http:\/\/cscw.acm.org\/2019\/\u0022 target=\u0022_blank\u0022\u003EComputer-Supported Cooperative Work and Social Computing\u003C\/a\u003E (CSCW) conference provide some context and, perhaps, solutions.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003EFairness, accountability, and transparency\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EJhaver is a computer scientist at heart. He earned his bachelor\u0026rsquo;s degree in India in electrical engineering and then studied computer science for his master\u0026rsquo;s at the University of Texas at Dallas. Like most in IC, though, his primary focus is on humans.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One of the main attractions to our School was that, although it is a computer science school, I am able to do interviews and surveys with people,\u0026rdquo; Jhaver explained. \u0026ldquo;What good are technological developments if they don\u0026rsquo;t work for humans, if they don\u0026rsquo;t improve society? In order to understand the interactions between technology and society, I wanted to develop a mixed-methods background, and the resources and faculty here are perfect for that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of his first projects as a graduate student was investigating communication on social media around the Black Lives Matter movement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I wanted to understand the emergent collective participation around this movement and what people were feeling on the ground in the moment,\u0026rdquo; he said. 
\u0026ldquo;That\u0026rsquo;s how I entered this area of social computing.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESocial computing is an area of computer science that focuses on the intersection between social behavior and computational systems. Integral to Jhaver\u0026rsquo;s study was how social media and the data gathered within those systems reflected what was happening within society as a whole.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThere may be no more adequate reflection of this phenomenon than on Reddit and Twitter, two communities his research has looked at. At CSCW, he\u0026rsquo;ll present a handful of studies that have examined the topic of content moderation. One of the papers, titled \u003Ca href=\u0022https:\/\/medium.com\/acm-cscw\/does-transparency-in-moderation-really-matter-b86bab9b4810\u0022 target=\u0022_blank\u0022\u003E\u003Cem\u003EDoes Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit\u003C\/em\u003E\u003C\/a\u003E, earned a best paper award. Another, titled \u003Ca href=\u0022https:\/\/medium.com\/acm-cscw\/did-you-suspect-the-post-would-be-removed-1dd1839277cb\u0022 target=\u0022_blank\u0022\u003E\u003Cem\u003EDid You Suspect the Post Would be Removed?: Understanding User Reactions to Content Removals on Reddit\u003C\/em\u003E\u003C\/a\u003E, earned an honorable mention.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHow, he wonders, do you develop good moderation practices that enforce community rules while also maintaining the free expression of ideas? And, what practices improve how posters feel about their moderated content and encourage them to continue participating in these forums?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Content moderation is more nuanced than just editing and removing content,\u0026rdquo; Jhaver said. 
\u0026ldquo;It\u0026rsquo;s about the overall experience of the user and the community and how they interact.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHis research came to a few conclusions:\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne, fairness matters; two, accountability is important; three, the platforms should be transparent in their decisions. From the perspective of end users, that means that rules are clear and easy to follow, and when the post is removed they are notified and given a clear explanation of why. If they appeal, they are given an appropriate response.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut there are multiple stakeholders involved in the exchange, and who determines what is fair?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These Reddit moderators are volunteers,\u0026rdquo; Jhaver said. \u0026ldquo;Is it fair for us to expect them to take on these increased responsibilities for providing explanations?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn other words, these issues are much more nuanced than they would seem to many casual participants. \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E, a professor in IC and Jhaver\u0026rsquo;s co-advisor (with IC adjunct faculty \u003Cstrong\u003EEric Gilbert\u003C\/strong\u003E), said she can\u0026rsquo;t think of other research that has examined this aspect of social communities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I don\u0026rsquo;t think it has been studied \u0026ndash; okay, your content was just removed, so how do you feel about that?\u0026rdquo; she said. \u0026ldquo;Taking that other side of it is unique.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003EGiving everyone a voice\u003C\/h3\u003E\r\n\r\n\u003Cp\u003ESo, why do these explanations even matter? Why not just remove bad content and move on?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;But free speech is interesting,\u0026rdquo; Jhaver said. 
\u0026ldquo;There\u0026rsquo;s this dichotomy where if you are free to harass certain people over their race, gender, or other aspects of identity, then you are preventing them from having the voice to speak their truth. So, you are infringing on their freedom of speech. That\u0026rsquo;s why there\u0026rsquo;s this need.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhatever the case, these issues are not going away. Methods of communication will continue to change over time, particularly as technology continues to advance. But, Jhaver said, these conversations aren\u0026rsquo;t anything new either.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These are age-old problems,\u0026rdquo; he said. \u0026ldquo;Harassment, free speech, suppression of free speech. These topics have always been discussed, but the internet has changed the way we see them and changed how they manifest themselves.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I want my research to help minorities and other vulnerable groups have a greater voice in society,\u0026rdquo; Jhaver said. \u0026ldquo;I want to contribute to the design of more equitable, inclusive, and participatory technologies.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Online communities, once thought to be a place where everyone had a voice, have instead become a Wild West. 
Understanding the impact of content moderation on user behavior could improve the free flow of ideas."}],"uid":"33939","created_gmt":"2019-09-30 19:34:51","changed_gmt":"2019-09-30 19:34:51","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-09-30T00:00:00-04:00","iso_date":"2019-09-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"626923":{"id":"626923","type":"image","title":"Shagun Jhaver","body":null,"created":"1569871871","gmt_created":"2019-09-30 19:31:11","changed":"1569871871","gmt_changed":"2019-09-30 19:31:11","alt":"Shagun Jhaver","file":{"fid":"238699","name":"Shagun_Jhaver.JPG","image_path":"\/sites\/default\/files\/images\/Shagun_Jhaver.JPG","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Shagun_Jhaver.JPG","mime":"image\/jpeg","size":168715,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Shagun_Jhaver.JPG?itok=LZnt5yPA"}}},"media_ids":["626923"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/content\/human-centered-computing-cognitive-science","title":"Human-Centered Computing at Georgia Tech"},{"url":"https:\/\/www.ic.gatech.edu\/content\/social-computing-computational-journalism","title":"Social Computing at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182508","name":"cc-research; ic-hcc; ic-social-computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"626874":{"#nid":"626874","#data":{"type":"news","title":"Georgia Tech Helps Build a Stronger and More Diverse Computing Future through Tapia 2019","body":[{"value":"\u003Cp\u003EGeorgia Tech\u0026rsquo;s College of Computing participates annually in the \u003Ca href=\u0022http:\/\/tapiaconference.org\/\u0022\u003EACM Richard Tapia Celebration of Diversity in Computing\u003C\/a\u003E, and this year was no exception.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith 34 undergraduate and master\u0026rsquo;s students, and five \u003Ca href=\u0022http:\/\/www.omscs.gatech.edu\/\u0022\u003Eonline master\u0026rsquo;s in computer science (OMSCS)\u003C\/a\u003E students attending, the Yellow Jackets took San Diego by storm.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference brought together more than 1,800 faculty, students, industry professionals, and researchers from all backgrounds and ethnicities to discuss and celebrate diversity in computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As a Latina woman, I am very excited to be here and get to know other Latinas who are interested in computing. It\u0026rsquo;s fun getting to share my passion for diversity with new friends and recruiters,\u0026rdquo; said \u003Cstrong\u003EValentina Brambila Grando\u003C\/strong\u003E, a second-year computer science major.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHeld September 18-21, the conference focused conversations around its theme, \u0026ldquo;Diversity: Building A Stronger Future.\u0026rdquo; Session topics included supporting students with disabilities, broadening participation in computing, and addressing bias and unfairness in artificial intelligence (AI). 
The three-day conference also gave attendees the opportunity to network and interview at prestigious companies like Google, Cisco, and JP Morgan.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECharity \u0026ldquo;Alistair\u0026rdquo; Tarver,\u003C\/strong\u003E who is the first student to graduate from the \u003Ca href=\u0022http:\/\/constellations.gatech.edu\/\u0022\u003EConstellations Center for Equity in Computing\u003C\/a\u003E high school program and attend Georgia Tech, was also in attendance.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;ve never really gotten to travel before, so it\u0026rsquo;s cool to come to a new city and hear from people I would have never otherwise had the chance to hear from. It\u0026rsquo;s been interesting to get different perspectives and find common ground with other students from around the world,\u0026rdquo; said Tarver.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003EA Leader in Diversity\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech leadership, alumni, and current students were involved throughout the conference.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDean of Computing and John P. Imlay Jr. Chair \u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E helped kick off the conference as a guest for the opening fireside chat, where he discussed the evolving methods of disseminating misinformation through deep learning and the importance of embedding ethics within the computing curriculum from the start. Isbell also discussed the importance of diversifying doctoral programs during a panel discussion held later in the week.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOther highlights included:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EUniversity of Washington Richard E. Ladner Professor and Georgia Tech alumna \u003Cstrong\u003EJennifer Mankoff\u003C\/strong\u003E (Ph.D. CS 2001) gave one of the opening plenaries and shared her experience as a researcher and academic while also living with a disability and chronic illness. 
Her advice to students: \u0026ldquo;Just follow your passion.\u0026rdquo;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EAyanna Howard\u003C\/strong\u003E, \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) chair, co-hosted a caf\u0026eacute;-style panel where she facilitated meaningful discussion about the \u003Ca href=\u0022https:\/\/cra.org\/cra-wp\/\u0022\u003EComputing Research Association Widening Participation (CRA-WP)\u003C\/a\u003E.\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003ELara Martin\u003C\/strong\u003E, an HCC Ph.D. candidate advised by School of IC Associate Professor Mark Riedl, took home the top prize for best presentation at the doctoral consortium.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EAnother big part of the College\u0026rsquo;s participation at Tapia is hosting a booth. As a platinum sponsor of the conference, the \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/\u0022\u003ECollege of Computing\u003C\/a\u003E hosted a booth with information on the college\u0026rsquo;s programs for prospective graduate students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We love coming to Tapia every year because we want to recruit a diverse group of students for our programs. 
It\u0026rsquo;s exciting to see them celebrate who they are and talk with them about what their experience at Georgia Tech could be like,\u0026rdquo; said \u003Cstrong\u003EJennifer Whitlow\u003C\/strong\u003E, director of enrollment and alumni engagement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EFor more photos, view the \u003Ca href=\u0022http:\/\/bit.ly\/2oDp4CU\u0022\u003EFlickr album.\u003C\/a\u003E\u003C\/em\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The College of Computing participated in the ACM Richard Tapia Celebration of Diversity in Computing in San Diego, Calif."}],"uid":"34773","created_gmt":"2019-09-30 15:30:19","changed_gmt":"2019-09-30 19:05:36","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-09-30T00:00:00-04:00","iso_date":"2019-09-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"626887":{"id":"626887","type":"image","title":"34 computer science undergraduate students and five OMSCS students traveled to San Diego, Calif. 
to celebrate diversity in computing at the 2019 ACM Richard Tapia Celebration of Diversity in Computing conference.","body":null,"created":"1569860201","gmt_created":"2019-09-30 16:16:41","changed":"1569860201","gmt_changed":"2019-09-30 16:16:41","alt":"Group photo of 2019 Tapia students","file":{"fid":"238681","name":"IMG_4628.jpg","image_path":"\/sites\/default\/files\/images\/IMG_4628.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_4628.jpg","mime":"image\/jpeg","size":1243003,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_4628.jpg?itok=jRNMpMEI"}},"626891":{"id":"626891","type":"image","title":"OMSCS students James Gan, Andy Singh, Manoel Mendez, and Taylor Isom were excited to meet other students in their program and discuss their takeaways from the conference. ","body":null,"created":"1569860866","gmt_created":"2019-09-30 16:27:46","changed":"1569860866","gmt_changed":"2019-09-30 16:27:46","alt":"OMSCS Students","file":{"fid":"238685","name":"IMG_4614.jpg","image_path":"\/sites\/default\/files\/images\/IMG_4614.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_4614.jpg","mime":"image\/jpeg","size":1149470,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_4614.jpg?itok=p8sQayKS"}},"626888":{"id":"626888","type":"image","title":"Dean of Computing and John P. Imlay Jr. 
Chair Charles Isbell participated in the opening fireside chat and moderated a panel on diversifying doctoral programs ","body":null,"created":"1569860346","gmt_created":"2019-09-30 16:19:06","changed":"1569860346","gmt_changed":"2019-09-30 16:19:06","alt":"","file":{"fid":"238682","name":"IMG_4519.jpg","image_path":"\/sites\/default\/files\/images\/IMG_4519.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_4519.jpg","mime":"image\/jpeg","size":971066,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_4519.jpg?itok=t8I-80ZR"}},"626889":{"id":"626889","type":"image","title":"Ph.D. student Lara Martin celebrates her win for best presentation at the doctoral consortium with School of Interactive Computing chair Ayanna Howard and Dean of Computing and John P. Imlay Jr. Chair Charles Isbell.","body":null,"created":"1569860420","gmt_created":"2019-09-30 16:20:20","changed":"1569860420","gmt_changed":"2019-09-30 16:20:20","alt":"Lara Martin wins an award","file":{"fid":"238683","name":"IMG_4641.jpg","image_path":"\/sites\/default\/files\/images\/IMG_4641.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_4641.jpg","mime":"image\/jpeg","size":1347455,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_4641.jpg?itok=15SgUejN"}},"626890":{"id":"626890","type":"image","title":"Assistant Dean of Outreach, Enrollment, and Community Cedric Stallworth and Troy Peace, Educational Outreach Manager hang out with College of Computing students before attending the closing banquet. 
","body":null,"created":"1569860578","gmt_created":"2019-09-30 16:22:58","changed":"1569860578","gmt_changed":"2019-09-30 16:22:58","alt":"Group of computing students before closing banquet.","file":{"fid":"238684","name":"IMG_4618.jpg","image_path":"\/sites\/default\/files\/images\/IMG_4618.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_4618.jpg","mime":"image\/jpeg","size":1193318,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_4618.jpg?itok=vuXEdhgE"}},"626894":{"id":"626894","type":"image","title":"School of Interactive Computing chair Ayanna Howard led a caf\u00e9-style panel about different organizations working to further diversity in tech.","body":null,"created":"1569861275","gmt_created":"2019-09-30 16:34:35","changed":"1569861275","gmt_changed":"2019-09-30 16:34:35","alt":"Ayanna Howard leads a discussion","file":{"fid":"238687","name":"IMG_4362.jpg","image_path":"\/sites\/default\/files\/images\/IMG_4362.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_4362.jpg","mime":"image\/jpeg","size":980613,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_4362.jpg?itok=-bavknM8"}},"626893":{"id":"626893","type":"image","title":"Former Constellations student and first-year Georgia Tech student, Charity \u0022Alistair\u0022 Tarver caught up with Constellations communications officer Allie McFadden at the College of Computing booth during the career fair.","body":null,"created":"1569861009","gmt_created":"2019-09-30 16:30:09","changed":"1569861009","gmt_changed":"2019-09-30 16:30:09","alt":"Charity \u0022Alistair\u0022 Tarver and Allie McFadden at the career fair 
","file":{"fid":"238686","name":"IMG_5646.jpg","image_path":"\/sites\/default\/files\/images\/IMG_5646.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_5646.jpg","mime":"image\/jpeg","size":1854953,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_5646.jpg?itok=tIx1r0o2"}}},"media_ids":["626887","626891","626888","626889","626890","626894","626893"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"606703","name":"Constellations Center"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"}],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"625706":{"#nid":"625706","#data":{"type":"news","title":"Startup Connects Job Seekers with Autism to New Opportunities","body":[{"value":"\u003Cp\u003EApplying for jobs can be one of the biggest challenges for people with autism. Disclosing a disorder to an employer is difficult, but sometimes even the interview is daunting. But a new app called \u003Ca href=\u0022https:\/\/www.axisability.net\/mentra\u0022\u003EMentra\u003C\/a\u003E could make the process a lot easier.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMentra pairs autistic people with students not on the spectrum to help them navigate the job hunt. 
Once the algorithm matches a team, they can meet and develop a mentorship together.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The goal of Mentra is to help autistic individuals find meaningful employment, and for mentors to cultivate a friendship with someone who might have a different perspective,\u0026rdquo; said fourth-year computational media student \u003Ca href=\u0022https:\/\/www.axisability.net\/mentra\u0022\u003E\u003Cstrong\u003EJhillika Kumar\u003C\/strong\u003E\u003C\/a\u003E, the founder of \u003Ca href=\u0022https:\/\/www.axisability.net\/\u0022\u003EAxisAbility\u003C\/a\u003E, the startup behind the app.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFinding Technology for Autism\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor Kumar, the work has a personal connection: Her older brother, Vikram, has nonverbal autism. She noticed that when Vikram got an iPad as a kid, everything changed for him.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was so intuitive and allowed him to meaningfully and independently interact with the world,\u0026rdquo; she said. \u0026ldquo;I realized this is a form of using technology to empower people in a different way than we think.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVikram\u0026rsquo;s use of the iPad got Kumar interested in user interaction design. 
She was attracted to Georgia Tech because of its unique computational media degree that combines design and computer science.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;One thing this degree really taught me is that we need to work side-by-side with our users so we\u0026rsquo;re not just designing what\u0026rsquo;s in our heads, but we\u0026rsquo;re actually getting feedback at every point,\u0026rdquo; Kumar said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EResearching Autism and Technology\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVikram\u0026rsquo;s experience with the iPad was direct feedback, but there isn\u0026rsquo;t any quantifiable research right now on why using technology to communicate works so well for many people with autism. Kumar\u0026rsquo;s desire to prove that this treatment was effective led her to work with Regents\u0026rsquo; Professor \u003Ca href=\u0022http:\/\/ubicomp.cc.gatech.edu\/gregory-d-abowd\/\u0022\u003E\u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E\u003C\/a\u003E in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAbowd \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/content\/single-observation-autism-research-blooms-georgia-tech\u0022\u003Espearheaded\u003C\/a\u003E Tech\u0026rsquo;s presence in the autism and technology research field after he noticed signs of autism in his son in a family home video, before his son had even been diagnosed. 
In the years since, this initial observation has led a network of researchers and labs to study all aspects of autism and technology, such as \u003Ca href=\u0022https:\/\/ic.gatech.edu\/developing-new-microscope-autism-research-georgia-techs-school-interactive-computing\u0022\u003Emapping behaviors\u003C\/a\u003E to diagnose the disorder.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E[RELATED: \u003Ca href=\u0022https:\/\/ic.gatech.edu\/developing-new-microscope-autism-research-georgia-techs-school-interactive-computing\u0022\u003EDeveloping a \u0026#39;New Microscope\u0026#39; for Autism Research in Georgia Tech\u0026#39;s School of Interactive Computing\u003C\/a\u003E]\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen Kumar met Abowd, though, he told her one of the main problems people with autism face isn\u0026rsquo;t a lack of technology, but that there aren\u0026rsquo;t enough people willing to help. Kumar decided her role would be to connect the people to the technology she saw change her brother\u0026rsquo;s life.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELaunching AxisAbility\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe launched the startup \u003Ca href=\u0022https:\/\/www.axisability.net\/\u0022\u003EAxisAbility\u003C\/a\u003E in 2019 to empower people with autism to enter the workforce. The team includes three other Tech students: \u0026nbsp;\u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/connerreinhardt\/\u0022\u003E\u003Cstrong\u003EConner Reinhardt\u003C\/strong\u003E\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/www.rishma.co\/\u0022\u003E\u003Cstrong\u003ERishma Mendhekar\u003C\/strong\u003E\u003C\/a\u003E, and \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/fangxiao-yu\/\u0022\u003E\u003Cstrong\u003ERicky Yu\u003C\/strong\u003E\u003C\/a\u003E. 
They are launching the Mentra app in partnership with \u003Ca href=\u0022https:\/\/bitsofgood.org\/about-us\u0022\u003EGT Bits of Good\u003C\/a\u003E as their first initiative.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKumar has been awarded the \u003Ca href=\u0022https:\/\/anitab.org\/awards-grants\/abie-awards\/student-of-vision-abie-award\/\u0022\u003EStudent of Vision Abie Award\u003C\/a\u003E from the women-in-computing conference \u003Ca href=\u0022http:\/\/ghc.anitab.org\/\u0022\u003EGrace Hopper\u003C\/a\u003E for her work. She will speak at the upcoming conference on Thursday, October 3, from 2:15 to 3:15 p.m., in room OCCC W300.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Mentra pairs autistic people with students not on the spectrum to help them navigate the job hunt. "}],"uid":"34541","created_gmt":"2019-09-05 18:26:09","changed_gmt":"2019-09-05 18:27:08","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-09-05T00:00:00-04:00","iso_date":"2019-09-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"625707":{"id":"625707","type":"image","title":"Jhillika Kumar","body":null,"created":"1567708010","gmt_created":"2019-09-05 18:26:50","changed":"1567708010","gmt_changed":"2019-09-05 18:26:50","alt":"Jhillika Kumar","file":{"fid":"238212","name":"Headshot Best.png","image_path":"\/sites\/default\/files\/images\/Headshot%20Best.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Headshot%20Best.png","mime":"image\/png","size":118312,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Headshot%20Best.png?itok=8nEvUSXy"}}},"media_ids":["625707"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:tess.malone@cc.gatech.edu\u0022\u003Etess.malone@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["tess.malone@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"624901":{"#nid":"624901","#data":{"type":"news","title":"Researchers Use Social Media to Help Measure Outcomes of Psychiatric Medication","body":[{"value":"\u003Cp\u003ESocial media posts are becoming a vital tool for assessing the effects of psychiatric medication, according to a new study from researchers in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC). The approach offers clinicians a more effective method to measure mental health outcomes in a notoriously imprecise space.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn treating mental illness, clinicians are often forced into a trial-and-error approach to prescribing medication to patients. Each patient may react differently \u0026ndash; oftentimes with negative outcomes \u0026ndash; to drugs that have been matched with conditions based on incomplete and potentially biased data from clinical trials.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In most non-mental health treatment where particular symptoms like a fever or chronic pain might indicate a specific physical condition, there exists a more definitive matching approach to prescription,\u0026rdquo; said \u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E, an IC Ph.D. student who led the study. 
\u0026ldquo;In psychiatric care, that matching approach is unknown.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPatients taking the wrong medication could experience increased depression or anxiety, suicidal ideation, or other symptoms like fluctuations in sleep and weight. In many cases, they are forced to return to their clinician for a change in medication or, in worse cases, may lose trust in the medication entirely and stop using it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Considering that five of the top 50 drugs sold in the United States are psychiatric medications, it\u0026rsquo;s extremely important to understand how they actually work on individuals,\u0026rdquo; Saha said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the past, clinical trials have taken a disease-centered approach that attempts to match specific medications with psychiatric symptoms, neglecting the psychoactive effects of the drug. Trials are conducted for smaller cohorts over shorter periods of time, eliminate some individuals who experience more extreme symptoms, and are often biased, being conducted by the drug companies themselves.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAdopting a \u0026ldquo;patient-centered\u0026rdquo; model that considers individual outcomes for patients using a specific medication, this study leveraged longitudinal and large-scale social media data to achieve a form of digital-based matching of patients to medications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers collected a list of medications approved by the Food and Drug Administration, then collected Tweets that mentioned these medications between 2015 and 2016. From that, they collected over 600,000 Tweets that identified users of a specific medication. 
Interestingly enough, their data matched the top four prescription psychiatric medications in that period: Sertraline (Zoloft), Escitalopram (Lexapro), Fluoxetine (Prozac), and Duloxetine (Cymbalta).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing a control group of random Twitter users that did not take the medication and building on prior work that showed the ability for language found in social posts to predict mental health conditions, researchers could match specific medications with their outcomes, positive or negative, after use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe findings indicated that Selective Serotonin Reuptake Inhibitors (Sertraline, Escitalopram, Fluoxetine) \u0026ndash; three of the most popular prescription medications \u0026ndash; are actually associated with worsening symptoms. Tricyclic Antidepressants like Dosulepin, Imipramine, and Clomipramine, by comparison, were more associated with improving conditions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Clinically, our findings reveal signals of the most common effects of the psychiatric medications over a large population, with the potential for improved characterization of their occurrence,\u0026rdquo; Saha writes in the paper. \u0026ldquo;Technologically, we show the potential of novel technologies in digital therapeutics.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis research, Saha said, exists as a proof of concept to show levels of a specific condition \u0026ndash; before and after medication use \u0026ndash; using digital data. 
He stressed it is not a replacement for clinical care, only a way to help augment treatment using additional available data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work was presented at the \u003Ca href=\u0022https:\/\/www.icwsm.org\/2019\/\u0022\u003E13\u003C\/a\u003E\u003Ca href=\u0022https:\/\/www.icwsm.org\/2019\/\u0022 target=\u0022_blank\u0022\u003E\u003Csup\u003Eth\u003C\/sup\u003E\u003C\/a\u003E International AAAI Conference on Web and Social Media\u003Ca href=\u0022https:\/\/www.icwsm.org\/2019\/\u0022 target=\u0022_blank\u0022\u003E in a paper titled \u003Cem\u003EA Social Media Study on the Effects of Psychiatric Medication Use\u003C\/em\u003E\u003C\/a\u003E (Koustuv Saha, \u003Cstrong\u003EBenjamin Sugar\u003C\/strong\u003E, \u003Cstrong\u003EJohn Torous\u003C\/strong\u003E, \u003Cstrong\u003EBruno Abrahao\u003C\/strong\u003E, \u003Cstrong\u003EEmre Kiciman\u003C\/strong\u003E, \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E). It was awarded Outstanding Study Design Paper at the conference. 
It is funded in part by a grant from the \u003Ca href=\u0022http:\/\/www.nih.gov\u0022 target=\u0022_blank\u0022\u003ENational Institutes of Health\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"This research exists as a proof of concept to show levels of a specific condition \u2013 before and after medication use \u2013 using digital data."}],"uid":"33939","created_gmt":"2019-08-21 16:58:57","changed_gmt":"2019-08-21 16:58:57","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-21T00:00:00-04:00","iso_date":"2019-08-21T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624519":{"id":"624519","type":"image","title":"Social Media Logos","body":null,"created":"1565805908","gmt_created":"2019-08-14 18:05:08","changed":"1565805908","gmt_changed":"2019-08-14 18:05:08","alt":"A keyboard featuring different social media logos","file":{"fid":"237806","name":"Social Media logos.jpg","image_path":"\/sites\/default\/files\/images\/Social%20Media%20logos.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Social%20Media%20logos.jpg","mime":"image\/jpeg","size":215846,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Social%20Media%20logos.jpg?itok=G7qWkSGs"}}},"media_ids":["624519"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/content\/social-computing-computational-journalism","title":"Social Computing Research at Georgia Tech"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182015","name":"cc-research; ic-ai-ml; ic-hcc; ic-social-computing"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"624538":{"#nid":"624538","#data":{"type":"news","title":"Researchers Use Social Media to Help Measure Outcomes of Psychiatric Medication","body":[{"value":"\u003Cp\u003ESocial media posts are becoming a vital tool for assessing the effects of psychiatric medication, according to a new study from researchers in Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022 target=\u0022_blank\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC). The approach offers clinicians a more effective method to measure mental health outcomes in a notoriously imprecise space.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn treating mental illness, clinicians are often forced into a trial-and-error approach to prescribing medication to patients. Each patient may react differently \u0026ndash; oftentimes with negative outcomes \u0026ndash; to drugs that have been matched with conditions based on incomplete and potentially biased data from clinical trials.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In most non-mental health treatment where particular symptoms like a fever or chronic pain might indicate a specific physical condition, there exists a more definitive matching approach to prescription,\u0026rdquo; said \u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E, an IC Ph.D. student who led the study. 
\u0026ldquo;In psychiatric care, that matching approach is unknown.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPatients taking the wrong medication could experience increased depression or anxiety, suicidal ideation, or other symptoms like fluctuations in sleep and weight. In many cases, they are forced to return to their clinician for a change in medication or, in worse cases, may lose trust in the medication entirely and stop using it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Considering that five of the top 50 drugs sold in the United States are psychiatric medications, it\u0026rsquo;s extremely important to understand how they actually work on individuals,\u0026rdquo; Saha said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the past, clinical trials have taken a disease-centered approach that attempts to match specific medications with psychiatric symptoms, neglecting the psychoactive effects of the drug. Trials are conducted for smaller cohorts over shorter periods of time, eliminate some individuals who experience more extreme symptoms, and are often biased, being conducted by the drug companies themselves.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAdopting a \u0026ldquo;patient-centered\u0026rdquo; model that considers individual outcomes for patients using a specific medication, this study leveraged longitudinal and large-scale social media data to achieve a form of digital-based matching of patients to medications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers collected a list of medications approved by the Food and Drug Administration, then collected Tweets that mentioned these medications between 2015 and 2016. From that, they collected over 600,000 Tweets that identified users of a specific medication. 
Interestingly enough, their data matched the top four prescription psychiatric medications in that period: Sertraline (Zoloft), Escitalopram (Lexapro), Fluoxetine (Prozac), and Duloxetine (Cymbalta).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing a control group of random Twitter users that did not take the medication and building on prior work that showed the ability for language found in social posts to predict mental health conditions, researchers could match specific medications with their outcomes, positive or negative, after use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe findings indicated that Selective Serotonin Reuptake Inhibitors (Sertraline, Escitalopram, Fluoxetine) \u0026ndash; three of the most popular prescription medications \u0026ndash; are actually associated with worsening symptoms. Tricyclic Antidepressants like Dosulepin, Imipramine, and Clomipramine, by comparison, were more associated with improving conditions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Clinically, our findings reveal signals of the most common effects of the psychiatric medications over a large population, with the potential for improved characterization of their occurrence,\u0026rdquo; Saha writes in the paper. \u0026ldquo;Technologically, we show the potential of novel technologies in digital therapeutics.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis research, Saha said, exists as a proof of concept to show levels of a specific condition \u0026ndash; before and after medication use \u0026ndash; using digital data. 
He stressed it is not a replacement for clinical care, only a way to help augment treatment using additional available data.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work was presented at the \u003Ca href=\u0022https:\/\/www.icwsm.org\/2019\/\u0022 target=\u0022_blank\u0022\u003E13\u003Csup\u003Eth\u003C\/sup\u003E International AAAI Conference on Web and Social Media in a paper titled \u003Cem\u003EA Social Media Study on the Effects of Psychiatric Medication Use\u003C\/em\u003E\u003C\/a\u003E (Koustuv Saha, \u003Cstrong\u003EBenjamin Sugar\u003C\/strong\u003E, \u003Cstrong\u003EJohn Torous\u003C\/strong\u003E, \u003Cstrong\u003EBruno Abrahao\u003C\/strong\u003E, \u003Cstrong\u003EEmre Kiciman\u003C\/strong\u003E, \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E). It was awarded Outstanding Study Design Paper at the conference. 
It is funded in part by a grant from the \u003Ca href=\u0022http:\/\/www.nih.gov\u0022 target=\u0022_blank\u0022\u003ENational Institutes of Health\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"This research exists as a proof of concept to show levels of a specific condition \u2013 before and after medication use \u2013 using digital data."}],"uid":"33939","created_gmt":"2019-08-14 19:16:13","changed_gmt":"2019-08-19 21:09:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-14T00:00:00-04:00","iso_date":"2019-08-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624519":{"id":"624519","type":"image","title":"Social Media Logos","body":null,"created":"1565805908","gmt_created":"2019-08-14 18:05:08","changed":"1565805908","gmt_changed":"2019-08-14 18:05:08","alt":"A keyboard featuring different social media logos","file":{"fid":"237806","name":"Social Media logos.jpg","image_path":"\/sites\/default\/files\/images\/Social%20Media%20logos.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Social%20Media%20logos.jpg","mime":"image\/jpeg","size":215846,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Social%20Media%20logos.jpg?itok=G7qWkSGs"}}},"media_ids":["624519"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/content\/social-computing-computational-journalism","title":"Social Computing Research at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"182015","name":"cc-research; ic-ai-ml; ic-hcc; ic-social-computing"}],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"624130":{"#nid":"624130","#data":{"type":"news","title":"\u0027MacGyver\u0027-like Robot Can Build Own Tools By Assessing Form, Function of Supplies","body":[{"value":"\u003Cp\u003EThanks to new technology that enables them to create simple tools, robots may be on the verge of their own version of the Stone Age.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe breakthrough comes from Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/www.rail.gatech.edu\/\u0022\u003ERobot Autonomy and Interactive Learning\u003C\/a\u003E (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous \u0026ndash; and potentially life-threatening \u0026ndash; environments.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe concept may sound familiar. It\u0026rsquo;s called \u0026ldquo;MacGyvering,\u0026rdquo; based off the name of a 1980s \u0026mdash; and recently rebooted \u0026mdash; television series. 
In the series, the title character is known for his unconventional problem-solving ability using differing resources available to him.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor years, computer scientists and others have been working to provide robots with similar capabilities. In their new robot-MacGyvering work, RAIL lab researchers led by Associate Professor \u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E used as a starting point a robotics technique previously developed by former Georgia Tech Professor \u003Cstrong\u003EMike Stilman\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn this latest work, a robot trained using the team\u0026rsquo;s novel approach is given a set of optional parts and told to make a specific tool. Much like its human counterparts, the robot first examines the shapes of each part and how one might be attached to another.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing machine learning, the robot is trained to match form to function \u0026ndash; which object shapes facilitate a particular outcome \u0026ndash; from numerous examples of everyday objects. For example, by learning that the concavity of bowls enables them to hold liquids, it makes use of this knowledge when constructing a spoon. Similarly, the robots were taught how to attach objects together from examples of materials that could be pierced or grasped.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the study, researchers successfully created hammers, spatulas, scoops, squeegees, and screwdrivers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The screwdriver was particularly interesting because the robot combined pliers and a coin,\u0026rdquo; said \u003Cstrong\u003ELakshmi Nair\u003C\/strong\u003E, a Ph.D. student in the \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and one of the researchers on the project. 
\u0026ldquo;It reasoned that the pliers were able to grasp something and said that the coin sort of matched the head of a screwdriver. Put them together, and it creates an effective tool.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECurrently, the robot is limited to reasoning about shape and attachment. It cannot yet effectively reason about material properties, a crucial step toward real-world use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/news\/623044\/robot-able-instantly-identify-household-materials-using-near-infrared-light\u0022\u003E\u003Cstrong\u003E[RELATED: Robot Able to Instantly Identify Household Materials Using Near-Infrared Light]\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;People reason that hammers are sturdy and strong, so you wouldn\u0026rsquo;t make a hammer out of foam blocks,\u0026rdquo; Nair said. \u0026ldquo;We want to reach that level of reasoning in our work, which is something we\u0026rsquo;re working on now.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe inspiration for the work comes from the popular story of Apollo 13, the doomed seventh crewed flight of the Apollo space program. After an oxygen tank in the ship\u0026rsquo;s service module exploded two days into the mission, crew members were forced to make makeshift modifications to the carbon dioxide removal system.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite a dangerously tight window of time and extremely high tension among all aboard and at mission control, the rescue proved successful. Nair and collaborators hope this research will prove foundational to future robotics technology that could reason faster and without the burden of stress.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;They were able to make this filter, but the solution took a long time to come up with,\u0026rdquo; Nair said. 
\u0026ldquo;We want to make robots that can assist humans in these kinds of scenarios to take the pressure off of them to come up with innovative solutions and potentially save their lives.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis work was presented at the 2019 Robotics: Science and Systems conference in a paper titled \u003Ca href=\u0022http:\/\/www.roboticsproceedings.org\/rss15\/p09.pdf\u0022\u003E\u003Cem\u003EAutonomous Tool Construction Using Part Shape and Attachment Prediction \u003C\/em\u003E\u003C\/a\u003E(Lakshmi Nair, \u003Cstrong\u003ENithin Shrivatsav\u003C\/strong\u003E, \u003Cstrong\u003EZackory Erickson\u003C\/strong\u003E, Sonia Chernova). It is supported in part by grants from the \u003Ca href=\u0022https:\/\/www.nsf.gov\/\u0022\u003ENational Science Foundation\u003C\/a\u003E and the \u003Ca href=\u0022https:\/\/www.onr.navy.mil\/\u0022\u003EOffice of Naval Research\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The breakthrough is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous and potentially life-threatening environments."}],"uid":"33939","created_gmt":"2019-08-07 21:04:09","changed_gmt":"2019-08-12 20:08:25","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-07T00:00:00-04:00","iso_date":"2019-08-07T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624128":{"id":"624128","type":"image","title":"Robot MacGyvering - Lakshmi Nair 1","body":null,"created":"1565210646","gmt_created":"2019-08-07 20:44:06","changed":"1565210646","gmt_changed":"2019-08-07 20:44:06","alt":"Lakshmi Nair stands next to a robotic arm with tool parts on a table","file":{"fid":"237702","name":"Macgyvering 
MAIN.jpg","image_path":"\/sites\/default\/files\/images\/Macgyvering%20MAIN.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Macgyvering%20MAIN.jpg","mime":"image\/jpeg","size":200873,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Macgyvering%20MAIN.jpg?itok=iU3IpDzd"}}},"media_ids":["624128"],"related_links":[{"url":"http:\/\/rail.gatech.edu","title":"Robot Autonomy and Interactive Learning Lab"},{"url":"https:\/\/www.ic.gatech.edu\/content\/robotics-computational-perception","title":"Robotics and Computational Perception Research at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181920","name":"cc-research; ic-ai-ml; ic-robotics"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"621721":{"#nid":"621721","#data":{"type":"news","title":"AIs and Humans Become \u2018Creative Equals\u2019 with New Design Tool","body":[{"value":"\u003Cp\u003EGeorgia Tech researchers have created software with a built-in AI agent that works alongside human designers in real time to create game levels. 
The software, dubbed MorAI Maker in a nod to Nintendo\u0026rsquo;s game Mario Maker, uses new machine learning techniques for game content generation that allow humans and an\u0026nbsp;AI agent\u0026nbsp;to work in a turn-based fashion on the same digital canvas. It is the first tool of its kind.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrough two studies with more than 100 game hobbyists and practicing game developers, the Georgia Tech team found that people varied significantly in how they used the AI.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We did not explicitly structure any roles into our machine learning models, but we still found that users naturally projected different roles onto the same AI and took corresponding roles,\u0026rdquo; said \u003Cstrong\u003EMatthew Guzdial\u003C\/strong\u003E, Ph.D. student in computer science and lead researcher.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to researchers, after refining the machine learning model, the AI agent was capable of picking up on users\u0026rsquo; preferences for level structures. A majority of game developers reported that they would use the AI co-designer in the software, which was developed in Unity.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers observed four major categories of roles that people assigned their virtual partners.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESome participants viewed the AI as a friend. 
One participant prompted the AI to begin the level design, forfeiting her own turn and stating, \u0026ldquo;Let\u0026rsquo;s see what my friend comes up with.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESome participants wanted an equal design partner (collaborator), others seemed to expect the AI to adhere to their specific design beliefs or instructions (student), and some designers followed the AI\u0026rsquo;s lead or expected to be evaluated on their design (manager).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Human designers in the study demonstrated a willingness to adapt their own design practices to the AI, sometimes as a means of attempting to determine how best to interact with it,\u0026rdquo; said Guzdial.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EConversely, every participant had at least one interaction where the AI adapted to the human designs. For some, this was the exception rather than the rule. \u0026ldquo;The [AI] agent placed objects fairly arbitrarily, in places where it didn\u0026rsquo;t really affect gameplay, just looked weird,\u0026rdquo; said another participating professional designer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe AI agent embedded in the game design software was trained on implicit feedback from the user. If a user kept the AI\u0026rsquo;s game level additions, the AI received a \u0026ldquo;reward,\u0026rdquo; and if the user removed them a \u0026ldquo;penalty\u0026rdquo; was given to the AI. The AI was not allowed to remove human-generated elements.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne designer said, \u0026ldquo;It was nice to be surprised by the AI partner. It prompted conversation\/discussion in my head.\u0026rdquo; Another said, \u0026ldquo;I was running out of ideas, then prompted the AI for help, and I said, \u0026lsquo;Oh yeah I forgot about these things!\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite mostly positive feedback, not everyone found the tool to be consistently valuable. 
As one participant put it, \u0026ldquo;I could see using this tool as a way to give myself inspiration. But, if I had more specific goals in mind... I would have found it more inhibiting than useful.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGuzdial says MorAI Maker is intended as a design aid, not as a replacement for designers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The AI system is developed in favor of augmenting, not replacing, creative work,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe full research,\u0026nbsp;\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1901.06417.pdf\u0022\u003E\u003Cem\u003EFriend, collaborator, student, manager: How design of an AI-driven game level editor affects creators\u003C\/em\u003E\u003C\/a\u003E, is published in the 2019 Proceedings of the ACM Conference on Human Factors in Computing Systems.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research is based upon work supported by the National Science Foundation under Grant No. IIS-1525967. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"Video Game Developers Use an AI partner In Wildly Different Ways, From Friend to Boss"}],"field_summary":[{"value":"\u003Cp\u003EWill video game developers welcome AI assistance in their workflow? In short, yes, and in wildly different ways, based on research from Georgia Tech published this month.\u0026nbsp;\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Will video game developers welcome AI assistance in their workflow? In short, yes, and in wildly different ways, based on research from Georgia Tech published this month. 
"}],"uid":"27592","created_gmt":"2019-05-16 11:37:38","changed_gmt":"2019-08-12 14:50:52","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-05-16T00:00:00-04:00","iso_date":"2019-05-16T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"621722":{"id":"621722","type":"image","title":"MorAI Maker Game Design Tool","body":null,"created":"1558007459","gmt_created":"2019-05-16 11:50:59","changed":"1558007477","gmt_changed":"2019-05-16 11:51:17","alt":"","file":{"fid":"236823","name":"MorAI Maker creations.png","image_path":"\/sites\/default\/files\/images\/MorAI%20Maker%20creations.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/MorAI%20Maker%20creations.png","mime":"image\/png","size":706743,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/MorAI%20Maker%20creations.png?itok=33_P6TT9"}}},"media_ids":["621722"],"related_links":[{"url":"https:\/\/www.youtube.com\/watch?v=UkMeM5Ty1lA\u0026feature=youtu.be\u0026t=563","title":"VIDEO: Early Interaction with AI Creative Partner"},{"url":"https:\/\/www.spreaker.com\/user\/10751784\/tu-ep6-video-game-devs-react-to-ai","title":"Tech Unbound Podcast EP6: Video Game Developers React in Wildly Different Ways to AI-Enabled Software"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch Communications Manager\u003Cbr \/\u003E\r\n\u003Cem\u003ECollege of Computing and GVU 
Center\u003C\/em\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"624291":{"#nid":"624291","#data":{"type":"news","title":"AI \u0027Performers\u0027 Take Center Stage and Get Creative with People in Public Spaces","body":[{"value":"\u003Cp\u003EResearchers at Georgia Tech are seeking to improve \u0026ldquo;artificial intelligence literacy\u0026rdquo; and give people opportunities to engage directly with AI systems in order to understand the potential and capabilities of the technology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAI-assisted tech is increasingly common, but actions by these autonomous programs are often hard to spot in people\u0026rsquo;s daily use of devices and online services.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s Expressive Machinery Lab has developed exhibitions where the AI agents are front-and-center and people are able to create with them in public spaces. These AIs have included a dance partner, visual storyteller, music maker, and comedic improv performer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There are common misconceptions about what AI is, what it is capable of, and how it works,\u0026rdquo; said \u003Cstrong\u003EBrian Magerko\u003C\/strong\u003E, professor of digital media and director of the Expressive Machinery Lab. \u0026ldquo;AI systems in public spaces that can engage as active participants in co-creative activities have the potential to serve as avenues for AI literacy. We believe this work pushes these efforts forward considerably.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe exhibitions\u0026nbsp;involving live interactions between people and AIs \u0026ndash; what the researchers call co-creative experiences \u0026ndash; have taken place across the country since 2013 at academic conferences, art festivals, museums, and other venues. 
\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe multi-year endeavor has resulted in a design blueprint developed by the researchers that shows how to build AI experiences for public spaces where audiences or performers can create with an AI partner.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Museums and other public spaces can serve as alternative venues for AI literacy initiatives, complementing formal education and broadening access to opportunities to interact with and learn about AI by both adults and children who may not have AI devices in their homes or schools,\u0026rdquo; said \u003Cstrong\u003EDuri Long\u003C\/strong\u003E, human-centered computing Ph.D. student at Georgia Tech and a researcher involved in the work.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers encountered challenges unique to making \u0026ldquo;creative AIs,\u0026rdquo; such as building systems that engage people with different tastes, perform over sustained periods of time, and adapt to unpredictable human behavior.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor example, the AI dance partner, known as LuminAI and the oldest of the group, doesn\u0026rsquo;t have fingers, so any naughty hand gestures aren\u0026rsquo;t processed in the AI\u0026rsquo;s dance routine.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Our AI agents are unlike many other AIs, which usually have a specific task to accomplish,\u0026rdquo; Long said. \u0026ldquo;Our work involves open-ended co-creative AI installations where there is not a single clear goal or other reward function to optimize the AI\u0026rsquo;s behavior. Our AIs are meant to create or collaborate with a human counterpart, and that looks different every time.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile AIs in general often have large databases of sensor data (images, temperature readings, etc.) 
to improve their understanding of the world, in creative areas such as dance, theater, and other performing arts there is limited data from which AIs can pull.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers overcame this in part by having their AIs learn from human partners in real-time and decide what might be a suitable action. For professional performers, who want a greater degree of control, they could perhaps take turns with the AI partner to have a more structured performance. Conversely, an AI as part of a museum exhibit might guide participants on how to start an activity in order to engage people early on. \u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESocial interaction was also important to consider and, counter to some technology trends, the researchers discovered that human-to-human interaction could increase as a result of AI involvement.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELuminAI, the dancing AI, prompted a couple to do the salsa, two friends to start a synchronized dance routine, and a group of teenagers to perform in a dance circle.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe comedic AI in the roster, called Robot Improv Circus, allows an audience to watch someone interacting in VR with the AI agent and provide feedback to the person by using voice prompts and gestures to trigger in-game reward systems. This led to several groups of friends encouraging each other to try different actions with the comedic AI.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research was published in the Proceedings of the Creativity \u0026amp; Cognition Conference 2019. 
The paper \u003Cem\u003EDesigning Co-Creative AI for Public Spaces\u003C\/em\u003E was co-authored by Duri Long, Mikhail Jacob, and Brian Magerko.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech\u2019s Expressive Machinery Lab has developed exhibitions where the AI agents are front-and-center and people are able to create with them. These AIs have included a dance partner, visual storyteller, music maker, and improv comedian."}],"uid":"27592","created_gmt":"2019-08-09 17:13:35","changed_gmt":"2019-08-09 17:23:15","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-09T00:00:00-04:00","iso_date":"2019-08-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624288":{"id":"624288","type":"image","title":"AI Performers","body":null,"created":"1565370405","gmt_created":"2019-08-09 17:06:45","changed":"1565370439","gmt_changed":"2019-08-09 17:07:19","alt":"","file":{"fid":"237732","name":"Expressive Machinery Lab AIs.png","image_path":"\/sites\/default\/files\/images\/Expressive%20Machinery%20Lab%20AIs_0.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Expressive%20Machinery%20Lab%20AIs_0.png","mime":"image\/png","size":537485,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Expressive%20Machinery%20Lab%20AIs_0.png?itok=9glDJfQR"}},"624289":{"id":"624289","type":"image","title":"Duri Long","body":null,"created":"1565370460","gmt_created":"2019-08-09 17:07:40","changed":"1565370460","gmt_changed":"2019-08-09 17:07:40","alt":"","file":{"fid":"237733","name":"Duri 
Long.png","image_path":"\/sites\/default\/files\/images\/Duri%20Long.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Duri%20Long.png","mime":"image\/png","size":68901,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Duri%20Long.png?itok=sq3wsG9G"}},"624287":{"id":"624287","type":"image","title":"Brian Magerko","body":null,"created":"1565370308","gmt_created":"2019-08-09 17:05:08","changed":"1565370308","gmt_changed":"2019-08-09 17:05:08","alt":"","file":{"fid":"237731","name":"Brian Magerko.png","image_path":"\/sites\/default\/files\/images\/Brian%20Magerko.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Brian%20Magerko.png","mime":"image\/png","size":84568,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Brian%20Magerko.png?itok=DfGeRTdl"}}},"media_ids":["624288","624289","624287"],"related_links":[{"url":"https:\/\/www.youtube.com\/watch?v=K1juBtnJjTk\u0026list=PLqbYO_bYE2ClHihmAEMrP2FtqE6qpXnSF","title":"AI Dance Partner "}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"143","name":"Digital Media and Entertainment"},{"id":"148","name":"Music and Music Technology"},{"id":"151","name":"Policy, Social Sciences, and Liberal Arts"}],"keywords":[],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr \/\u003E\r\nResearch 
Communications Manager\u003Cbr \/\u003E\r\nGVU Center and College of Computing\u003Cbr \/\u003E\r\n678.231.0787\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"624184":{"#nid":"624184","#data":{"type":"news","title":"Argoverse Gives Researchers Access to New Datasets for Autonomous Vehicles","body":[{"value":"\u003Cp\u003EDeveloping autonomous vehicles has long been a hot topic in pop culture and the tech community, but the material that\u0026rsquo;s needed to further academic research \u0026mdash; data from autonomous vehicle sensors and other telemetry \u0026mdash; is usually kept under lock and key. Researchers and engineers at \u003Ca href=\u0022https:\/\/www.argo.ai\/\u0022\u003EArgo AI\u003C\/a\u003E and the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EGeorgia Institute of Technology\u003C\/a\u003E recently challenged that by releasing \u003Ca href=\u0022https:\/\/www.argoverse.org\/\u0022\u003EArgoverse\u003C\/a\u003E, the first public autonomous vehicle dataset to include high-definition (HD) maps.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EArgoverse\u0026rsquo;s HD maps contain details accurate to within a few centimeters. These maps help autonomous vehicles better understand the rules of the road through geometric and semantic metadata such as where a driver should stop for an intersection, what the travel direction is for a particular lane, and what turns are available in each lane, if any. And when it comes to research, the maps can be used to develop more accurate forecasting models by painting a broader and more accurate picture of road infrastructure and traffic flow.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn addition to HD maps, Argoverse contains two datasets that allow researchers to train and benchmark 3D object tracking and forecasting methods. 
When applied to the autonomy stack, these methods play a critical role in enabling autonomous vehicles to identify objects on the road -- such as other cars, bicyclists and pedestrians -- track them over time, and forecast their behavior seconds into the future.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHaving access to high-quality maps and curated data collections is critical to furthering autonomous vehicle research. Argoverse comes at a time when many experts and academics in the field can benefit from materials that take tremendous resources and capital to produce. Building a single autonomous vehicle could cost upwards of a few hundred thousand dollars, and that has to be done before putting the tools in place to build a map.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It could be seen as a competitive disadvantage for a company to release data like this, but over the past few years the industry has started to realize the benefits of engaging the academic community,\u0026rdquo; said \u003Cstrong\u003EJames Hays\u003C\/strong\u003E, an associate professor in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E at Georgia Tech and a principal scientist at Argo AI. \u0026ldquo;Creating autonomous vehicles is a big challenge that combines so many aspects of technology. By putting out this dataset, Argo is providing material for others to discover clever ways to improve self-driving capabilities.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EArgo AI said in a statement, \u0026ldquo;For our team at Argo, releasing this data collection is about giving academic communities access to the materials they need. 
We\u0026rsquo;re excited to not only support cutting edge developments in computer vision and machine learning but also to support the next generation of engineers and roboticists who are preparing for jobs at self-driving technology companies, Argo AI included.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EInspired by the \u003Ca href=\u0022http:\/\/www.cvlibs.net\/datasets\/kitti\/\u0022\u003EKITTI dataset\u003C\/a\u003E, Argoverse includes one dataset with 3D tracking annotations for 113 scenes and one dataset with 327,793 interesting vehicle trajectories extracted from over 1,000 driving hours. The Argoverse data collection also includes an API to connect sensor data with the HD map representation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe HD map has three layers: One layer encodes the ground height at any location, while another layer indicates the drivable area. The most complex layer encodes the geometry and connectivity of individual lanes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe data was collected in Miami, Fla. and Pittsburgh, Penn. \u0026ndash; covering 180 linear miles of the two distinct urban cities that each possess unique local driving habits and challenges.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETwo of the paper\u0026rsquo;s lead authors, \u003Cstrong\u003EJohn Lambert \u003C\/strong\u003Eand \u003Cstrong\u003EPatsorn Sangkloy, \u003C\/strong\u003Eare Ph.D. 
students in Hays\u0026rsquo;s lab at Georgia Tech and presented the paper \u003Ca href=\u0022http:\/\/openaccess.thecvf.com\/content_CVPR_2019\/papers\/Chang_Argoverse_3D_Tracking_and_Forecasting_With_Rich_Maps_CVPR_2019_paper.pdf\u0022\u003E\u003Cem\u003EArgoverse: 3D Tracking and Forecasting with Rich Maps\u003C\/em\u003E\u003C\/a\u003E at the \u003Ca href=\u0022http:\/\/cvpr2019.thecvf.com\/\u0022\u003E2019 Computer Vision and Pattern Recognition (CVPR) conference\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Argoverse is the first public autonomous vehicle dataset to include high-definition (HD) maps. "}],"uid":"34773","created_gmt":"2019-08-08 17:22:55","changed_gmt":"2019-08-08 17:26:03","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-08-08T00:00:00-04:00","iso_date":"2019-08-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"624181":{"id":"624181","type":"image","title":"Argoverse is the first public autonomous vehicle dataset to include high-definition (HD) maps. ","body":null,"created":"1565284526","gmt_created":"2019-08-08 17:15:26","changed":"1565284526","gmt_changed":"2019-08-08 17:15:26","alt":"Argoverse","file":{"fid":"237711","name":"map_ground_height.jpg","image_path":"\/sites\/default\/files\/images\/map_ground_height.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/map_ground_height.jpg","mime":"image\/jpeg","size":77030,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/map_ground_height.jpg?itok=V2yMtjY2"}},"624186":{"id":"624186","type":"image","title":"Patsorn Sangkloy answers questions during a poster session at ICML shortly after Argoverse was announced during the oral presentation. 
","body":null,"created":"1565285076","gmt_created":"2019-08-08 17:24:36","changed":"1565285134","gmt_changed":"2019-08-08 17:25:34","alt":"","file":{"fid":"237717","name":"-6208717384310888339_IMG_3807-2.jpg","image_path":"\/sites\/default\/files\/images\/-6208717384310888339_IMG_3807-2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/-6208717384310888339_IMG_3807-2.jpg","mime":"image\/jpeg","size":1141608,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/-6208717384310888339_IMG_3807-2.jpg?itok=Qj1JWLHk"}}},"media_ids":["624181","624186"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["allie.mcfadden@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"623821":{"#nid":"623821","#data":{"type":"news","title":"Georgia Tech Faculty, Students, and Alumni Take Part in 41st Meeting of the Cognitive Science Society","body":[{"value":"\u003Cp\u003EMembers of the Georgia Tech research community were present last week at the \u003Ca href=\u0022https:\/\/cognitivesciencesociety.org\/cogsci-2019\/\u0022\u003E2019 Annual Meeting of the Cognitive Science Society\u003C\/a\u003E in Montreal, Canada. 
This year, the conference highlighted research on the theme \u003Cem\u003ECreativity+Cognition+Computation\u003C\/em\u003E, as well as the full breadth of research topics offered by the society\u0026rsquo;s membership.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMany Georgia Tech faculty, students, and alumni participated among the leadership for the conference.\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EProfessor \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E served as the conference\u0026rsquo;s co-chair;\u003C\/li\u003E\r\n\t\u003Cli\u003EProfessor \u003Cstrong\u003EKeith McGreggor\u003C\/strong\u003E was the sponsorship chair;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EWendy Newstetter\u003C\/strong\u003E of the \u003Ca href=\u0022http:\/\/www.coe.gatech.edu\u0022\u003ECollege of Engineering\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/c21u.gatech.edu\/\u0022\u003ECenter for 21st Century Universities\u003C\/a\u003E served on the awards committee;\u003C\/li\u003E\r\n\t\u003Cli\u003EGeorgia Tech alum \u003Cstrong\u003EJim Davies\u003C\/strong\u003E was co-chair for publication-based talks;\u003C\/li\u003E\r\n\t\u003Cli\u003EGeorgia Tech alum \u003Cstrong\u003EMaithilee Kunda\u003C\/strong\u003E was co-chair for member abstracts;\u003C\/li\u003E\r\n\t\u003Cli\u003EGeorgia Tech alum \u003Cstrong\u003ESwaroop Vattam\u003C\/strong\u003E served on the workshops and tutorials committee.\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E adjunct professors \u003Cstrong\u003EBrian Magerko\u003C\/strong\u003E and \u003Cstrong\u003EGil Weinberg\u003C\/strong\u003E, primarily of the \u003Ca href=\u0022https:\/\/www.iac.gatech.edu\/\u0022\u003EIvan Allen College of Liberal Arts\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/music.gatech.edu\/\u0022\u003ESchool of Music\u003C\/a\u003E, respectively, were also part of a panel on Creativity in the 
Arts.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. students \u003Cstrong\u003ESungeun An\u003C\/strong\u003E and \u003Cstrong\u003EMarissa Gonzales\u003C\/strong\u003E presented poster papers at the conference. An presented \u003Cem\u003ELearning by Doing: Supporting Experimentation in Inquiry-Driven Modeling\u003C\/em\u003E (Sungeun An, \u003Cstrong\u003ERobert Bates\u003C\/strong\u003E, \u003Cstrong\u003EJennifer Hammock\u003C\/strong\u003E, \u003Cstrong\u003ESpencer Rugaber\u003C\/strong\u003E, \u003Cstrong\u003EEmily Weigel\u003C\/strong\u003E, Ashok Goel), and Gonzales presented \u003Cem\u003EWhy are Some Online Education Programs Successful: Student Cognition and Success\u003C\/em\u003E (Marissa Gonzales, Ashok Goel).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information about this year\u0026rsquo;s conference and to stay up-to-date on news about future conferences, visit \u003Ca href=\u0022https:\/\/cognitivesciencesociety.org\/\u0022\u003Ehttps:\/\/cognitivesciencesociety.org\/\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"This year, the conference highlighted research on the theme Creativity+Cognition+Computation."}],"uid":"33939","created_gmt":"2019-07-30 16:34:53","changed_gmt":"2019-07-30 16:34:53","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-07-30T00:00:00-04:00","iso_date":"2019-07-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"623820":{"id":"623820","type":"image","title":"CogSci 2019","body":null,"created":"1564504438","gmt_created":"2019-07-30 16:33:58","changed":"1564504438","gmt_changed":"2019-07-30 16:33:58","alt":"CogSci 2019 
banner","file":{"fid":"237591","name":"MontrealSideBanner-sm.jpg","image_path":"\/sites\/default\/files\/images\/MontrealSideBanner-sm.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/MontrealSideBanner-sm.jpg","mime":"image\/jpeg","size":191308,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/MontrealSideBanner-sm.jpg?itok=p_CgqQd5"}}},"media_ids":["623820"],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"623681":{"#nid":"623681","#data":{"type":"news","title":"OMSCS Dominates at Learning @ Scale","body":[{"value":"\u003Cp\u003EThe Online Master of Science in Computer Science (OMSCS) is leading the future of higher education online. The program\u0026rsquo;s prominence was evident at this June\u0026rsquo;s \u003Ca href=\u0022https:\/\/learningatscale.acm.org\/las2019\/\u0022\u003ELearning @ Scale\u003C\/a\u003E (L@S), an annual Association for Computing Machinery conference focusing on the digital learning environment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOMSCS had influence in all areas of the conference, from leadership to research. 
OMSCS Associate Director of Student Experience \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/david-joyner\u0022\u003E\u003Cstrong\u003EDavid Joyner\u003C\/strong\u003E\u003C\/a\u003E served as general chair for this year\u0026rsquo;s conference. Yet he wasn\u0026rsquo;t the only OMSCS connection in the conference\u0026rsquo;s leadership. OMSCS alumnus and current teaching assistant (TA) \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/tonymason2\/?originalSubdomain=ca\u0022\u003E\u003Cstrong\u003ETony Mason\u003C\/strong\u003E\u003C\/a\u003E was registration chair, and OMSCS student and TA \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/mrswenson\/\u0022\u003E\u003Cstrong\u003EMichael Swenson\u003C\/strong\u003E\u003C\/a\u003E served as communications chair.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOMSCS is a great fit for L@S, according to Joyner.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Learning @ Scale puts an emphasis not just on using technology in the act of teaching, but also on using technology to scale the overall enterprise of learning,\u0026rdquo; Joyner said. \u0026ldquo;So much of scaling OMSCS has been not scaling instruction itself, but scaling the things that have to happen for instruction to take place, like admissions and advising. Learning @ Scale has been a great place to explore the problem holistically.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EJoyner and John P. Imlay Jr. Dean of Computing \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/fac\/Charles.Isbell\/\u0022\u003E\u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E\u003C\/a\u003E presented \u003Cem\u003EMaster\u0026rsquo;s at Scale: Five Years in a Scalable Online Graduate Degree. 
\u003C\/em\u003EThe paper looks at the program from a bird\u0026rsquo;s eye view, showing how the degree has diversified computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;My favorite trend is that our fraction of women in the program has doubled since its inception, and our fraction of underrepresented minorities has cruised at double the on-campus rate,\u0026rdquo; Joyner said. \u0026ldquo;Online education has a mixed track record with underrepresented groups, but we\u0026#39;re seeing evidence that it can be a powerful positive force.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGT Computing researchers also presented four posters on in-progress research:\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003ESynchronous at Scale: Investigation and Implementation of a Semi-Synchronous Online Lecture Platform\u003C\/em\u003E by OMSCS alumna \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/kutnick\/\u0022\u003E\u003Cstrong\u003EDenise Kutnick\u003C\/strong\u003E\u003C\/a\u003E and David Joyner\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EPeer Advising at Scale: Content and Context of a Learner-Owned Course Evaluation System \u003C\/em\u003Eby master\u0026rsquo;s student \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/alexsduncan\/\u0022\u003E\u003Cstrong\u003EAlex Duncan\u003C\/strong\u003E\u003C\/a\u003E and David Joyner\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EPARQR: Augmenting the Piazza Online Forum to Better Support Degree Seeking Online Masters Students\u003C\/em\u003E by master\u0026rsquo;s student \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/noah-bilgrien-44909b79\/\u0022\u003E\u003Cstrong\u003ENoah Bilgrien\u003C\/strong\u003E\u003C\/a\u003E; alumni \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/roy-finkelberg\/\u0022\u003E\u003Cstrong\u003ERoy Finkelberg\u003C\/strong\u003E\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/ctailor\/\u0022\u003E\u003Cstrong\u003EChirag 
Tailor\u003C\/strong\u003E\u003C\/a\u003E; Ph.D. student \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/india-irish-71024888\/\u0022\u003E\u003Cstrong\u003EIndia Irish\u003C\/strong\u003E\u003C\/a\u003E; master\u0026rsquo;s students \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/girish-narayanan-murali\/\u0022\u003E\u003Cstrong\u003EGirish Murali\u003C\/strong\u003E\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/mangal-abhishek\/\u0022\u003E\u003Cstrong\u003EAbhishek Mangal\u003C\/strong\u003E\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/niklas-gustafsson-68a60bb\/\u0022\u003E\u003Cstrong\u003ENiklas Gustafsson\u003C\/strong\u003E\u003C\/a\u003E, and \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/sumedha-raman\/\u0022\u003E\u003Cstrong\u003ESumedha Raman\u003C\/strong\u003E\u003C\/a\u003E; and \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Professors \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/home\/thad\/\u0022\u003E\u003Cstrong\u003EThad Starner\u003C\/strong\u003E\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/rosa-arriaga\u0022\u003E\u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EJack Watson: Addressing Contract Cheating at Scale in Online Computer Science Education\u003C\/em\u003E by OMSCS students \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/rockograziano\/\u0022\u003E\u003Cstrong\u003ERocko Graziano\u003C\/strong\u003E\u003C\/a\u003E and \u003Cstrong\u003EDavid Benton\u003C\/strong\u003E; master\u0026rsquo;s students \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/sarthakwahal\/\u0022\u003E\u003Cstrong\u003ESarthak Wahal\u003C\/strong\u003E\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/www.xueqiuyue.com\/\u0022\u003E\u003Cstrong\u003EQiuyue Xue\u003C\/strong\u003E\u003C\/a\u003E, and \u003Cstrong\u003EP. 
Tim Miller\u003C\/strong\u003E; OMSCS alumni \u003Cstrong\u003ENick Larsen\u003C\/strong\u003E and \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/diego-vacanti\/\u0022\u003E\u003Cstrong\u003EDiego Vacanti\u003C\/strong\u003E\u003C\/a\u003E; \u003Cstrong\u003EPepper Miller\u003C\/strong\u003E; master\u0026rsquo;s students \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/khushhall\/\u0022\u003E\u003Cstrong\u003EKhushhall Chandra Mahajan\u003C\/strong\u003E\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/deepak-s\/\u0022\u003E\u003Cstrong\u003EDeepak Srikanth\u003C\/strong\u003E\u003C\/a\u003E; and Thad Starner\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENext year L@S will be held in Atlanta and chaired by Joyner.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"OMSCS\u0027s prominence was evident at this June\u2019s Learning @ Scale (L@S), an annual Association of Computer Machinery conference focusing on the digital learning environment."}],"uid":"34541","created_gmt":"2019-07-25 17:49:53","changed_gmt":"2019-07-26 14:07:22","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-07-25T00:00:00-04:00","iso_date":"2019-07-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"623683":{"id":"623683","type":"image","title":"L@S","body":null,"created":"1564077451","gmt_created":"2019-07-25 17:57:31","changed":"1564077451","gmt_changed":"2019-07-25 17:57:31","alt":"Group of researchers at Learning @ 
Scale","file":{"fid":"237542","name":"IMG_20190625_132418.jpg","image_path":"\/sites\/default\/files\/images\/IMG_20190625_132418.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_20190625_132418.jpg","mime":"image\/jpeg","size":757454,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_20190625_132418.jpg?itok=foWbOcMJ"}}},"media_ids":["623683"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone, Communications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:tess.malone@cc.gatech.edu\u0022\u003Etess.malone@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["tess.malone@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"623044":{"#nid":"623044","#data":{"type":"news","title":"Robot Able to Instantly Identify Household Materials Using Near-Infrared Light ","body":[{"value":"\u003Cp\u003ERobots aren\u0026rsquo;t yet household fixtures, but Georgia Tech researchers have already come up with a way domestic bots might recognize materials around the home.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing near-infrared light, similar to what\u0026rsquo;s used in TV remotes, the robot can identify common materials used in household objects to better inform its actions. 
This might allow intelligent machines to understand, for example, the right bowl (paper versus metal) to put in a microwave or how hard to grasp a cup made of glass versus plastic.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo classify materials, the researchers first determined hundreds of light wavelengths reflected from five common materials \u0026ndash; paper, wood, plastic, metal, and fabric. With this information, they trained a neural network on 10,000 examples in order to create a machine-learning (ML) model that could be used by a robot to quickly identify a material.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAccording to the researchers, a robot using their new ML model can identify materials without it first having to touch an object, a useful function for handling potentially fragile items. To do so, the robot holds a small spectrometer near an object to get a quick light measurement, which is then processed to identify the material.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Robots currently use conventional cameras or haptic sensing - the sense of touch - to estimate a material type,\u0026rdquo; said \u003Cstrong\u003EZackory Erickson\u003C\/strong\u003E, the first author on the research paper detailing the new work and Georgia Tech robotics Ph.D. student.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is the first time that we know of that spectroscopy and machine learning have been used for material classification in robotics research, and our accuracy is on par with existing methods.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe team\u0026rsquo;s new ML model yielded the best results using spectrometer measurements from near-infrared light. 
In fact, the accuracy was 99.9 percent with the full dataset of 10,000 measurements from 50 objects that the model had been trained on.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;While human eyes typically use three color receptors to see the world, our robot can be thought of as using hundreds of color receptors to recognize materials,\u0026rdquo; said \u003Cstrong\u003ECharlie Kemp\u003C\/strong\u003E, associate professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University and part of the research team. \u0026ldquo;Instead of a conventional color camera that measures red, green, and blue light, our robot uses a spectrometer that measures light at hundreds of different wavelengths, some outside of the range of human vision.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo see how results would compare using only a single light reading from each object, the team also trained the model on just 50 measurements, one from each object. Interestingly, accuracy in identifying the correct material only dropped to 95 percent. When using a spectrometer reading from objects the machine learning model had never seen, the robot still achieved an 81.6 percent success rate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Spectroscopy presents a reliable and effective way for robots to estimate materials of household objects,\u0026rdquo; Erickson said. 
\u0026ldquo;We\u0026rsquo;ve demonstrated how a robot can use near-infrared spectroscopy to infer the materials of everyday objects like cups, bowls, and garments.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research is published in the Proceedings of the 2019 International Conference on Robotics and Automation (ICRA) in the paper titled \u003Cem\u003EClassification of Household Materials via Spectroscopy\u003C\/em\u003E co-authored by \u003Ca href=\u0022http:\/\/zackory.com\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EZackory Erickson\u003C\/strong\u003E\u003C\/a\u003E, \u003Cstrong\u003ENathan Luskey\u003C\/strong\u003E, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~chernova\/\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E\u003C\/a\u003E, and \u003Ca href=\u0022http:\/\/charliekemp.com\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003ECharlie Kemp\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more Georgia Tech research published at ICRA, as well as the entire conference program,\u0026nbsp;explore this \u003Ca href=\u0022https:\/\/public.tableau.com\/shared\/J22YXRJXM?:display_count=yes\u0026amp;:origin=viz_share_link\u0026amp;:showVizHome=no\u0022 target=\u0022_blank\u0022\u003Einteractive visualization\u003C\/a\u003E\u0026nbsp;from the GVU Center at Georgia Tech.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"No Contact is Required with Objects by Using Inexpensive, Handheld \u0027Light-Reading\u0027 Device"}],"field_summary":"","field_summary_sentence":[{"value":"Robots aren\u2019t yet household fixtures, but Georgia Tech researchers have already come up with a way domestic bots might recognize materials around the home."}],"uid":"27592","created_gmt":"2019-07-08 17:32:11","changed_gmt":"2019-07-17 20:55:36","author":"Joshua 
Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-07-08T00:00:00-04:00","iso_date":"2019-07-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"623045":{"id":"623045","type":"image","title":"Robot Classifies Materials of Household Objects Using \u0027Light-Reading\u0027 Device","body":null,"created":"1562609057","gmt_created":"2019-07-08 18:04:17","changed":"1562609089","gmt_changed":"2019-07-08 18:04:49","alt":"","file":{"fid":"237265","name":"Robot classifies materials of household objects.png","image_path":"\/sites\/default\/files\/images\/Robot%20classifies%20materials%20of%20household%20objects.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Robot%20classifies%20materials%20of%20household%20objects.png","mime":"image\/png","size":1829642,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Robot%20classifies%20materials%20of%20household%20objects.png?itok=9t0CWmic"}}},"media_ids":["623045"],"related_links":[{"url":"https:\/\/www.youtube.com\/watch?v=fBv_xEai2AU","title":"VIDEO: Watch how GT researchers are bringing domestic bots one step closer to reality"},{"url":"https:\/\/www.spreaker.com\/user\/10751784\/tu-ep5-robot-instantly-identifies-materials","title":"Tech Unbound Podcast EP5: Robot Able to Instantly Identify Household Materials Without Touching Objects"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"667","name":"robotics"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003Cbr 
\/\u003E\r\nResearch Communications Manager, GVU Center\u003Cbr \/\u003E\r\n678.231.0787\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"623011":{"#nid":"623011","#data":{"type":"news","title":"IC\u0027s Dhruv Batra Named PECASE Winner, One of Three at Georgia Tech","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Assistant Professor \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E was awarded the prestigious Presidential Early Career Award for Scientists and Engineers (PECASE) on Wednesday in an announcement by President Donald Trump. The PECASE is the highest honor bestowed by the United States government to outstanding scientists and engineers beginning independent research careers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBatra is one of three Georgia Tech faculty members this year to earn the award, giving the Institute a total of 18 in its history. The other two awardees in this class are Associate Professor Mark Davenport of the School of Electrical and Computer Engineering and Assistant Professor Matthew McDowell of the School of Materials Science and Engineering.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlong with the Department of Defense, the White House Office of Science and Technology Policy will provide $1 million over the course of five years to support Batra\u0026rsquo;s research to make artificial intelligence (AI) systems more transparent, explainable, and trustworthy. 
The award comes as a result of Batra\u0026rsquo;s selection for a similar early-career award by the Army Research Office Young Investigator Program in 2014.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research Batra\u0026rsquo;s lab will pursue with the funding addresses a fundamental challenge in the development of AI systems \u0026ndash; their \u0026ldquo;black-box\u0026rdquo; nature, the consequent difficulty humans face in identifying why or how AI systems fail, and how to improve upon those technologies. When a self-driving car from a major tech company, for example, suffered its first fatality in 2015, legal and regulatory agencies understandably questioned what went wrong. The challenge at the time was providing a sufficient answer to that question.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Your response can\u0026rsquo;t just be, \u0026lsquo;Well, there was this machine learning box in there, and it just didn\u0026rsquo;t detect the car. We don\u0026rsquo;t know why,\u0026rsquo;\u0026rdquo; Batra said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBatra\u0026rsquo;s research aims to create AI systems that can more readily explain what they do and why. This could come in the form of natural language or visual explanations \u0026ndash; natural language processing and computer vision are both central areas of focus in Batra\u0026rsquo;s lab. The machine could, for example, identify regions in an image that provide support for its predictions, potentially assisting a user\u0026rsquo;s understanding of what the machine can or cannot do.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s an important area of study for a few reasons, Batra said. He classifies AI technology into three levels of maturity:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003ELevel 1 is technology that is in its infancy. It is not near deployment to everyday users, and the consumers of the technology are researchers. 
The goal for transparency and explanation is to help researchers and developers to understand the failure modes and current limitations, and deduce how to improve the technology \u0026ndash; \u0026ldquo;actionable insight,\u0026rdquo; as Batra called it.\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003ELevel 2 is when things are working to a degree, enough so that the technology can and has been deployed.\u003Cbr \/\u003E\r\n\t\u003Cbr \/\u003E\r\n\t\u0026ldquo;The technology may be mature in a narrow range, and you can ship the product,\u0026rdquo; Batra said. \u0026ldquo;Like face detection or fingerprint technology. It\u0026rsquo;s built into products and being used at agencies, airports, or other places.\u0026rdquo;\u003Cbr \/\u003E\r\n\t\u003Cbr \/\u003E\r\n\tIn such cases, you want explanations and interpretability that helps build appropriate trust with users. Users can understand when the system reliably works and when it might not work \u0026ndash; face detection in bad lighting, for example \u0026ndash; and make efforts to use in a more appropriate setting.\u003Cbr \/\u003E\r\n\t\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003ELevel 3 is typically a fairly narrow category where the AI is better \u0026ndash; sometimes significantly so \u0026ndash; than the human. Batra used chess-playing and Go-playing bots as an example. The best chess-playing bots convincingly outperform the best humans and reliably hand a resounding defeat to the average human player.\u003Cbr \/\u003E\r\n\t\u003Cbr \/\u003E\r\n\t\u0026ldquo;We already know bots play much better than humans,\u0026rdquo; he said. \u0026ldquo;In such cases, you don\u0026rsquo;t need to improve the machine and you already trust its skill level. 
You want the machine to give you explanations not so that you can improve the AI, but so that you can improve yourself.\u0026rdquo;\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EBatra envisions scenarios where the techniques his lab develops could assist at all three levels, but the experiments will take place between Levels 1 and 2. They will work in Visual Question Answering, which are agents that answer natural language questions about visual content, and other areas of maturity that may reach the product level in five or more years.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBatra has served as an assistant professor at Georgia Tech since Fall 2016. \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~dbatra\/\u0022\u003EVisit his website for more information about his research.\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The PECASE is the highest honor bestowed by the United States government to outstanding scientists and engineers beginning independent research careers."}],"uid":"33939","created_gmt":"2019-07-05 16:18:17","changed_gmt":"2019-07-05 16:18:17","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-07-05T00:00:00-04:00","iso_date":"2019-07-05T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"586461":{"id":"586461","type":"image","title":"Dhruv Batra","body":null,"created":"1485377710","gmt_created":"2017-01-25 20:55:10","changed":"1485377710","gmt_changed":"2017-01-25 
20:55:10","alt":"","file":{"fid":"223509","name":"DhruvBatra.jpg","image_path":"\/sites\/default\/files\/images\/DhruvBatra.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/DhruvBatra.jpg","mime":"image\/jpeg","size":82240,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/DhruvBatra.jpg?itok=D762Jyi-"}}},"media_ids":["586461"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181639","name":"cc-research; ic-ai-ml"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"622856":{"#nid":"622856","#data":{"type":"news","title":"Isbell Begins Term as Dean of Computing","body":[{"value":"\u003Cp\u003EWhen \u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E applied to college, he applied to only one: the Georgia Institute of Technology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I didn\u0026rsquo;t want to go anywhere else,\u0026rdquo; he said. He had grown up in Atlanta, graduating from Mays High School, and he loved the city. 
More than that, he already knew that he wanted to work with computers, and he knew Georgia Tech was one of the best places in the world to do so.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhen he got to campus, he knew right away that he had made a good decision.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I always felt I belonged at Georgia Tech,\u0026rdquo; Isbell said. \u0026ldquo;No, I didn\u0026rsquo;t join a frat, I wasn\u0026rsquo;t part of any of the big clubs,\u0026rdquo; he said. \u0026ldquo;Hey, I went to zero parties. Zero. But I did build friendships. I built connections.\u0026rdquo; He also, in a nice bit of symmetry, served as the undergraduate representative on the committee that hired \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/peter-freeman\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EPeter Freeman\u003C\/strong\u003E\u003C\/a\u003E to be the first dean of the brand new College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EToday, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~isbell\/\u0022 target=\u0022_blank\u0022\u003ECharles Isbell\u003C\/a\u003E becomes the John P. Imlay Jr. Dean of Computing. He is the fourth person to hold the position. His philosophy as dean is built on the foundation he laid long ago as an undergraduate.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;To me, it\u0026rsquo;s all about community,\u0026rdquo; he said. \u0026ldquo;I want people to feel like they belong, and that the community reflects their experiences. I want people to feel that the things they\u0026rsquo;re learning apply to their worlds.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMachines Bringing People Together\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIsbell went to MIT after graduating from Georgia Tech, and after that spent four years working at AT\u0026amp;T Labs. 
During that time, he continued to pursue his interests in computing and human connection.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe first project that earned Isbell a \u0026ldquo;best paper\u0026rdquo; award was his work on Cobot, a software agent whose goal was to become a functioning member of an online social community called LambdaMOO.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I\u0026rsquo;m interested in how humans express themselves in a way that computers can understand \u0026ndash; from a technical, machine learning point of view, that is,\u0026rdquo; Isbell said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe also found new ways to use technology to serve existing real-life communities. At MIT, he built what was most likely the first-ever online Black history database. He ran a website for hip-hop reviews.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo this day, he continues to mix his cultural experience and computing. All of his graduating students pose for photos dressed like members of the \u003Ca href=\u0022https:\/\/en.wikipedia.org\/wiki\/Parliament_(band)\u0022 target=\u0022_blank\u0022\u003Efunk band Parliament\u003C\/a\u003E in a silver top-hat, star-shaped sunglasses, and strings of Mardi Gras beads. The framed and funky photos line the walls of his office.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIsbell says that combining his passions keeps him engaged and that he likes to see others do the same.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you\u0026rsquo;re passionate, people pick up on that passion,\u0026rdquo; he said.\u003C\/p\u003E\r\n\r\n\u003Cblockquote\u003E\r\n\u003Cp\u003E\u0026ldquo;The technology we develop is transformative, and we have to reckon with that. We have to accept our responsibility as leaders and our responsibility to bring other people along for this ride.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECharles Isbell, John P. Imlay Jr. 
Dean of Computing\u003C\/strong\u003E\u003C\/p\u003E\r\n\u003C\/blockquote\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EGiving Back\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn 2002, Isbell was hired as a junior faculty member in the College of Computing and moved back to Atlanta.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;First thing that happened when I came back, my mother made me a bowl of cheese grits and bacon,\u0026rdquo; he said. \u0026ldquo;I knew I was back home.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAt the Institute, things were more complicated.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was so exciting to be back, but the place was completely different,\u0026rdquo; he said. It was bigger, a stronger program with a ballooning reputation. \u0026ldquo;Still, I always felt I could build something here.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter earning tenure, Isbell dived into administrative work to do exactly that. He was one of the architects of the college\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/academics\/degree-programs\/bachelors\/computer-science\/threads\u0022 target=\u0022_blank\u0022\u003Eaward-winning Threads curriculum\u003C\/a\u003E, and also of its groundbreaking \u003Ca href=\u0022http:\/\/www.omscs.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EOnline Master\u0026rsquo;s of Science in Computer Science\u003C\/a\u003E (OMSCS) program.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I just kept volunteering,\u0026rdquo; he said. \u0026ldquo;Then one day I woke up as dean.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ERedefining the Field\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe didn\u0026rsquo;t, of course. Wake up as dean, that is. Isbell won the job in a \u003Ca href=\u0022https:\/\/b.gatech.edu\/2wLZTP3\u0022 target=\u0022_blank\u0022\u003Egrueling nationwide search\u003C\/a\u003E. 
He is the first internal candidate ever to be named as the dean of the College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnd as someone who has been in or around the college for decades, he has a unique view on its development. When Isbell arrived as an undergraduate, computing was still in its infancy at Georgia Tech \u0026mdash;\u0026nbsp;it wasn\u0026rsquo;t even a college yet.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrough his tenure on the faculty, he has seen the college grow and mature. Now, he says, the college is truly entering adulthood, \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/603980\/college-computing-rises-no-8-us-news-rankings\u0022 target=\u0022_blank\u0022\u003Ea top-10 program\u003C\/a\u003E with responsibilities not only to its faculty, staff, and students but also to the larger world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The technology we develop is transformative, and we have to reckon with that,\u0026rdquo; he said. \u0026ldquo;We have to accept our responsibility as leaders and our responsibility to bring other people along for this ride.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs dean, he intends to build on the hard work of his predecessors in confronting the challenges of a field that is always changing and always short of labor. And as computing metastasizes into other fields \u0026mdash; finance, health, media, politics, art \u0026mdash; he sees social and ethical considerations becoming ever more important.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe good news is that the College of Computing is already addressing these problems, Isbell said. OMSCS has diversified and significantly increased the pipeline of trained talent to industry. 
The college\u0026rsquo;s \u003Ca href=\u0022http:\/\/constellations.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EConstellations Center for Equity in Computing\u003C\/a\u003E is piloting a hybrid classroom-online model that holds the promise of making computer science education available to all children. The college has made ground-breaking commitments not only to teaching ethics to its students, but also to computing research that prioritizes transparency and the public good.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn other words, Isbell wants Georgia Tech to lead a re-thinking of the nature and importance of community in the field of computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It seems increasingly clear that computer scientists need to think more clearly about the impact of their work on society as a whole,\u0026rdquo; Isbell said. \u0026ldquo;That\u0026rsquo;s going to require the involvement of everyone who is affected \u0026mdash; which is to say, everyone.\u0026rdquo;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Charles Isbell begins his service as the John P. Imlay Jr. Dean of Computing on July 1."}],"uid":"32045","created_gmt":"2019-06-28 19:20:12","changed_gmt":"2019-07-01 13:36:25","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-07-01T00:00:00-04:00","iso_date":"2019-07-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622870":{"id":"622870","type":"image","title":"Charles Isbell, John P. Imlay Jr. 
Dean of Computing","body":null,"created":"1561986445","gmt_created":"2019-07-01 13:07:25","changed":"1561986445","gmt_changed":"2019-07-01 13:07:25","alt":"Charles Isbell John P Imlay Jr Dean of Computing","file":{"fid":"237213","name":"Charles Isbell_John P Imlay Jr Dean of Computing_July2019.jpg","image_path":"\/sites\/default\/files\/images\/Charles%20Isbell_John%20P%20Imlay%20Jr%20Dean%20of%20Computing_July2019.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Charles%20Isbell_John%20P%20Imlay%20Jr%20Dean%20of%20Computing_July2019.jpg","mime":"image\/jpeg","size":1278786,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Charles%20Isbell_John%20P%20Imlay%20Jr%20Dean%20of%20Computing_July2019.jpg?itok=1-mm0kB3"}},"622871":{"id":"622871","type":"image","title":"Charles Isbell, John P. Imlay Jr. Dean of Computing_seated","body":null,"created":"1561986721","gmt_created":"2019-07-01 13:12:01","changed":"1561986721","gmt_changed":"2019-07-01 13:12:01","alt":"Charles Isbell John P Imlay Jr Dean of Computing","file":{"fid":"237214","name":"Charles_Isbell_John P Imlay Jr Dean of Computing_informal_July2019.jpg","image_path":"\/sites\/default\/files\/images\/Charles_Isbell_John%20P%20Imlay%20Jr%20Dean%20of%20Computing_informal_July2019.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Charles_Isbell_John%20P%20Imlay%20Jr%20Dean%20of%20Computing_informal_July2019.jpg","mime":"image\/jpeg","size":1010357,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Charles_Isbell_John%20P%20Imlay%20Jr%20Dean%20of%20Computing_informal_July2019.jpg?itok=nNqqumWT"}}},"media_ids":["622870","622871"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"606703","name":"Constellations Center"},{"id":"576491","name":"CRNCH"},{"id":"545781","name":"Institute for Data Engineering and Science"},{"id":"430601","name":"Institute for 
Information Security and Privacy"},{"id":"576481","name":"ML@GT"},{"id":"66442","name":"MS HCI"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"46361","name":"GT computing"},{"id":"10664","name":"charles isbell"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAnn Claycombe, Communications Director\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:ann.claycombe@cc.gatech.edu?subject=Isbell%20Begins%20Term%20as%20Dean%20of%20Computing\u0022\u003Eann.claycombe@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["ann.claycombe@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"622864":{"#nid":"622864","#data":{"type":"news","title":"IC Researchers Earn 2018 IJRR Paper of the Year for Impactful Robotics Research","body":[{"value":"\u003Cp\u003EA paper published in the \u003Cem\u003EI\u003Ca href=\u0022http:\/\/www.ijrr.org\/\u0022\u003Enternational Journal of Robotics Research\u003C\/a\u003E\u003C\/em\u003E (IJRR) by researchers in the \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) was selected as the 2018 IJRR Paper of the Year.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChosen from a shortlist considered by the IJRR Executive Committee, the paper, \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1707.07383\u0022\u003E\u003Cem\u003EContinuous-time Gaussian Process Motion Planning via Probabilistic Inference\u003C\/em\u003E\u003C\/a\u003E, was recognized for its technical rigor, relevance, and potential for impact in the robotics research community. The research comes from IC Ph.D. 
students \u003Cstrong\u003EMustafa Mukadam\u003C\/strong\u003E and \u003Cstrong\u003EJing Dong\u003C\/strong\u003E, master\u0026rsquo;s student \u003Cstrong\u003EXinyan Yan\u003C\/strong\u003E, and advisors Professor \u003Cstrong\u003EFrank Dellaert\u003C\/strong\u003E and Assistant Professor \u003Cstrong\u003EByron Boots\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis paper introduces a novel formulation of motion planning that treats the problem of finding an efficient, feasible path between two points as probabilistic inference with Gaussian Processes. Motion planning is a hard problem, and state-of-the-art sampling-based and trajectory optimization algorithms have well-known drawbacks. The former can effectively find feasible trajectories but often exhibits jerky and redundant motion, and the latter requires a fine approximation of the trajectory to reason about thin obstacles or tight constraints.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn their paper, the team of researchers adopts a continuous-time representation of trajectories, viewing them as functions that map time to robot state. Combining this representation with fast approaches to probabilistic inference, they developed a computationally efficient gradient-based optimization algorithm called a Gaussian Process Motion Planner that can overcome large computational costs associated with fine discretization, while still maintaining smoothness of motion in the result.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWith the award comes a $1,000 prize. Boots attended the \u003Ca href=\u0022http:\/\/www.roboticsconference.org\/\u0022\u003ERobotics: Science and Systems\u003C\/a\u003E (RSS) conference in Freiburg, Germany, this week, where he accepted the award on behalf of his team.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother paper involving Boots was also awarded a Best Student Paper Award at RSS. 
Titled \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1902.08967\u0022\u003E\u003Cem\u003EAn Online Learning Approach to Model Predictive Control\u003C\/em\u003E\u003C\/a\u003E, the paper was written by Robotics Ph.D. students \u003Cstrong\u003ENolan Wagener\u003C\/strong\u003E, \u003Cstrong\u003EChing-An Cheng\u003C\/strong\u003E, and \u003Cstrong\u003EJacob Sacks\u003C\/strong\u003E, along with Boots.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt shows that there exists a close connection between model predictive control (MPC), a popular technique for solving dynamic control tasks, and online learning, an abstract theoretical framework for analyzing online decision making. This new perspective provides a foundation for leveraging powerful online learning algorithms to design MPC algorithms. Toward this end, the researchers propose a generic framework for synthesizing new MPC algorithms called Dynamic Mirror Descent Model Predictive Control.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe framework exposes key design choices that can help practitioners easily develop new control algorithms tailored to the challenges of their specific task. The approach is validated by developing new MPC algorithms that consistently match or outperform the state-of-the-art on several tasks including an aggressive driving problem with the goal of racing an autonomous car around a dirt track under computational resource constraints.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"With the award comes a $1,000 prize. 
Boots attended the Robotics: Science and Systems (RSS) conference in Freiburg, Germany, this week, where he accepted the award on behalf of his team."}],"uid":"33939","created_gmt":"2019-06-28 21:45:13","changed_gmt":"2019-06-28 21:45:13","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-06-28T00:00:00-04:00","iso_date":"2019-06-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622863":{"id":"622863","type":"image","title":"IJRR Paper of the Year","body":null,"created":"1561757769","gmt_created":"2019-06-28 21:36:09","changed":"1561757769","gmt_changed":"2019-06-28 21:36:09","alt":"Byron Boots accepts the IJRR Paper of the Year Award at RSS 2019","file":{"fid":"237211","name":"IJRR Paper of the Year.jpeg","image_path":"\/sites\/default\/files\/images\/IJRR%20Paper%20of%20the%20Year.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IJRR%20Paper%20of%20the%20Year.jpeg","mime":"image\/jpeg","size":214672,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IJRR%20Paper%20of%20the%20Year.jpeg?itok=3Ab2q1qk"}},"622862":{"id":"622862","type":"image","title":"RSS Best Student Paper","body":null,"created":"1561757679","gmt_created":"2019-06-28 21:34:39","changed":"1561757679","gmt_changed":"2019-06-28 21:34:39","alt":"A team of researchers accepts the Best Student Paper award at RSS 2019","file":{"fid":"237210","name":"RSS Best Student 
Paper.jpeg","image_path":"\/sites\/default\/files\/images\/RSS%20Best%20Student%20Paper.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/RSS%20Best%20Student%20Paper.jpeg","mime":"image\/jpeg","size":181785,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/RSS%20Best%20Student%20Paper.jpeg?itok=kKN86uwy"}}},"media_ids":["622863","622862"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/content\/robotics-computational-perception","title":"Robotics and Computational Perception Research at Georgia Tech"}],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181602","name":"ic-robotics"},{"id":"181216","name":"cc-research"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"622859":{"#nid":"622859","#data":{"type":"news","title":"Georgia Tech Team Wins New Fetch Robot at ICRA\u0027s FetchIt! 
Mobile Manipulation Challenge","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~chernova\/\u0022\u003E\u003Cstrong\u003ESonia Chernova\u003C\/strong\u003E\u003C\/a\u003E\u0026rsquo;s \u003Ca href=\u0022http:\/\/www.rail.gatech.edu\/\u0022\u003ERobot Autonomy and Interactive Learning\u003C\/a\u003E (RAIL) lab is adding a new member this summer after a successful foray into the \u003Ca href=\u0022https:\/\/opensource.fetchrobotics.com\/competition\u0022\u003E\u003Cem\u003EFetchIt!\u003C\/em\u003E\u003Cem\u003E Mobile Manipulation Challenge\u003C\/em\u003E\u003C\/a\u003E at the \u003Ca href=\u0022https:\/\/www.icra2019.org\/\u0022\u003EInternational Conference on Robotics and Automation\u003C\/a\u003E (ICRA) last month.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA team of Georgia Tech master\u0026rsquo;s and Ph.D. students, advised by Chernova, won the challenge by successfully assembling three kits with its robot in 39 minutes. It was the only team in the competition to complete the task, with the second-place finisher failing to score a point.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor its victory, the RAIL lab will receive a new mobile manipulation robot from Fetch Robotics, its second. Along with the other robots already in the lab\u0026rsquo;s possession, the newcomer will provide RAIL researchers new opportunities to pursue multi-robot applications. The prize package also includes items from the event\u0026rsquo;s co-sponsors EandM Robotics, Schunk, SICK Sensor Intelligence, and The Construct, to go with the $100,000 robot.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E[VIDEO::https:\/\/youtu.be\/G_ur71h4CNQ]\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is a long-term benefit,\u0026rdquo; said Chernova, an associate professor in the \u003Ca href=\u0022http:\/\/ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E. 
\u0026ldquo;This is one of the most capable mobile manipulation platforms out there, and to now have two of them will enable us to enhance the capabilities of the robot and pursue new lines of research in our lab.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe allure of a new state-of-the-art robot would be enough to entice most teams to take part in the competition, but for Chernova and her participating students it was more about the opportunity to explore specific applications that aligned with their research initiatives, past and present.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe lab has done past work in grasping, semantic reasoning and mapping, and fault diagnosis, the latter of which has become a focus over the past six months. The competition, Ph.D. student \u003Cstrong\u003EDavid Kent\u003C\/strong\u003E said, came at a good time because many of the challenges it presented fall within that domain.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This particular setup was particularly challenging because there was just enough variability where it wasn\u0026rsquo;t going to work every time,\u0026rdquo; he said. \u0026ldquo;There would always be something going wrong, so fault recovery ended up being very central.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo win the competition, not only did Georgia Tech\u0026rsquo;s team have to come in first place, it had to do so by scoring at least 14 points. To put that into context, Georgia Tech was the only team in the competition to finish with any points. Teams scored points by successfully collecting items laid out at different stations to assemble three kits. They were awarded eight points for each completed kit. 
Any kit that was missing a piece, however, resulted in zero points awarded, and any kit with extra pieces would have points deducted.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you drop one screw along the way and you don\u0026rsquo;t notice \u0026ndash; which is actually very easy to do \u0026ndash; you go away with nothing,\u0026rdquo; Chernova said. \u0026ldquo;In the real world, a partial kit is useless.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter reaching 15 points, Georgia Tech elected to complete its third kit without official scoring to ensure it wouldn\u0026rsquo;t drop below the threshold needed to win the robot. Officially the team scored 15, but a completed third kit gave it an unofficial 23 points, after bonuses were added.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It was a lot of fun to be able to work with my lab on a single project and see it come together,\u0026rdquo; said Ph.D. student \u003Cstrong\u003EWeiyu Liu\u003C\/strong\u003E, another member of the team. \u0026ldquo;It was a really great opportunity to try out some of the code we had written and also to see others\u0026rsquo; code and other research projects.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAlready, the team has turned the experience into a submitted paper, which it hopes to see accepted and published in the future. The focus is on mobile manipulation, which is a particularly challenging aspect of robotics because of what Chernova calls \u0026ldquo;an explosion of uncertainty.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Manipulation in many ways is a solved problem,\u0026rdquo; she said. \u0026ldquo;Navigation in many ways is a solved problem. 
When you put those two solved problems together, though \u0026ndash; when you take the wheels and put the arm on it \u0026ndash; it becomes a much more challenging problem, one our research will continue to tackle with the aid of Fetch in the coming years.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMembers of the team included: Chernova, Kent, Liu, \u003Cstrong\u003ESiddhartha Banerjee\u003C\/strong\u003E, \u003Cstrong\u003EAngel Daruna\u003C\/strong\u003E, \u003Cstrong\u003EJonathan Balloch\u003C\/strong\u003E, \u003Cstrong\u003EAbhinav Jain\u003C\/strong\u003E, \u003Cstrong\u003EAkshay Krishnan\u003C\/strong\u003E, \u003Cstrong\u003EMuhammad Asif Rana\u003C\/strong\u003E, \u003Cstrong\u003EHarish Ravichandar\u003C\/strong\u003E, \u003Cstrong\u003EBinit Shah\u003C\/strong\u003E, and \u003Cstrong\u003ENithin Shrivatsav\u003C\/strong\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A team of Georgia Tech master\u2019s and Ph.D. students, advised by Sonia Chernova, won the challenge by successfully assembling three kits with its robot in 39 minutes."}],"uid":"33939","created_gmt":"2019-06-28 19:55:14","changed_gmt":"2019-06-28 19:55:14","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-06-28T00:00:00-04:00","iso_date":"2019-06-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622858":{"id":"622858","type":"image","title":"Georgia Tech FetchIt! 
Win","body":null,"created":"1561750984","gmt_created":"2019-06-28 19:43:04","changed":"1561750984","gmt_changed":"2019-06-28 19:43:04","alt":"The Georgia Tech RAIL lab celebrates a win in the FetchIt Mobile Manipulation Challenge at ICRA","file":{"fid":"237208","name":"Fetch.jpeg","image_path":"\/sites\/default\/files\/images\/Fetch.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Fetch.jpeg","mime":"image\/jpeg","size":296705,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Fetch.jpeg?itok=dXBkErUl"}}},"media_ids":["622858"],"related_links":[{"url":"http:\/\/rail.gatech.edu","title":"Robot Autonomy and Interactive Learning Lab"},{"url":"https:\/\/www.ic.gatech.edu\/content\/robotics-computational-perception","title":"Robotics and Computational Perception Research at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181602","name":"ic-robotics"},{"id":"181216","name":"cc-research"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"622523":{"#nid":"622523","#data":{"type":"news","title":"IC Researchers Awarded Outstanding Study Design Paper Award at ICWSM-19","body":[{"value":"\u003Cp\u003EA team of researchers 
that included individuals from Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E was awarded the Outstanding Study Design Paper award at the \u003Ca href=\u0022https:\/\/www.icwsm.org\/2019\/index.php\u0022\u003EInternational AAAI Conference on Web and Social Media\u003C\/a\u003E (ICWSM 2019) this week in Munich, Germany.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe paper, titled \u003Cem\u003E\u003Ca href=\u0022http:\/\/www.munmund.net\/pubs\/ICWSM19_DrugEffects.pdf\u0022\u003EA Social Media Study on the Effects of Psychiatric Medication Use\u003C\/a\u003E\u003C\/em\u003E, was presented by IC Ph.D. student \u003Cstrong\u003EKoustuv Saha\u003C\/strong\u003E and included fellow IC Ph.D. student \u003Cstrong\u003EBenjamin Sugar\u003C\/strong\u003E and IC Assistant Professor \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E. Collaborators from Microsoft Research, Harvard Medical School, and New York University-Shanghai were also involved with the research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research addresses a challenge in understanding the effects of psychiatric medications during mental health treatment. While clinical trials help evaluate the effects of a medication, generalizing trial results to broader populations is challenging. 
Using a list of common approved and regulated psychiatric medications and a Twitter dataset of 300 million posts from 30,000 individuals, researchers developed machine learning models to first assess effects relating to mood, cognition, depression, anxiety, psychosis, and suicidal ideation and then, based on a score, observe how the use of specific drugs is associated with characteristic changes in an individual\u0026rsquo;s psychopathology.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe goal of this research is a deeper understanding of medication effects and how to situate them alongside treatment outcomes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EICWSM is a forum for researchers from multiple disciplines to come together to share knowledge, discuss ideas, exchange information, and learn about cutting-edge research in diverse fields with the common theme of online social media. This includes social theories, as well as computational algorithms for analyzing social media. In its 13\u003Csup\u003Eth\u003C\/sup\u003E year of existence, the conference has become one of the premier venues for computational social science.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The paper, titled A Social Media Study on the Effects of Psychiatric Medication Use, was presented by IC Ph.D. student Koustuv Saha and included fellow IC Ph.D. 
student Benjamin Sugar and IC Assistant Professor Munmun De Choudhury."}],"uid":"33939","created_gmt":"2019-06-14 19:51:15","changed_gmt":"2019-06-14 19:51:15","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-06-14T00:00:00-04:00","iso_date":"2019-06-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622522":{"id":"622522","type":"image","title":"Koustuv Saha ICWSM","body":null,"created":"1560541641","gmt_created":"2019-06-14 19:47:21","changed":"1560541641","gmt_changed":"2019-06-14 19:47:21","alt":"Koustuv Saha presents a paper at ICWSM","file":{"fid":"237101","name":"Screen Shot 2019-06-14 at 3.46.50 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-06-14%20at%203.46.50%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-06-14%20at%203.46.50%20PM.png","mime":"image\/png","size":769188,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-06-14%20at%203.46.50%20PM.png?itok=adXluiTA"}}},"media_ids":["622522"],"groups":[{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"181216","name":"cc-research"},{"id":"181214","name":"ic-hcc"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"622052":{"#nid":"622052","#data":{"type":"news","title":"Georgia Tech Heads to the Golden State to Present at World\u2019s Leading Computer Vision Conference","body":[{"value":"\u003Cp\u003EFor those interested in computer vision, the \u003Ca href=\u0022http:\/\/cvpr2019.thecvf.com\/\u0022\u003EIEEE Computer Vision and Pattern Recognition (CVPR)\u003C\/a\u003E conference is the place to be. Known as the premier computer vision conference, it draws thousands of attendees and papers submitted each year.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis year, Georgia Tech will present 18 papers from 33 different authors. The papers discuss advancements in areas including video analytics, data sets, and evaluation. \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing (IC)\u003C\/a\u003E Assistant Professors \u003Cstrong\u003EDhruv Batra \u003C\/strong\u003Eand \u003Cstrong\u003EDevi Parikh \u003C\/strong\u003Eand Professor \u003Cstrong\u003EJames Rehg\u003C\/strong\u003E lead the pack with four accepted papers each.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s involvement continues outside of published research with several faculty members holding conference leadership positions or organizing workshops. 
IC Professor \u003Cstrong\u003EFrank Dellaert \u003C\/strong\u003Eis the co-organizer of the \u003Ca href=\u0022https:\/\/sumochallenge.org\/2019-sumo-workshop.html\u0022\u003E2019 SUMO Challenge Workshop: 360\u0026deg; Indoor Scene Understanding and Modeling\u003C\/a\u003E, and Parikh and Batra are organizing two workshops, \u003Cem\u003E\u003Ca href=\u0022https:\/\/visualqa.org\/workshop.html\u0022\u003EVisual Question Answering and Dialog\u003C\/a\u003E\u003C\/em\u003E and \u003Ca href=\u0022https:\/\/sumochallenge.org\/2019-sumo-workshop.html\u0022\u003E\u003Cem\u003EHabitat: Embodied Agents Challenge and Workshop\u003C\/em\u003E.\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;CVPR is the biggest conference in computer vision and arguably the \u003Ca href=\u0022http:\/\/www.guide2research.com\/topconf\/\u0022\u003Emost impactful conference\u003C\/a\u003E in all of computing. ML@GT has one of the \u003Ca href=\u0022http:\/\/csrankings.org\/#\/index?vision\u0022\u003Estrongest computer vision groups in the country\u003C\/a\u003E and it\u0026#39;s exciting to show off our latest research. Our visibility at CVPR helps us recruit the best students and faculty,\u0026rdquo; said \u003Cstrong\u003EJames Hays,\u003C\/strong\u003E IC associate professor and 2019 CVPR area chair.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECVPR will be held June 16 through 20 in Long Beach, Calif. at the Long Beach Convention and Entertainment Center.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor more information about Georgia Tech at CVPR 2019, please \u003Ca href=\u0022https:\/\/mailchi.mp\/8ea133dbe400\/mlatgtcvpr2019\u0022\u003Eclick here.\u003C\/a\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech presents 18 papers at global computer vision conference. 
"}],"uid":"34773","created_gmt":"2019-05-29 15:31:53","changed_gmt":"2019-06-14 18:06:42","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-06-14T00:00:00-04:00","iso_date":"2019-06-14T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622051":{"id":"622051","type":"image","title":"CVPR 2019","body":null,"created":"1559143734","gmt_created":"2019-05-29 15:28:54","changed":"1559143734","gmt_changed":"2019-05-29 15:28:54","alt":"CVPR 2019","file":{"fid":"236945","name":"CVPR2019.jpg","image_path":"\/sites\/default\/files\/images\/CVPR2019.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/CVPR2019.jpg","mime":"image\/jpeg","size":427786,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/CVPR2019.jpg?itok=Dhfh9Iuz"}}},"media_ids":["622051"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"},{"id":"50877","name":"School of Computational Science and Engineering"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["allie.mcfadden@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"622415":{"#nid":"622415","#data":{"type":"news","title":"Diakopoulos Forges a Path to Combine Journalism and Artificial Intelligence","body":[{"value":"\u003Cp\u003EIt has been nearly ten years since \u003Cstrong\u003ENicholas Diakopoulos\u003C\/strong\u003E earned his Ph.D. 
in computer science from the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E at Georgia Tech and co-founded the \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/content\/social-computing-computational-journalism\u0022\u003EComputational Journalism program at Georgia Tech\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESince then, he has been around the world teaching and researching how computer science and journalism can work together efficiently and effectively. He has even written a book about it \u0026ndash; \u003Ca href=\u0022https:\/\/www.barnesandnoble.com\/w\/automating-the-news-nicholas-diakopoulos\/1129517417?ean=9780674976986\u0026amp;st=PLA\u0026amp;sid=BNB_ADL+Core+Generic+Books+-+Desktop+Medium\u0026amp;sourceId=PLAGoNA\u0026amp;dpid=tdtve346c\u0026amp;2sid=Google_c\u0026amp;gclid=Cj0KCQjwitPnBRCQARIsAA5n84lTPF9G_v86FeosKBpl0IRWZZ_nsMvn9vrlmsw3IAKDlhuxGGilJqcaAjHnEALw_wcB\u0022\u003E\u003Cem\u003EAutomating The News: How Algorithms Are Rewriting the Media\u003C\/em\u003E\u003C\/a\u003E, which debuts this month.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003EComputation + Journalism\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EThe son of a newspaper and magazine editor, Diakopoulos was exposed early on to the world of news media. One of his first jobs was helping with layout and design, as well as managing subscriptions, for a magazine. Still, he was drawn to computing. 
When his family brought home a Tandy 1000 personal computer, he and his brother would spend hours coding programs and games in BASIC.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt wasn\u0026rsquo;t until his first year of graduate school at Georgia Tech that he started to see the connections between media and computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDiakopoulos was originally drawn to Georgia Tech by the strength of its computer vision graduate program \u0026ndash; \u003Ca href=\u0022http:\/\/csrankings.org\/#\/index?vision\u0022\u003Ecurrently ranked #2 in the United States\u003C\/a\u003E \u0026ndash;\u0026nbsp;and the campus\u0026rsquo; proximity to a buzzing city like Atlanta, but it wasn\u0026rsquo;t until he was on campus that he discovered his passion for human-computer interaction (HCI).\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELucky for him, Tech was also strong in this area. He soon began publishing research at HCI conferences like \u003Ca href=\u0022https:\/\/chi2019.acm.org\/\u0022\u003EConference on Human Factors in Computing Systems (CHI),\u003C\/a\u003E while still studying computer vision with his advisor, \u003Cstrong\u003EIrfan Essa,\u003C\/strong\u003E the director of the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT).\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt wasn\u0026rsquo;t until Essa returned to the lab one day from a meeting that a true idea to combine computing and journalism was sparked.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I remember I was sitting at my desk messing around with some news graphics and interfaces when Irfan walked over and said he was just at a meeting at CNN where they were talking about computational journalism. 
He wasn\u0026rsquo;t sure what it was, but he encouraged me to figure it out,\u0026rdquo; said Diakopoulos of the 2006 conversation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe seemingly random conversation would soon lead to the first-ever \u003Ca href=\u0022https:\/\/compjournalism.wordpress.com\/\u0022\u003Eseminar in computational journalism\u003C\/a\u003E co-taught by Essa and Diakopoulos in 2007. At CHI 2007, Diakopoulos ran into fellow Georgia Tech alum \u003Cstrong\u003EBrad Stenger\u003C\/strong\u003E, who was running \u003Ca href=\u0022https:\/\/www.wired.it\/topic\/wired-next-fest-2018\/\u0022\u003EWIRED\u0026rsquo;s Next Fest\u003C\/a\u003E in San Francisco. Stenger was intrigued by the seminar and suggested creating a computational journalism event.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn Spring 2008, Georgia Tech hosted the first \u003Ca href=\u0022http:\/\/cplusj.org\/\u0022\u003EComputation + Journalism Symposium\u003C\/a\u003E. That first event brought over 100 students, faculty, and industry employees together to discuss possible partnerships and ways to bring computational thinking together with journalism. The symposium must have struck a chord with attendees because it is still going strong today.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003EThe Future of Artificial Intelligence and Journalism\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EWith nearly twelve years of research in computational journalism, Diakopoulos had an urge to write a book, but was unsure if it was something he should do. After co-authoring a textbook with a senior colleague, \u003Cstrong\u003EBen Shneiderman\u003C\/strong\u003E, the process of writing a book was demystified and he took the plunge.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDue in June 2019, the book explores the connections between artificial intelligence and journalism. 
Diakopoulos hopes that readers take away two lessons.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne, that all technology, including AI and algorithmic technology, has the capability to embed human values. Journalism itself is a values-driven institution that holds ideals like independence, verification, and accuracy in high regard. Diakopoulos encourages journalists to step up and collaborate with computer scientists to design AI the right way before someone else steps in and designs agents with other values.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDiakopoulos also emphasizes that AI is not taking away journalism jobs. In fact, his research shows that AI is creating jobs. With the introduction of AI into the workplace, employees are needed to edit and create knowledge bases, maintain quality assurance, manage the agents, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;AI will make jobs change and shift, but it won\u0026rsquo;t take jobs away. AI is too brittle and bound to the data to completely replace a journalist. The most productive path forward is a collaborative, hybrid relationship between journalists and AI,\u0026rdquo; said Diakopoulos.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDiakopoulos is excited about this future hybridization because of AI\u0026rsquo;s ability to sift through large data sets and find original content. This unique content can help to turn readers into subscribers, which affects a news organization\u0026rsquo;s bottom line. 
Diakopoulos believes that his work helps to make this clear, and that it could serve to inform responsible and strategic adoption of AI in news production.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELittle research has been done on how people perceive AI-written content, and how this content impacts things like reader trust, but Diakopoulos is looking forward to the longitudinal studies to come.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBesides writing content, Diakopoulos has a team of researchers working on a project that A\/B tests headlines. He hopes to create a data-driven playbook that helps inform writers and editors on how their linguistic choices actually affect search engine optimization or readership.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003EThe Yellow Jacket Effect\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EDiakopoulos credits Georgia Tech with instilling a mindset that looks at computing broadly.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Georgia Tech taught me that it\u0026rsquo;s not about the algorithm or optimizing things. It\u0026rsquo;s about understanding how computing technology can affect all kinds of stakeholders and the world around us,\u0026rdquo; said Diakopoulos.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe was also impressed with the \u003Ca href=\u0022https:\/\/gvu.gatech.edu\/\u0022\u003EGVU Center\u003C\/a\u003E at Georgia Tech and has tried to bring that ethos with him into every job.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I loved how the GVU Center was able to draw people from across campus. It was a powerful lesson and eye-opening experience to me on how universities could work. 
It showed me that we don\u0026rsquo;t have to be siloed into our individual research, but that some amazing partnerships, friendships, and projects can come out of working with people from around the institute.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter graduating from Georgia Tech in 2009, Diakopoulos was selected as a \u003Ca href=\u0022https:\/\/www.aaas.org\/page\/2018-mass-media-fellows\u0022\u003EMass Media Fellow\u003C\/a\u003E by the \u003Ca href=\u0022https:\/\/www.aaas.org\/\u0022\u003EAmerican Association for the Advancement of Science (AAAS\u003C\/a\u003E), a program that places Ph.D. and graduate students in newsrooms across America to be science reporters. He was placed at \u003Ca href=\u0022https:\/\/www.sacbee.com\/\u0022\u003EThe Sacramento Bee\u003C\/a\u003E for a summer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile he enjoyed his time at the newspaper, he was drawn to academia because of its mission to research and develop new knowledge, while also giving him a platform to explore problems that could have a diverse impact across the world \u0026ndash; not to mention the longer deadlines. He left the United States to take an assistant professor position at the University of Bergen in Norway.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDiakopoulos expected to stay in Norway for a while, but was granted a stimulus fund to do a post-doc at Rutgers University. He took the opportunity and moved to New York, where he completed the Rutgers post-doc and fellowships at CUNY and Columbia University before accepting a tenure-track position at the University of Maryland in their School of Journalism. 
In 2017, Diakopoulos became an Assistant Professor in Communication Studies and Computer Science (by courtesy) at Northwestern University, where he is Director of the Computational Journalism Lab (CJL).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe encourages students earning their Ph.D. to think about how their research will impact the public.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If you think about the implications your research could have, it might lead you to ask more impactful research questions. It\u0026rsquo;s a different way to go about research, but I have found it to usually be quite effective,\u0026rdquo; said Diakopoulos.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Co-founder of Georgia Tech\u0027s Computational Journalism program is releasing a book about the connections between AI and journalism and looks back on his time at Georgia Tech."}],"uid":"34773","created_gmt":"2019-06-10 20:10:21","changed_gmt":"2019-06-10 20:17:47","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-06-10T00:00:00-04:00","iso_date":"2019-06-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622417":{"id":"622417","type":"image","title":"Georgia Tech alum, Nick Diakopoulos, releases his first book \u0022Automating the News: How Algorithms Are Rewriting the Media\u0022 this month.","body":null,"created":"1560197829","gmt_created":"2019-06-10 20:17:09","changed":"1560197829","gmt_changed":"2019-06-10 
20:17:09","alt":"","file":{"fid":"237063","name":"nick-book.jpg","image_path":"\/sites\/default\/files\/images\/nick-book.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/nick-book.jpg","mime":"image\/jpeg","size":553387,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/nick-book.jpg?itok=hwujS1Q8"}}},"media_ids":["622417"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"130","name":"Alumni"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["allie.mcfadden@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"622215":{"#nid":"622215","#data":{"type":"news","title":"Artificial Intelligence Agents Begin to Learn New Skills from Watching Videos","body":[{"value":"\u003Cp\u003EData is a hot word in 2019 and according to \u003Cstrong\u003EAshley Edwards\u003C\/strong\u003E, there is a lot of data out there that can be used more efficiently for teaching robots and artificial agents how to do a variety of tasks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEdwards, a recent computer science Ph.D. graduate from Georgia Tech, details her research in a new paper, \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1805.07914.pdf\u0022\u003E\u003Cem\u003EImitating Latent Policies from Observation\u003C\/em\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new approach uses imitation learning from observation and video data. 
This new way of thinking could eventually\u0026nbsp;teach agents how to do tasks like make a sandwich, play a video game, or even drive a car, all from watching videos.\u0026nbsp;In most experiments, Edwards and her fellow researchers\u0026rsquo; algorithm was able to complete a task in 200 to 300 steps while previous methods have gone into the thousands.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This approach is exciting because it unpeels another layer for how we can train artificial agents to work with humans. We have hardly skimmed the surface of this problem space, but this is a great next step,\u0026rdquo;\u0026nbsp;said\u0026nbsp;\u003Cstrong\u003ECharles Isbell,\u0026nbsp;\u003C\/strong\u003Edean designate of the College of Computing and paper co-author.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo accomplish this, researchers have an agent watch a video and guess what actions are being taken. In the paper, this is referred to as a latent policy. Given that guess, the agent tries to predict movements and learn what to do. When the agent is then placed into an actual environment, it can take what it has learned from the videos and apply its knowledge to real-world actions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn previous research using \u0026ldquo;imitation from observation,\u0026rdquo; humans must physically show agents how to do an action or train a computer to use a dynamic model to learn how to do a new task, both of which are time-consuming, expensive, and potentially dangerous.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;There are thousands of videos out there documenting people doing things, but it can be hard to know what they are doing in a way that can be applied to artificial systems,\u0026rdquo; said Edwards.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor example, there are countless hours of dashcam footage from autonomous cars driving on streets, but there isn\u0026rsquo;t much information about why self-driving cars make the decisions that they do. 
The videos rarely have detailed telemetry information about the vehicle, like what angle the steering wheel was pointed when the car moved a certain way. Edwards and her team hope that their algorithm will be able to analyze video footage and piece together not only how to do an action, but why.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuring their research, Edwards and her co-authors performed four experiments to prove their idea. Using a platform game called Coinrun, they trained an agent to jump over platforms and avoid traps to solve a task. They also used classic control environments in their experiments to get a cart to balance a pole and teach a mountain car to drive itself up a mountain.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETheir approach was able to beat the expert in two of the experiments and was considered \u0026ldquo;state-of-the-art\u0026rdquo; in all four. \u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDespite its achievements, the current model is only created for discrete actions like moving right, left, forward or backward one step at a time. 
So, Edwards and her team are continuing to push their work toward smoother, more continuous actions for their models.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis research is one of 18 accepted papers from \u003Ca href=\u0022http:\/\/www.ml.gatech.edu\/\u0022\u003Ethe Machine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E and will be presented at the \u003Ca href=\u0022https:\/\/icml.cc\/Conferences\/2019\u0022\u003E36\u003Csup\u003Eth\u003C\/sup\u003E Annual International Conference on Machine Learning (ICML)\u003C\/a\u003E held June 9 through 15 in Long Beach, Calif.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Using video and existing data, Georgia Tech researchers are teaching artificial agents how to do a variety of tasks more efficiently."}],"uid":"34773","created_gmt":"2019-06-04 15:00:34","changed_gmt":"2019-06-05 21:59:07","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-06-04T00:00:00-04:00","iso_date":"2019-06-04T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"622214":{"id":"622214","type":"image","title":"Georgia Tech researchers are looking at how to more efficiently teach robots and artificial agents how to do tasks using video. 
","body":null,"created":"1559660261","gmt_created":"2019-06-04 14:57:41","changed":"1559660261","gmt_changed":"2019-06-04 14:57:41","alt":"Screen capture of YouTube ","file":{"fid":"237001","name":"con-karampelas-1178812-unsplash.jpg","image_path":"\/sites\/default\/files\/images\/con-karampelas-1178812-unsplash.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/con-karampelas-1178812-unsplash.jpg","mime":"image\/jpeg","size":287447,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/con-karampelas-1178812-unsplash.jpg?itok=a_SameGy"}}},"media_ids":["622214"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["allie.mcfadden@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"621993":{"#nid":"621993","#data":{"type":"news","title":"Meet ML@GT: Sean Foley, A Master of Dungeons and Dragons and Annotation","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EThe Machine Learning Center at Georgia Tech\u003C\/a\u003E\u0026nbsp;(ML@GT) is home to many talented students from across campus, representing all six of Georgia Tech\u0026rsquo;s colleges and the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.gtri.gatech.edu\/\u0022\u003EGeorgia Tech Research Institute (GTRI).\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese students have diverse 
backgrounds and a wide variety of interests, both inside and outside of the classroom. Today, we\u0026rsquo;d like you to meet\u0026nbsp;\u003Cstrong\u003ESean Foley,\u0026nbsp;\u003C\/strong\u003Ean avid Dungeons and Dragons player who is soaking up the present day.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EName: \u003C\/strong\u003ESean Foley\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHometown: \u003C\/strong\u003EAtlanta, Ga.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETell us about your research:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy current research lies in accelerating the annotation of videos. I work on a system that helps people label data more easily, with the hope of saving researchers time and money. I\u0026rsquo;m also very interested in advancing the understanding of how work should be divided between humans and machines during supervised learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELonger term, I\u0026#39;m interested in videos understanding context and being able to learn from a few examples rather than thousands, as these are two areas that reveal the shortcomings of modern machine learning techniques vis-a-vis human ability. I\u0026#39;m also interested in environmental and social applications of computer vision, particularly for analytics at a large scale.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAdvisor: \u003C\/strong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~hays\/\u0022\u003EJames Hays\u003C\/a\u003E, associate professor in the \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECurrent Georgia Tech degree program: \u003C\/strong\u003EI\u0026rsquo;m a second-year machine learning Ph.D. 
student and my home school is the School of Interactive Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOther degrees earned:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI have a B.S. in Computer Science and a B.A. in Cognitive Science, both from the University of Georgia.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFavorite conference and why: \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECVPR. It\u0026rsquo;s the only conference I have been to and it was awesome. I hope to go to many more!\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDescribe your perfect Saturday:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy perfect Saturday is playing Dungeons \u0026amp; Dragons with my friends. I\u0026#39;ve been running D\u0026amp;D for about three years now and it\u0026#39;s a blast.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFavorite place to hang out on campus or in Atlanta and why:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI honestly love my house a lot and I enjoy hosting people. Some other favorites are \u003Ca href=\u0022https:\/\/ameliesfrenchbakery.com\/\u0022\u003EAmelie\u0026rsquo;s,\u003C\/a\u003E \u003Ca href=\u0022http:\/\/www.wagaya.us\/\u0022\u003EWagaya\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/www.drbombays.com\/\u0022\u003EDr. Bombay\u0026rsquo;s\u003C\/a\u003E, and Waffle House. Piedmont Park is nice and spacious. As far as campus goes, I\u0026rsquo;m almost always in the lab.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETell us about your hobbies: \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI play classical guitar. And, I love playing video games and reading science fiction. 
I also run a Dungeons and Dragons group which takes up a lot of my time with planning campaigns, drawing maps, and piecing together the story.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFavorite Georgia Tech experience: \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy labmates are really fun and we often have game nights at my advisor\u0026#39;s house, which is fun and cozy.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWho is someone that inspires you and why? \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.ursulakleguin.com\/UKL_info.html\u0022\u003EUrsula LeGuin\u003C\/a\u003E is a constant source of inspiration to me. She was an incredibly wise person and her mind was a gift to the world.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat is your proudest accomplishment?\u0026nbsp;\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EProfessionally, my proudest accomplishment is my work with Berkeley Deep Drive on Scalabel, as it was my first huge software engineering project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy proudest accomplishment personally is my D\u0026amp;D group. It\u0026#39;s been going on for almost three years now and has been a great way to keep my college friends in touch with each other.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat is the most random or useless talent that you have?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI can crabwalk REALLY fast. 
Picture the fastest you can imagine someone crab walking, then double it.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat do you hope to do after graduation?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI hope to go into academia because I\u0026#39;ve always enjoyed teaching and the academic research environment is rigorous but fun.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat is your guilty pleasure?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDefinitely Talenti ice cream. Salted caramel is my favorite flavor, but they\u0026#39;re all so good.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat are you most looking forward to in the next ten years?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the next ten years, I\u0026#39;m most looking forward to \u0026lsquo;right now.\u0026rsquo; I am not trying to rush anything, and things are going well in the present. I am continuing to focus on self-improvement, hard work, and enjoying today for what it is.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EPodcast, movie, TV show, or book? Why?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI\u0026rsquo;d say movie because there is so much you can do with the medium. Two to three hours is the perfect length for a story, and you can watch with friends. Books have always been good, but in my opinion, movies are better in the present day than they\u0026#39;ve ever been.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Machine Learning Center at Georgia Tech is full of amazing students. 
Today, we\u0027d like you to meet Sean Foley."}],"uid":"34773","created_gmt":"2019-05-24 19:51:17","changed_gmt":"2019-05-24 19:51:17","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-05-24T00:00:00-04:00","iso_date":"2019-05-24T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"621992":{"id":"621992","type":"image","title":"Sean Foley","body":null,"created":"1558727255","gmt_created":"2019-05-24 19:47:35","changed":"1558727255","gmt_changed":"2019-05-24 19:47:35","alt":"Sean Foley","file":{"fid":"236923","name":"7903367946555341567_IMG_1847.jpg","image_path":"\/sites\/default\/files\/images\/7903367946555341567_IMG_1847.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/7903367946555341567_IMG_1847.jpg","mime":"image\/jpeg","size":347326,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/7903367946555341567_IMG_1847.jpg?itok=WTO34zQn"}}},"media_ids":["621992"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"621931":{"#nid":"621931","#data":{"type":"news","title":"Number of Regents\u0027 Professors Hits Double Digits with New Appointments","body":[{"value":"\u003Cp\u003EAs part of its \u003Ca 
href=\u0022https:\/\/www.usg.edu\/assets\/regents\/documents\/board_meetings\/agenda_2019_05.pdf\u0022\u003EMay 14 board meeting\u003C\/a\u003E, the University System of Georgia (USG) Board of Regents appointed four College of Computing faculty members as Regents\u0026rsquo; Professors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe quartet are among the 11 Georgia Tech professors from across campus appointed to named faculty positions this month, and they represent each of the College\u0026rsquo;s three schools:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/seymour-goodman\u0022\u003ESeymour Goodman\u003C\/a\u003E\u003C\/strong\u003E, a joint professor in the \u003Ca href=\u0022https:\/\/scs.gatech.edu\/\u0022\u003ESchool of Computer Science\u003C\/a\u003E and the \u003Ca href=\u0022https:\/\/inta.gatech.edu\/\u0022\u003ESam Nunn School of International Affairs\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/cse.gatech.edu\/people\/surya-kalidindi\u0022\u003ESurya Kalidindi\u003C\/a\u003E\u003C\/strong\u003E, a joint professor in the \u003Ca href=\u0022https:\/\/cse.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003ESchool of Computational Science \u0026amp; Engineering\u003C\/a\u003E and the \u003Ca href=\u0022https:\/\/www.me.gatech.edu\/\u0022 target=\u0022_blank\u0022\u003EGeorge W. 
Woodruff School of Mechanical Engineering\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/elizabeth-mynatt\u0022\u003EElizabeth Mynatt\u003C\/a\u003E\u003C\/strong\u003E, a Distinguished Professor in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and the executive director of the \u003Ca href=\u0022https:\/\/ipat.gatech.edu\/\u0022\u003EInstitute for People and Technology\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/haesun-park\u0022\u003EHaesun Park\u003C\/a\u003E\u003C\/strong\u003E, a professor in the School of Computational Science \u0026amp; Engineering\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These appointments are worthy recognition of Sy, Surya, Beth, and Haesun and the significant research contributions each has made \u0026ndash; and continues to make \u0026ndash; to their respective fields,\u0026rdquo; said \u003Cstrong\u003EZvi Galil\u003C\/strong\u003E, the John P. Imlay Jr. Dean of Computing at Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/610180\/professor-earns-highest-academic-honor-university-system-georgia\u0022 target=\u0022_blank\u0022\u003E[RELATED: Professor Recognized by USG With Top Academic Honor]\u003C\/a\u003E\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA Regents\u0026rsquo; Professorship is the highest academic and research honor given to faculty members by the USG Board of Regents. 
With the addition of Goodman, Kalidindi, Mynatt, and Park, there are now 11 Regents\u0026rsquo; Professors in the GT Computing community.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Four College of Computing faculty members were recently appointed as Regents\u0027 Professors"}],"uid":"32045","created_gmt":"2019-05-23 15:15:57","changed_gmt":"2019-05-23 16:22:53","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-05-23T00:00:00-04:00","iso_date":"2019-05-23T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"621932":{"id":"621932","type":"image","title":"USG Board of Regents","body":null,"created":"1558624650","gmt_created":"2019-05-23 15:17:30","changed":"1558624650","gmt_changed":"2019-05-23 15:17:30","alt":"University System of Georgia Board of Regents logo","file":{"fid":"236906","name":"USG BOR logo original.jpeg","image_path":"\/sites\/default\/files\/images\/USG%20BOR%20logo%20original.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/USG%20BOR%20logo%20original.jpeg","mime":"image\/jpeg","size":65945,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/USG%20BOR%20logo%20original.jpeg?itok=MhmA4N1F"}}},"media_ids":["621932"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"66442","name":"MS HCI"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"10177","name":"Regents\u0027 Professor"},{"id":"10989","name":"Beth Mynatt"},{"id":"10475","name":"Haesun Park"},{"id":"167857","name":"Sy Goodman"},{"id":"168983","name":"Surya Kalidindi"},{"id":"1966","name":"usg"},{"id":"728","name":"Board of 
Regents"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Communications Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=New%20Regents\u0027%20Professors\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"620914":{"#nid":"620914","#data":{"type":"news","title":"New Machine Learning Research ID\u2019s Opioid Addiction Self-Treatments and Risks","body":[{"value":"\u003Cp\u003EUsing advanced machine-learning\u0026nbsp;techniques, Georgia Tech researchers have examined nearly 1.5 million Reddit posts to identify risks associated with several of the most common alternative treatments for opioid addiction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese clinically untested, self-directed treatments are often developed and promoted through online communities like Reddit, which commonly encourage their use without professional medical consultation. 
According to the study, the three most commonly used are:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EKratom \u0026ndash; an unregulated herbal stimulant\u003C\/li\u003E\r\n\t\u003Cli\u003EImodium \u0026ndash; a common anti-diarrheal medication\u003C\/li\u003E\r\n\t\u003Cli\u003EXanax \u0026ndash; a psychiatric medication used to treat anxiety and panic disorders\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EThe findings \u0026ndash; part of what the researchers say is one of the first large-scale social media studies of clinically unproven alternative treatments (ATs) used in opioid addiction recovery \u0026ndash; indicate that these treatments carry:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003ERisky results\u003C\/li\u003E\r\n\t\u003Cli\u003EPotentially substantial side effects\u003C\/li\u003E\r\n\t\u003Cli\u003EHigh chance of abuse for those struggling with recovery\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Along with identifying what potentially are the most commonly used alternative treatments, we documented a number of important trends and gained valuable insights into how people use ATs, which will be used in part to better inform current treatment strategies,\u0026rdquo; said \u003Cstrong\u003EStevie Chancellor\u003C\/strong\u003E, Georgia Tech College of Computing Ph.D. 
student and chief author of the study.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/podcasts.apple.com\/us\/podcast\/social-medias-role-in-opioid-addiction-recovery-stevie\/id1435564422?i=1000437211514\u0022 target=\u0022_blank\u0022\u003E[PODCAST:\u0026nbsp;Social Media\u0026#39;s Role in Opioid Addiction Recovery\u0026nbsp;with Stevie Chancellor]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the key trends documented among AT users is the growing use of \u0026ldquo;stacks\u0026rdquo; or \u0026ldquo;kits.\u0026rdquo; These combine several substances \u0026ndash; prescription drugs, illicit drugs, over-the-counter medications, vitamins\/minerals, or other substances \u0026ndash; to combat withdrawal symptoms and facilitate recovery.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAn anonymous Reddit post evaluated in the study read, \u0026ldquo;I\u0026rsquo;ve decided to quit [opioids] for good - what you guys think about my withdrawal strategy?\u0026rdquo; The post goes on to list what can reasonably be considered a risky mix of anti-anxiety drugs, OTCs, alcohol, and more. The study also found that many of these \u0026ldquo;stacks\u0026rdquo; have specific dosing patterns that users follow.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo identify these posts from among the initial dataset of 1.44 million posts from 63 subreddits, Chancellor and her colleagues developed a machine learning binary classifier. 
The classifier, which used a transfer learning approach to improve results as it scanned from one subreddit to the next, automatically labeled each post as either recovery or non-recovery related.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/b.gatech.edu\/2XzztNd\u0022 target=\u0022_blank\u0022\u003E[RELATED:\u0026nbsp;Researchers Plumb Reddit to Reveal New Insights into Stress Following Campus Violence]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research team \u0026ndash; which included an addiction research scientist \u0026ndash; applied natural language processing techniques to the resulting dataset of 93,104 recovery-related posts. Known as \u0026ldquo;word embeddings,\u0026rdquo; the process allows a system to find and learn contextual relationships between words and phrases in a large dataset. These relationships often reveal connections between words that may be misspelled or slang terms for common words.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe results were verified through a series of automated and human-in-the-loop validation tests of the machine learning algorithm.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We found that Imodium, commonly used to relieve nausea and diarrhea, is often referred to as \u0026lsquo;lope\u0026rsquo;, which is short for the active ingredient Loperamide. 
We also confirm that it is prone to misuse and dependence,\u0026rdquo; said Chancellor.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s this kind of insight the Georgia Tech team wants to share with treatment and recovery researchers and care providers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Because there is little empirical research into alternative treatments for opioid use disorder, professionals overseeing detoxification and behavioral interventions are at a disadvantage,\u0026rdquo; said Munmun De Choudhury, School of Interactive Computing assistant professor and co-author of the research study.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Broadly speaking, we\u0026rsquo;re interested in reducing harm that can be caused by alternative treatments. We are moving in the right direction by identifying and giving some context to the most commonly used of these, which ultimately gives behavioral health clinicians more insight into how ATs impact their patients.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChancellor and De Choudhury\u0026rsquo;s work on this project was supported in part by a grant from the National Institutes of Health, #R01GM112697.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EComplete details about the study are in a paper titled \u003Cem\u003EDiscovering Alternative Treatments for Opioid Use Recovery Using Social Media\u003C\/em\u003E, which has been accepted to the 2019 ACM CHI Conference on Human Factors in Computing Systems, set for May 4 through 9 in Glasgow, UK.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Findings provide new insights into how people use clinically untested opioid recovery methods."}],"uid":"32045","created_gmt":"2019-04-25 15:25:51","changed_gmt":"2019-05-14 21:14:02","author":"Ben 
Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-25T00:00:00-04:00","iso_date":"2019-04-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"621222":{"id":"621222","type":"image","title":"Most Common Self-Treatments for Opioid Addiction","body":null,"created":"1556809340","gmt_created":"2019-05-02 15:02:20","changed":"1556809340","gmt_changed":"2019-05-02 15:02:20","alt":"Most common self-treatments for opioid addiction identified through Georgia Tech research","file":{"fid":"236632","name":"Opioid self-treatments IDd.jpg","image_path":"\/sites\/default\/files\/images\/Opioid%20self-treatments%20IDd.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Opioid%20self-treatments%20IDd.jpg","mime":"image\/jpeg","size":991475,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Opioid%20self-treatments%20IDd.jpg?itok=Z0jzrzlM"}}},"media_ids":["621222"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"9167","name":"machine learning"},{"id":"181124","name":"munmun"},{"id":"172780","name":"stevie chancellor"},{"id":"181125","name":"opioid use disorder"},{"id":"181126","name":"imodium"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71891","name":"Health and Medicine"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Communications Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=Opioid%20Self-Treatment%20Study\u0022 
target=\u0022_blank\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"620694":{"#nid":"620694","#data":{"type":"news","title":"Meet ML@GT: Unaiza Ahsan","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EThe Machine Learning Center at Georgia Tech\u003C\/a\u003E (ML@GT) is home to many amazing students from across campus, representing all six of Georgia Tech\u0026rsquo;s colleges and the Georgia Tech Research Institute (GTRI).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese students have diverse backgrounds and a wide variety of interests both inside and outside of the classroom. Today, we\u0026rsquo;d like you to meet \u003Cstrong\u003EUnaiza Ahsan\u003C\/strong\u003E who recently successfully defended her thesis and will graduate this spring with a Ph.D. in computer science.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHometown:\u003C\/strong\u003E Karachi, Pakistan\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAdvisor:\u003C\/strong\u003E Irfan Essa, ML@GT Director and \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Professor\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECurrent Georgia Tech degree program:\u003C\/strong\u003E Ph.D. Computer Science\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOther degrees earned: \u003C\/strong\u003EB.E. in Telecommunications and a M.S. in Computer and Information Science Systems from NED University of Engineering \u0026amp; Technology, Karachi Pakistan\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETell us about your research:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy research is about artificial intelligence (AI) agents learning useful visual representations without requiring massively labeled datasets. 
So, I am interested in something called self-supervised learning. This is where you take the data \u0026ndash; videos in my case \u0026ndash; construct a pseudo-task out of that data like creating a video jigsaw puzzle, for example, and training a deep network to solve the task. This approach does not require labels and we end up with networks that can recognize actions in videos much better than if we had trained them from scratch.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat are your plans after graduation?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI recently accepted a position as a data scientist with The Home Depot in Atlanta.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat is your favorite conference and why?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI have two. The \u003Ca href=\u0022http:\/\/cvpr2019.thecvf.com\/\u0022\u003EComputer Vision and Pattern Recognition (CVPR)\u003C\/a\u003E conference is one because it always gives me great ideas of what to do next in my research. My other favorite is the \u003Ca href=\u0022http:\/\/wacv19.wacv.net\/\u0022\u003EWinter Applications of Computer Vision (WACV)\u003C\/a\u003E conference. It\u0026rsquo;s one of the best events for presenting my research and learning from others in the field. It\u0026rsquo;s always held in amazing locations in the United States, which also gives me a chance to explore awesome places like Hawaii!\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat is your favorite place to hang out on campus or in Atlanta and why?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI used to be scared of biking because I had never really biked before in my life, but now I have come to love it! 
My favorite place to hang out in \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/about\/atlanta\u0022\u003EAtlanta\u003C\/a\u003E is riding my bike on the \u003Ca href=\u0022https:\/\/beltline.org\/\u0022\u003EBeltline\u003C\/a\u003E, which is also fairly close to campus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat are some of your hobbies?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI love writing poetry! I have scribbled so many poems in between my notebooks full of research, experiments etc.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVolunteer work is also important to me. I love volunteering at \u003Ca href=\u0022https:\/\/openhandatlanta.org\/\u0022\u003EOpen Hand Atlanta\u003C\/a\u003E. Their work is really important and I love the opportunity to give back to my community. They deliver nutritious meals to senior citizens all around Atlanta.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI also enjoy long walks and nature in general.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat is your favorite Georgia Tech experience?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGazing through the amazing telescopes at the observatory in \u003Ca href=\u0022https:\/\/www.physics.gatech.edu\/about\/directions\u0022\u003EHowey\u003C\/a\u003E! That was a one-of-a-kind experience! Uh, my thesis defense too...but after it was over!\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EHow would you describe your perfect Saturday?\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy perfect Saturday would start with a really good research result, which would make me happy and want to celebrate. I would celebrate by riding my bike on the Beltline and stopping at Jeni\u0026rsquo;s Splendid Ice Creams for some well-deserved and delicious ice cream.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy family is back home in Pakistan so I would give them a call before heading off to volunteer for the afternoon. 
My perfect day would end with having an awesome \u0026lsquo;Desi\u0026rsquo; dinner.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWho is someone that inspires you and why? \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMy aunt, Dr. Asmat Salim. She is a force to reckon with and has been instrumental in my life. She encouraged me to apply for the Ph.D. program at Georgia Tech and has been amazingly happy for me at every milestone. She has won the \u003Ca href=\u0022https:\/\/www.heart.org\/en\/affiliates\/paul-dudley-white-award\u0022\u003EPaul Dudley White International Science Team Award\u003C\/a\u003E from the American Heart Association (AHA) for her research, twice(!), all while battling some serious illnesses and supervising her own Ph.D. students in Pakistan\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe has taught me to be resilient, no matter what our personal circumstances and that this is one of the most useful lessons we can learn in life.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat is your proudest accomplishment? \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEarning my Ph.D. in computer science from Georgia Tech! A close contender is when I won a poetry contest in Pakistan. The prize was a summer creative writing course at Middlesex University in the United Kingdom.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EWhat is something that you would most like to create? \u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EI would love to write a book someday, or at least publish a book of poems titled \u003Cem\u003ELife as a Ph.D. Student\u003C\/em\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The Machine Learning Center at Georgia Tech is full of amazing students. 
Today, we\u0027d like you to meet Unaiza Ahsan."}],"uid":"34773","created_gmt":"2019-04-22 12:14:21","changed_gmt":"2019-05-14 14:39:59","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-22T00:00:00-04:00","iso_date":"2019-04-22T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"621663":{"id":"621663","type":"image","title":"Unaiza Ahsan is hooded by her advisor, Irfan Essa, at commencement spring 2019.","body":null,"created":"1557844724","gmt_created":"2019-05-14 14:38:44","changed":"1557844724","gmt_changed":"2019-05-14 14:38:44","alt":"","file":{"fid":"236798","name":"47766336181_74eb1804c2_k.jpg","image_path":"\/sites\/default\/files\/images\/47766336181_74eb1804c2_k.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/47766336181_74eb1804c2_k.jpg","mime":"image\/jpeg","size":677741,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/47766336181_74eb1804c2_k.jpg?itok=TpBB5Vuk"}},"617524":{"id":"617524","type":"image","title":"Unaiza Ahsan traveled to Hawaii in January 2019 to present her paper at the Winter Applications of Computer Vision (WACV) conference. 
","body":null,"created":"1549636546","gmt_created":"2019-02-08 14:35:46","changed":"1549636546","gmt_changed":"2019-02-08 14:35:46","alt":"","file":{"fid":"235048","name":"IMG_20190107_071941352.jpg","image_path":"\/sites\/default\/files\/images\/IMG_20190107_071941352.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/IMG_20190107_071941352.jpg","mime":"image\/jpeg","size":210492,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/IMG_20190107_071941352.jpg?itok=9YNISt5a"}}},"media_ids":["621663","617524"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"621531":{"#nid":"621531","#data":{"type":"news","title":"Two Georgia Tech Alums Receive Prestigious Awards at CHI 2019","body":[{"value":"\u003Cp\u003ETwo former \u003Ca href=\u0022http:\/\/www.gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E students were recognized by the CHI community this week in Glasgow, U.K., one for her overall contributions in human-computer interaction at the conference and another for her long history of promoting social action within the community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJennifer Mankoff\u003C\/strong\u003E, one of Professor 
\u003Cstrong\u003EGregory Abowd\u003C\/strong\u003E\u0026rsquo;s first of 30 Ph.D. graduates in 2001, was inducted into the prestigious CHI Academy this week, and \u003Cstrong\u003EGillian Hayes\u003C\/strong\u003E (2007), also advised by Abowd, was awarded the Social Impact award.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMankoff, who was Abowd\u0026rsquo;s third ever Ph.D. graduate, joined an exclusive community that includes eight Georgia Tech faculty members. Most recently, Professor \u003Cstrong\u003EAmy Bruckman\u003C\/strong\u003E was \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/602715\/professor-amy-bruckman-joins-seven-other-ic-faculty-chi-academy\u0022\u003Einducted a year ago\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMankoff acknowledged the mentors, like Abowd, who gave her that opportunity along the way. Abowd provided the introduction for Mankoff at the awards ceremony for the CHI Academy. She credited her research community and the CHI community for giving her the freedom to pursue the kind of research that she was passionate about.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The openness to let people be able to work on whatever they\u0026rsquo;re passionate about and see that has value is something that\u0026rsquo;s been important to me over the years,\u0026rdquo; Mankoff said. \u0026ldquo;More than once, I\u0026rsquo;ve shifted to another area that I wasn\u0026rsquo;t working in before and maybe a lot of others weren\u0026rsquo;t either. 
It\u0026rsquo;s a sign of how open the community is.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeated at a reunion\u0026nbsp;party for the Abowd \u0026ldquo;family\u0026rdquo; \u0026ndash; academics who were part of a lineage that began as doctoral students in Abowd\u0026rsquo;s lab \u0026ndash; she noted the importance of having a vibrant community like that.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We were very lucky to be there at the beginning, helping to form his group and to learn from him and all the energy he brings to this group,\u0026rdquo; she said. \u0026ldquo;It\u0026rsquo;s one of the strongest networks I have at CHI.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHayes received her Social Impact award just 12 years after Abowd received his own in 2007. She said it was an especially proud honor to have the distinction of following in the footsteps of her advisor.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The way he has instilled in us an ethos of being able to give back, being able to bake in community outcomes with our research outcomes and define good, interesting research problems that also really solve real-world problems, and work in partnership with communities,\u0026rdquo; Hayes said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHayes, whose 30-minute talk at the conference focused on ways in which the community needed to do better in thinking about issues of accessibility, access, racial and gender inequities, and much more, said she thought the CHI community was leading the way as a standard-bearer for diversity, inclusion, and service.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;But we still have a long way to go,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHer talk, she hoped, would be a call to action to the rest of the community.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This is our time, and we can control our destinies and we can create truly community-driven innovation,\u0026rdquo; she 
said.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Jennifer Mankoff, one of Professor Gregory Abowd\u2019s first of 30 Ph.D graduates in 2001, was inducted into the prestigious CHI Academy this week, and Gillian Hayes (2007), also advised by Abowd, was awarded the Social Impact award."}],"uid":"33939","created_gmt":"2019-05-08 22:03:04","changed_gmt":"2019-05-08 22:03:04","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-05-08T00:00:00-04:00","iso_date":"2019-05-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"621530":{"id":"621530","type":"image","title":"CHI Awards 2019","body":null,"created":"1557352606","gmt_created":"2019-05-08 21:56:46","changed":"1557352606","gmt_changed":"2019-05-08 21:56:46","alt":"Jennifer Mankoff, Gregory Abowd, and Gillian Hayes smiling","file":{"fid":"236742","name":"Awards CHI.jpg","image_path":"\/sites\/default\/files\/images\/Awards%20CHI.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Awards%20CHI.jpg","mime":"image\/jpeg","size":131241,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Awards%20CHI.jpg?itok=s9avGS1d"}}},"media_ids":["621530"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620601":{"#nid":"620601","#data":{"type":"news","title":"Could a Robot Save People from a Burning Building? Georgia Tech is Pushing New Robotics Research in that Direction","body":[{"value":"\u003Cp\u003EFor several years, scientists have been training intelligent agents on images and other data so that machines can learn to recognize what they see. Researchers are now starting to work toward training robots equipped with these stores of data to be able to make better autonomous decisions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EZsolt Kira\u003C\/strong\u003E, associate director of the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech\u003C\/a\u003E, and Georgia Tech Ph.D. student \u003Cstrong\u003EChih-Yao Ma\u003C\/strong\u003E have published new research that improves on how autonomous robots move in their surroundings. This work is in collaboration with \u003Cstrong\u003ECaiming Xiong,\u003C\/strong\u003E director of \u003Ca href=\u0022https:\/\/www.salesforce.com\/research\/\u0022\u003ESalesforce Research\u003C\/a\u003E, and researchers from the \u003Ca href=\u0022https:\/\/www.umd.edu\/\u0022\u003EUniversity of Maryland, College Park\u003C\/a\u003E.\u0026nbsp;Georgia Tech professor \u003Cstrong\u003EGhassan AlRegib\u0026nbsp;\u003C\/strong\u003Eand Ph.D. student\u0026nbsp;\u003Cstrong\u003EJiasen Lu \u003C\/strong\u003Eare also paper authors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExisting methods have allowed robots to navigate unknown environments by combining a 360-degree panoramic view of their surroundings and programmed instructions that describe how to accomplish a goal.
The goal could be to locate a doctor\u0026rsquo;s office in an office complex or find the fastest exit route in a building.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new Georgia Tech research by Kira and his team improves the accuracy with which a robot completes an assigned navigation task by 8 percent, a significant increase for autonomous navigation systems. The research method includes new mechanisms that add reasoning skills to autonomous systems, as well as the ability for them to essentially correct their mistakes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;By teaching robots to more effectively navigate unknown environments, robots could be used in the household or for autonomous vehicles,\u0026rdquo; said Kira.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers say their method\u0026rsquo;s improved accuracy for navigation could be particularly helpful in the future for scenarios that might be too dangerous for humans, such as a robot performing search and rescue or entering a burning building. Or robots could simply take up more mundane (but essential) tasks like making and serving morning coffee.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKira\u0026rsquo;s team began with an existing robotics technique, the attention mechanism. The mechanism teaches robots to autonomously move in their environment using written instructions. The mechanism also helps it identify which step should be completed next.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKira and Ma added a reasoning component to allow the robot to estimate how well it was doing in completing the task, and how close it was to finishing it. They also added a new \u0026ldquo;rollback\u0026rdquo; function. Rollback uses a neural network trained to help the agent determine if it has made a mistake while following instructions. If it determines a mistake has been made, the agent reverts to its most recent successfully completed task in an effort to correct the error. 
This improvement significantly reduces the number of steps needed to reach the goal.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These added components helped increase the accuracy of the attention mechanism and led to higher success rates in performance or completing the set of instructions,\u0026rdquo; said Kira.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work is published in two papers, \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1901.03035.pdf\u0022\u003E\u0026ldquo;Self-Monitoring Navigation Agent via Auxiliary Progress Estimation\u0026rdquo;\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1903.01602.pdf\u0022\u003E\u0026ldquo;The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation.\u0026rdquo;\u003C\/a\u003E The papers will be presented respectively at the \u003Ca href=\u0022https:\/\/iclr.cc\/\u0022\u003EInternational Conference on Learning Representations (ICLR)\u003C\/a\u003E May 6-9 and the \u003Ca href=\u0022http:\/\/cvpr2019.thecvf.com\/\u0022\u003EComputer Vision and Pattern Recognition (CVPR)\u003C\/a\u003E conference June 16-20.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"New research on autonomous robots from the Machine Learning Center at Georgia Tech, Salesforce Research, and the University of Maryland will be presented at two major AI conferences this summer.
"}],"uid":"34773","created_gmt":"2019-04-17 20:21:55","changed_gmt":"2019-05-07 16:48:03","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-17T00:00:00-04:00","iso_date":"2019-04-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620599":{"id":"620599","type":"image","title":"New research on autonomous robots from the Machine Learning Center at Georgia Tech in collaboration with Salesforce Research and the University of Maryland will be presented at two major AI conferences this summer, ICLR and CVPR. ","body":null,"created":"1555532208","gmt_created":"2019-04-17 20:16:48","changed":"1555532227","gmt_changed":"2019-04-17 20:17:07","alt":"","file":{"fid":"236316","name":"Screen Shot 2019-02-20 at 3.43.27 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-02-20%20at%203.43.27%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-02-20%20at%203.43.27%20PM.png","mime":"image\/png","size":1026779,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-02-20%20at%203.43.27%20PM.png?itok=MnkVIG9U"}},"620600":{"id":"620600","type":"image","title":"The graphs on the left represent a baseline on how well an agent is moving through a set of tasks. The two graphs on the right represent the work discussed in \u201cSelf-Monitoring Navigation Agent Via Auxillary Progress Estimation\u201d. 
The darker the green is an","body":null,"created":"1555532275","gmt_created":"2019-04-17 20:17:55","changed":"1555532275","gmt_changed":"2019-04-17 20:17:55","alt":"","file":{"fid":"236317","name":"Screen Shot 2019-02-20 at 9.43.05 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-02-20%20at%209.43.05%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-02-20%20at%209.43.05%20AM.png","mime":"image\/png","size":38510,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-02-20%20at%209.43.05%20AM.png?itok=6QURy2PL"}}},"media_ids":["620599","620600"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"621330":{"#nid":"621330","#data":{"type":"news","title":"Advancing Learning Representations at ICLR: ML@GT Presents 12 Papers at Premier AI Conference","body":[{"value":"\u003Cp\u003EResearchers in the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech (ML@GT)\u003C\/a\u003E will present 12 papers at the seventh annual \u003Ca href=\u0022https:\/\/iclr.cc\/\u0022\u003EInternational Conference on Learning Representations 
(ICLR)\u003C\/a\u003E, taking place in New Orleans, La., May 6-9. Assistant professor \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E is an area chair and associate professor \u003Cstrong\u003ELe Song \u003C\/strong\u003Ewill give an invited talk at the \u003Ca href=\u0022https:\/\/iclr.cc\/Conferences\/2019\/Schedule?showEvent=631\u0022\u003ERepresentation Learning on Graphs and Manifolds\u003C\/a\u003E workshop.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EICLR is one of the fastest-growing artificial intelligence conferences in the world and is globally respected as a premier conference for artificial intelligence researchers who focus on representation learning, which is generally referred to as deep learning.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EML@GT research includes a paper on a \u003Ca href=\u0022http:\/\/ml.gatech.edu\/hg\/item\/620601\u0022\u003Eself-monitoring navigation agent\u003C\/a\u003E in collaboration with Salesforce and the University of Maryland, and work on \u003Ca href=\u0022https:\/\/openreview.net\/forum?id=H1g6osRcFQ\u0022\u003Etransferring policy with strategy optimization.\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOnly 500 submissions, or about a third, were accepted as poster presentations, and 24 as oral presentations. All of Georgia Tech\u0026rsquo;s work is in the poster session.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;ICLR has continued to grow and is now one of the premier conferences in artificial intelligence and machine learning.
For ML@GT to have 12 papers in our third year as a\u0026nbsp;center is a sign of our prominence in these communities and the quality of work that the center publishes,\u0026rdquo; said \u003Cstrong\u003EByron Boots\u003C\/strong\u003E, an assistant professor in the \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and ML@GT.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EConference presentations will touch on topics such as hierarchical modeling and sparse coding, and speakers include \u003Cstrong\u003EIan Goodfellow\u003C\/strong\u003E, \u003Ca href=\u0022https:\/\/venturebeat.com\/2019\/04\/05\/apple-hires-google-ai-expert-ian-goodfellow-to-direct-machine-learning\/\u0022\u003EApple\u0026rsquo;s new director of machine learning\u003C\/a\u003E, and \u003Cstrong\u003ECynthia Dwork\u003C\/strong\u003E, a distinguished scientist at Microsoft Research.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EICLR has helped lead the charge in increasing inclusivity and diversity at conferences by building on the efforts of groups like Black in AI, Queer in AI, Women in Machine Learning, and LatinX in AI.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s research:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1901.00544.pdf\u0022\u003EMulti-class Classification Without Multi-Class Labels\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/openreview.net\/pdf?id=HkxLXnAcFQ\u0022\u003EA Closer Look at Few-Shot Classification\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1810.03538.pdf\u0022\u003ECombinatorial Attacks on Binarized Neural Networks\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/openreview.net\/pdf?id=HJlmHoR5tQ\u0022\u003EAdversarial Imitation via Variational Inverse Reinforcement Learning\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca 
href=\u0022https:\/\/arxiv.org\/abs\/1901.03035\u0022\u003ESelf-Monitoring Navigation Agent via Auxiliary Progress Estimation\u003C\/a\u003E (For more information on this research, check out our summary \u003Ca href=\u0022http:\/\/ml.gatech.edu\/hg\/item\/620601\u0022\u003Ehere.)\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/openreview.net\/forum?id=rJNH6sAqY7\u0022\u003EOn Computation and Generalization of Generative Adversarial Networks under Spectrum Control\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/openreview.net\/forum?id=SkgQBn0cF7\u0022\u003EModeling the Long Term Future in Model-Based Reinforcement Learning\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/openreview.net\/pdf?id=rkxwShA9Ym\u0022\u003ELabel Super Resolution Networks\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/openreview.net\/forum?id=H1g6osRcFQ\u0022\u003EPolicy Transfer with Strategy Optimization\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/openreview.net\/pdf?id=S1E3Ko09F7\u0022\u003EL-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/openreview.net\/pdf?id=Syl8Sn0cK7\u0022\u003ELearning a Meta-Solver for Syntax-Guided Program Synthesis\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/openreview.net\/pdf?id=HyePrhR5KX\u0022\u003EDyRep: Learning Representations over Dynamic Graphs\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Researchers from the Machine Learning Center at Georgia Tech will present 12 papers at International Conference on Learning Representations (ICLR), a premier artificial intelligence conference."}],"uid":"34773","created_gmt":"2019-05-04 
15:10:30","changed_gmt":"2019-05-04 15:10:30","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-05-04T00:00:00-04:00","iso_date":"2019-05-04T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"621329":{"id":"621329","type":"image","title":"ICLR is one of the premier artificial intelligence conferences that ML@GT researchers will be presenting at in 2019.","body":null,"created":"1556982611","gmt_created":"2019-05-04 15:10:11","changed":"1556982611","gmt_changed":"2019-05-04 15:10:11","alt":"ICLR is one of the premier artificial intelligence conferences that ML@GT researchers will be presenting at in 2019. Picture of building in New Orleans.","file":{"fid":"236675","name":"new-orleans-1630343_960_720 copy.jpg","image_path":"\/sites\/default\/files\/images\/new-orleans-1630343_960_720%20copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/new-orleans-1630343_960_720%20copy.jpg","mime":"image\/jpeg","size":208427,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/new-orleans-1630343_960_720%20copy.jpg?itok=3Ohv0WCq"}}},"media_ids":["621329"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["allie.mcfadden@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"621184":{"#nid":"621184","#data":{"type":"news","title":"IC Researchers Seek to Improve Treatment for Schizophrenia Under New $2.7 Million NIMH Grant","body":[{"value":"\u003Cp\u003EFor the past few years, Georgia Tech School of Interactive Computing Assistant Professor \u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E has pursued research that gathers insights about mental health through digital traces individuals leave behind on social media.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUnder a new $2.7 million grant from the \u003Ca href=\u0022https:\/\/www.nimh.nih.gov\/index.shtml\u0022\u003ENational Institutes of Mental Health\u003C\/a\u003E (NIMH), she and a team of researchers at \u003Ca href=\u0022https:\/\/www.northwell.edu\/\u0022\u003ENorthwell Health\u003C\/a\u003E will apply that new information in a clinical setting in hopes of improving treatment.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In our past research, we have gained a number of new insights, but I see an opportunity to influence real world people and outcomes,\u0026rdquo; De Choudhury said. \u0026ldquo;Going beyond just academic and empirical findings, how do you take that information and make a difference in people\u0026rsquo;s lives? What research challenges do such translations pose to the computing domain?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis grant offers the researchers that opportunity. It will be one of the first in which computing researchers and leading experts in psychiatry research are coming together to influence how treatment can be delivered harnessing patient-contributed data. 
The grant is funded through a new NIMH program designed to inform and support delivery of high-quality mental health services.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe idea is to build machine learning algorithms based on data that mental health patients voluntarily share with the research team, including both clinicians at Northwell Health and researchers in De Choudhury\u0026rsquo;s lab at \u003Ca href=\u0022http:\/\/www.gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E. With these algorithms, they hope to identify risk markers and symptom changes that appear in social media posts and to track changes and trends in an individual over time.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy combining a number of different social media sources, primarily Facebook and Twitter, they will look at the words and patterns of words an individual uses. In mental illnesses like schizophrenia, the main condition they will study, that is important information to know.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If they are feeling delusional or experiencing paranoia, what is it that they are saying,\u0026rdquo; De Choudhury said. \u0026ldquo;We can look at social interactions and see whether they might be feeling isolation, which can have a negative impact on mental health. Nuances of language styles, like the way people use articles or pronouns, can say a lot about their psychological state, as well, which has been shown in our and co-investigator (University of Texas Professor) \u003Cstrong\u003EJamie Pennebaker\u003C\/strong\u003E\u0026rsquo;s prior work.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe population they will focus on comprises younger individuals, largely in their teens and early 20s, who have had a first episode of schizophrenia. Most will have only recently been diagnosed and admitted to a specialized treatment facility directed by the collaborators on the project in New York.
The goal is to use the information gathered in their digital traces to identify risk markers that signal a potential relapse.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Schizophrenia is a challenging and debilitating illness,\u0026rdquo; De Choudhury said. \u0026ldquo;Even people under treatment have a high chance of relapse with negative outcomes on quality of life, productivity, and functioning. Symptoms often come back, and most mental illnesses are only managed, not cured.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBetter management means that the treatment is timely and highly adaptable to the patient\u0026rsquo;s needs, De Choudhury said. Unfortunately, that\u0026rsquo;s a challenge because, in clinical settings, there is very little knowledge about a patient\u0026rsquo;s day-to-day life. Unlike a disease such as cancer, which has an objective screening that can identify its presence and severity, mental illnesses are based on what is reported. These self-reports are often skewed, based on what a patient wants to tell or remembers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In some ways, the treatment paradigm right now is not very evidence based,\u0026rdquo; she said. 
\u0026ldquo;But to prevent relapse, it\u0026rsquo;s important that we try to be as precise and proactive as possible.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe project will span four years and began on April 15.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"This grant offers researchers the opportunity to apply findings of past research to real-world clinical settings."}],"uid":"33939","created_gmt":"2019-05-01 19:41:21","changed_gmt":"2019-05-01 19:41:21","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-05-01T00:00:00-04:00","iso_date":"2019-05-01T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"587685":{"id":"587685","type":"image","title":"Munmun De Choudhury","body":null,"created":"1487686001","gmt_created":"2017-02-21 14:06:41","changed":"1487783642","gmt_changed":"2017-02-22 17:14:02","alt":"Georgia Tech Assistant Professor Munmun De Choudhury","file":{"fid":"223975","name":"munmun portrait_horz.jpg","image_path":"\/sites\/default\/files\/images\/munmun%20portrait_horz.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/munmun%20portrait_horz.jpg","mime":"image\/jpeg","size":711876,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/munmun%20portrait_horz.jpg?itok=GwpgdV5R"}}},"media_ids":["587685"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/podcasts\/ep-3-social-media-and-mental-health","title":"The Interaction Hour podcast: Social Media and Mental Health"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[{"id":"181214","name":"ic-hcc"},{"id":"181215","name":"ic-social-computing"},{"id":"181216","name":"cc-research"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"621151":{"#nid":"621151","#data":{"type":"news","title":"IC\u2019s Caitlyn Seim to Serve as Spring Ph.D. Commencement Speaker","body":[{"value":"\u003Cp\u003E\u003Ca href=\u0022http:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E Ph.D. student \u003Cstrong\u003ECaitlyn Seim\u003C\/strong\u003E will serve as commencement speaker for the Georgia Tech Ph.D. graduation ceremony on May 3.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim, who is advised by IC Professor \u003Cstrong\u003EThad Starner\u003C\/strong\u003E, was chosen by a committee of leaders from across campus, including the Office of the Dean of Students, various faculty, and commencement officials. 
The process included an audition of a speech written by Seim.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHaving recently defended her dissertation for her degree in Human-Centered Computing, Seim said that she is honored by her selection and opportunity to share the stage with Georgia Tech President \u003Cstrong\u003EBud Peterson\u003C\/strong\u003E and Vice Provost for Graduate Education and Faculty Affairs \u003Cstrong\u003EBonnie Ferri\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;I am so thrilled to represent the graduating class, and I can\u0026rsquo;t wait to share my message about the importance of research,\u0026rdquo; Seim said. \u0026ldquo;I love Georgia Tech so much. After all my time here, I still enjoy it as if it were my first day on campus.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESeim, whose research in wearable computing and passive haptic rehabilitation has been covered extensively by external media, said that in her speech she hopes to help graduates think about a recent realization that she had.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That is the significant role we have in society\u0026rsquo;s progress,\u0026rdquo; she said. \u0026ldquo;It\u0026rsquo;s about the formation of knowledge and how Ph.D. students are uniquely trained to evaluate fact and expand what society can achieve. My training in the Human-Centered Computing program actually helped me to begin recognizing this by introducing me to the concept of epistemology.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ELooking back, Seim said she will remember Georgia Tech for its unique student body and beautiful campus.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;For me, I have to put special emphasis on the academic community,\u0026rdquo; she said. \u0026ldquo;The faculty made learning a great experience, and as a graduate student I felt like I was really part of a community.
The student researchers who I mentor continue to impress me and consistently show curiosity, respect, and dedication. It has been a pleasure working with everyone.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe \u003Ca href=\u0022http:\/\/commencement.gatech.edu\/schedule\u0022\u003EPh.D. commencement ceremony\u003C\/a\u003E will take place at 9-10:30 a.m. Friday, May 3, at McCamish Pavilion. Ferri will also speak. No tickets are required for the event.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Seim, who is advised by IC Professor Thad Starner, was chosen by a committee of leaders from across campus, including the Office of the Dean of Students, various faculty, and commencement officials."}],"uid":"33939","created_gmt":"2019-05-01 01:07:58","changed_gmt":"2019-05-01 01:07:58","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-30T00:00:00-04:00","iso_date":"2019-04-30T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"611755":{"id":"611755","type":"image","title":"Caitlyn Seim - PHL","body":null,"created":"1537470856","gmt_created":"2018-09-20 19:14:16","changed":"1537470856","gmt_changed":"2018-09-20 19:14:16","alt":"Caitlyn Seim showing haptic glove","file":{"fid":"232896","name":"Seim Banner.jpg","image_path":"\/sites\/default\/files\/images\/Seim%20Banner.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Seim%20Banner.jpg","mime":"image\/jpeg","size":170103,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Seim%20Banner.jpg?itok=QblfJAZi"}}},"media_ids":["611755"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[{"id":"181210","name":"ic-ubicomp-and-wearable"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620987":{"#nid":"620987","#data":{"type":"news","title":"Georgia Tech\u0027s Child Study Lab Sees Computer Science as New \u0027Microscope\u0027 for Autism Research","body":[{"value":"\u003Cp\u003EWhat if behavior could be mapped and analyzed in much the same way an MRI provides images of the brain or a microscope an up-close look at cells? Both proved to be paradigm shifts in detecting developmental anomalies or diseases like cancer, and \u003Ca href=\u0022http:\/\/www.gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E research at the intersection of computing and early childhood behavior could do the same for autism.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBuilding upon nearly a decade of research, \u003Ca href=\u0022http:\/\/www.childstudylab.gatech.edu\/\u0022\u003EGeorgia Tech\u0026rsquo;s Child Study Lab\u003C\/a\u003E, which was established in 2010 by a $10 million grant from the \u003Ca href=\u0022https:\/\/www.nsf.gov\/\u0022\u003ENational Science Foundation\u003C\/a\u003E, and collaborators at \u003Ca href=\u0022https:\/\/weill.cornell.edu\/\u0022\u003EWeill Cornell Medical College\u003C\/a\u003E were awarded a $1.7 million grant last year from the \u003Ca href=\u0022https:\/\/www.nih.gov\/\u0022\u003ENational Institutes of Health\u003C\/a\u003E.
The grant will help researchers collect new data, using the datasets created over the past decade to develop automated tools that better and more efficiently characterize behaviors that are present and important in typical child development but are often considered to be core, early-emerging markers of autism spectrum disorder (ASD) when absent.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E[VIDEO::https:\/\/youtu.be\/jVldx01ENHM::aVideoStyle]\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=jVldx01ENHM\u0022 target=\u0022_blank\u0022\u003E[RELATED:\u0026nbsp;Using Computer Science to Augment Autism Research at Georgia Tech (VIDEO)]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPsychologists have long understood that there were links between early childhood development and the likelihood of typical language and behavior outcomes throughout life. What they weren\u0026rsquo;t able to do, however, was to study childhood behavior at a granular level similar to that of a microscope. Given the importance of early detection to inform proper interventions, the tedium of human coding and analysis poses a significant challenge.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That process is manual and driven by humans specifying what happens in a frame of a video,\u0026rdquo; said \u003Cstrong\u003EJim Rehg\u003C\/strong\u003E, a professor in the \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and the principal investigator on the NIH award. \u0026ldquo;It takes hours upon hours of data collection and analysis.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EComputing could alter that reality, and this work being done at Georgia Tech is a significant reason why.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Given enough video, we can model the details of behavior,\u0026rdquo; Rehg said. 
\u0026ldquo;Deep learning, married with the ability to collect the data, allows us to build out how our algorithms work in much the same way computer science has been applied to genetics and imaging to make those more powerful and scalable.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThat has long been the mission of the Child Study Lab, and the latest grant will continue to move the needle forward in autism research at Georgia Tech and beyond. Unlike many other conditions, autism spectrum disorder can\u0026rsquo;t be found by taking a blood test or viewing images of the brain. Doctors must analyze behavior through developmental screenings and comprehensive diagnostic evaluations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuring screenings, doctors might talk or play with a child to see how they learn, speak, or behave. Do they exhibit typical communicative skills like joint attention, in which two people use gestures or gaze to share their attention with respect to other objects or events? The skills a child demonstrates in these areas are known to be strong indicators of how they will develop throughout childhood and adolescence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe challenge here is that, given how important it is to detect ASD at an early age and thus tailor interventions and education to meet the child\u0026rsquo;s specific needs, the manual labor that comes with these screenings and evaluations makes it far less efficient than detection of other developmental challenges. 
Autism spectrum disorder affects one in 59 children in the United States alone, and not all who are screened are ultimately determined to be one of those individuals.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe need for objective, automated measurements of behavior is clear, and Rehg \u0026ndash; along with IC Research Scientist \u003Cstrong\u003EAgata Rozga\u003C\/strong\u003E, Child Study Lab coordinator \u003Cstrong\u003EAudrey Southerland\u003C\/strong\u003E, collaborators at Weill Cornell, and more \u0026ndash; are taking steps in that direction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;For us, the goal is to use these computational capabilities to extract the important key moments and information to give clinicians or psychologists the ability to more easily examine a child\u0026rsquo;s behavior,\u0026rdquo; Southerland said. \u0026ldquo;If we can provide additional details through technology about the quality or coordination of important social and communicative behaviors, we can hopefully provide behavioral experts with the capability of exploring these behaviors in much greater detail than currently possible.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe first grant from the NSF funded the creation of the Child Study Lab, which has over the years developed an extensive dataset of behaviors in typically developing children. At the time, it was the first large-scale investment in technology that would assist in modeling and sensing behaviors that underlie developmental conditions like autism spectrum disorder. 
Additional grants have assisted in studies that use computer vision to measure and detect gaze shifts or wearable technology and machine learning to detect and differentiate between types of problem behaviors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe NIH grant brings all the past research together to compare what the sensory data says in relation to human coding, and how that might ultimately serve to develop reliable, objective, automated tools for measuring early, nonverbal communication behaviors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The important thing is for us to make sure that whatever we produce is good enough so that we can actually push it out into the field to people who are specializing in this area,\u0026rdquo; Southerland said. \u0026ldquo;We never want to get rid of the human expert in this field, but we want to build technology they can use to augment and streamline their analysis of behavior.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn addition to the National Institutes of Health and the National Science Foundation, the Child Study Lab has also received funding from the \u003Ca href=\u0022https:\/\/www.simonsfoundation.org\/\u0022\u003ESimons Foundation\u003C\/a\u003E and has partnered with external entities like the \u003Ca href=\u0022https:\/\/www.marcus.org\/\u0022\u003EMarcus Autism Center\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESoutherland and the Child Study Lab are actively seeking families with young children to participate in this study to further develop their automated tools. 
Anyone interested in playing a part in this exciting work can visit the lab\u0026rsquo;s website to learn more.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech\u2019s Child Study Lab, which was established in 2010 by a $10 million grant from the National Science Foundation, and collaborators at Weill Cornell Medical College were awarded last year with a $1.7 million grant from the NIH."}],"uid":"33939","created_gmt":"2019-04-28 23:36:39","changed_gmt":"2019-04-28 23:36:39","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-28T00:00:00-04:00","iso_date":"2019-04-28T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620985":{"id":"620985","type":"image","title":"Autism and Computing Research at Georgia Tech","body":null,"created":"1556487692","gmt_created":"2019-04-28 21:41:32","changed":"1556487692","gmt_changed":"2019-04-28 21:41:32","alt":"Creating the Next in Autism and Computing Research at Georgia Tech\u0027s Child Study Lab","file":{"fid":"236513","name":"Autism and Computing rotator EDIT2.jpg","image_path":"\/sites\/default\/files\/images\/Autism%20and%20Computing%20rotator%20EDIT2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Autism%20and%20Computing%20rotator%20EDIT2.jpg","mime":"image\/jpeg","size":70199,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Autism%20and%20Computing%20rotator%20EDIT2.jpg?itok=wOe24NH3"}}},"media_ids":["620985"],"related_links":[{"url":"http:\/\/www.childstudylab.gatech.edu\/","title":"Child Study Lab at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive 
Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620928":{"#nid":"620928","#data":{"type":"news","title":"IC\u0027s Miranda Parker Uncovering Factors that Lead to CS Programs in Georgia","body":[{"value":"\u003Ch3\u003ELike the majority of research in IC, it comes down to the people\u003C\/h3\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EMiranda Parker\u003C\/strong\u003E was early on in her time as a Ph.D. student in the \u003Ca href=\u0022http:\/\/www.ic.gatech.edu\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E (IC) when she began her first quantitative study. She wanted to see whether she could model the variables that influence whether a school would or would not adopt computer science (CS) as a class for its students.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPrior to the study, the hypothesis was that variables like median income, enrollment numbers, or the population of students who qualify for free and reduced-cost lunch programs could indicate whether or not computer science was implemented. Lower income levels, for example, might correlate to schools that just didn\u0026rsquo;t have the resources to deploy such programs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESomewhat to Parker\u0026rsquo;s surprise, the short answer to that question was \u0026ndash; no. 
No, a higher median income didn\u0026rsquo;t mean more computer science; no, schools with lower free and reduced lunch numbers didn\u0026rsquo;t teach computer science at a higher rate; no, higher enrollment didn\u0026rsquo;t necessarily mean more young students yearning to learn how to code.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOn the surface, that first study might have felt like a failure. If the goal was to prove that income disparity equated to a disparity in who was gaining exposure to a key part of their education, then it may be fair to describe it as such. However, Parker looks back on that study as a key component of what has guided her research at \u003Ca href=\u0022http:\/\/www.gatech.edu\u0022\u003EGeorgia Tech\u003C\/a\u003E ever since.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt wasn\u0026rsquo;t a failure, she said. It just helped open her eyes to some realities she may not have noticed otherwise.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Part of me wanted my first study to fail because part of me didn\u0026rsquo;t want to be able to say, \u0026lsquo;Oh, yes, these three things mean more computer science,\u0026rsquo;\u0026rdquo; she said. \u0026ldquo;Sure, it\u0026rsquo;s snazzy. It\u0026rsquo;s easy to put on a Facebook post. But it\u0026rsquo;s so much more complicated than that. And I\u0026rsquo;m glad that it\u0026rsquo;s more complicated than that.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOver the years, Parker, who studies \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/academics\/human-centered-computing-phd-program\u0022\u003Ehuman-centered computing\u003C\/a\u003E with a focus on computer science education, has gained a deeper understanding of what might influence a public high school in Georgia to offer computer science education. None of the above items are among them. 
What has shown some correlation, she said, is a bit more complex.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If a school had computer science in 2016, the correlation was that it also had computer science in 2015, 2014, and 2013,\u0026rdquo; she said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOkay, but how did it get started in 2013? That\u0026rsquo;s part of the question her research is trying to uncover.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;That\u0026rsquo;s an endless cycle,\u0026rdquo; she explained. \u0026ldquo;You had it before, now you still have it. But how did you get it to begin with?\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne thing she\u0026rsquo;s learned, which can be said for a majority of research in IC, is that it comes down to the people. Who is involved with a school and what connections do they have to a particular subject? If a connection has worked in CS in the past or is passionate about adding it to the school, the results indicate the school is much more likely to offer that subject.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMakes sense, right?\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If a school has someone who can teach computer science and there are parents saying we need to teach computer science, then whether it\u0026rsquo;s rural or urban or high or low income, it doesn\u0026rsquo;t matter,\u0026rdquo; Parker said. \u0026ldquo;They will have computer science. But if there\u0026rsquo;s no one there to push them, it\u0026rsquo;s much less likely.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt\u0026rsquo;s not just a person, either. 
Organizations like Georgia Tech\u0026rsquo;s \u003Ca href=\u0022http:\/\/constellations.gatech.edu\/\u0022\u003EConstellations Center for Equity in Computing\u003C\/a\u003E and the \u003Ca href=\u0022https:\/\/www.ceismc.gatech.edu\/\u0022\u003ECenter for Education Integrating Science, Math, and Computing\u003C\/a\u003E are also championing K-12 CS educational opportunities.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut, Parker said, being successful is a bit more complicated than just serving CS up to the masses in communities that are unfamiliar with these and other organizations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Computer science isn\u0026rsquo;t the end all, be all,\u0026rdquo; she said. \u0026ldquo;If a school is in a more agricultural-based county, that may benefit the school more than a heavy computer science program would. It\u0026rsquo;s about finding how computer science can most benefit students in different ways for different areas.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe most encouraging thing about that research, Parker said, was that the failure of her original study showed her one important piece of information.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;You don\u0026rsquo;t need high income to have computer science,\u0026rdquo; she said. \u0026ldquo;It really can be for everyone. That\u0026rsquo;s an important piece of information to know.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EParker is aiming to finish her Ph.D. work in the fall and will decide between pursuing a faculty position, which she is leaning toward now, or other opportunities that may present themselves down the road. 
Former Georgia Tech Professor \u003Cstrong\u003EMark Guzdial\u003C\/strong\u003E, now a faculty member at the University of Michigan, is Parker\u0026rsquo;s advisor.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Miranda Parker is investigating the qualities in high schools that lead to having a CS program in Georgia. One thing she\u2019s learned, which can be said for a majority of research in IC, is that it comes down to the people."}],"uid":"33939","created_gmt":"2019-04-25 22:05:16","changed_gmt":"2019-04-25 22:05:16","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-25T00:00:00-04:00","iso_date":"2019-04-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620927":{"id":"620927","type":"image","title":"Miranda Parker","body":null,"created":"1556228807","gmt_created":"2019-04-25 21:46:47","changed":"1556228807","gmt_changed":"2019-04-25 21:46:47","alt":"Miranda Parker stands by the street","file":{"fid":"236482","name":"Parker rotator.jpg","image_path":"\/sites\/default\/files\/images\/Parker%20rotator.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Parker%20rotator.jpg","mime":"image\/jpeg","size":117463,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Parker%20rotator.jpg?itok=RXiQx4_O"}}},"media_ids":["620927"],"related_links":[{"url":"https:\/\/www.ic.gatech.edu\/academics\/human-centered-computing-phd-program","title":"Human-Centered Computing at Georgia Tech"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"431631","name":"OMS"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and 
Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620597":{"#nid":"620597","#data":{"type":"news","title":"MIT Press Publishes Collected Volume of Georgia Tech Blended Learning Research","body":[{"value":"\u003Cp\u003EMIT Press has released a comprehensive, new volume of blended learning research by\u0026nbsp;Georgia Tech faculty. \u003Ca href=\u0022https:\/\/mitpress.mit.edu\/books\/blended-learning-practice\u0022\u003E\u003Cem\u003EBlended Learning in Practice: A Guide for Practitioners and Researchers\u003C\/em\u003E\u003C\/a\u003E was collected and edited by a team housed within the Center for 21st Century Universities (C21U) and spanning a number of departments across the Institute.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe publisher describes this cross-disciplinary volume as, \u0026ldquo;A guide to both theory and practice of blended learning offering rigorous research, case studies, and methods for the assessment of educational effectiveness.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe editorial team for the volume is comprised of the College of Computing\u0026rsquo;s \u003Cstrong\u003EAshok Goel\u003C\/strong\u003E, the\u0026nbsp;School of Literature, Media, and Communication\u0026rsquo;s \u003Cstrong\u003EAmanda Madden\u003C\/strong\u003E, the Strada Institute for the Future of Work\u0026#39;s \u003Cstrong\u003ERob Kadel\u003C\/strong\u003E, and Georgia State University\u0026rsquo;s \u003Cstrong\u003ELauren Margulieux\u003C\/strong\u003E. 
\u003Cem\u003EBlended Learning in Practice: A Guide for Practitioners and Researchers\u003C\/em\u003E explores the work of more than two dozen contributors and represents a range of approaches and models of blended learning from faculty in nearly every school across the Institute.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOn April 11, C21U hosted \u003Ca href=\u0022https:\/\/youtu.be\/vUZ7EfFii_4\u0022\u003Ea panel discussion and launch celebration\u003C\/a\u003E for the editors and contributors of the volume. \u003Cstrong\u003EGoel\u003C\/strong\u003E, \u003Cstrong\u003EKadel\u003C\/strong\u003E, and \u003Cstrong\u003EMargulieux\u003C\/strong\u003E, as well as contributors \u003Cstrong\u003EJoe Bankoff\u003C\/strong\u003E and \u003Cstrong\u003EDavid Joyner\u003C\/strong\u003E appeared on a panel to share their experiences with blended learning best practices, origins of the book, as well as \u0026ldquo;behind the scenes\u0026rdquo; details of the three-and-a-half year production and revision process.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The origins of the book really lie in discussions within C21U,\u0026rdquo; said \u003Cstrong\u003EGoel\u003C\/strong\u003E. \u0026ldquo;This came about soon after the founding of C21U when various faculty would say, \u0026lsquo;We know about blended learning and we want to do it in our classes, but we don\u0026rsquo;t have the resources to do it right or we don\u0026rsquo;t know quite how to do it.\u0026rsquo;\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe editors perceived a need for comprehensive research and guidance for practitioners of blended learning but also for researchers interested in studying the efficacy and methodology of the practice. 
\u003Cem\u003EBlended Learning in Practice: A Guide for Practitioners and Researchers\u003C\/em\u003E provides guidelines and case studies that include the use of Assassin\u0026rsquo;s Creed II in a first-year composition course, a blended global issues and leadership laboratory, a knowledge-based AI course blended with a MOOC, and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As we went through the process of compiling and writing this volume, we ended up with 14 chapters from faculty across several colleges at Georgia Tech that tell very rich and detailed stories,\u0026rdquo; said \u003Cstrong\u003EKadel\u003C\/strong\u003E. \u0026ldquo;We\u0026rsquo;re incredibly grateful for those submissions from faculty. It\u0026rsquo;s not just a computer science, physical science,\u0026nbsp;or communications blended learning book. It\u0026rsquo;s a real triumph for us that we can demonstrate not only to the Georgia Tech community but to the broader community that Georgia Tech is able to bring together a number of differing perspectives on a way of teaching and show that there is real cohesion.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EYou can watch a recording of the editor and contributor discussion on the \u003Ca href=\u0022https:\/\/youtu.be\/vUZ7EfFii_4\u0022\u003EC21U Youtube channel\u003C\/a\u003E. 
Visit the MIT Press website for more information about \u003Ca href=\u0022https:\/\/mitpress.mit.edu\/books\/blended-learning-practice\u0022\u003EBlended Learning in Practice: A Guide for Practitioners and Researchers\u003C\/a\u003E.\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EIf you are interested in becoming involved with blended learning or blended learning research at Georgia Tech, you can reach out to the Center for 21st Century Universities (C21U) for more information via \u003Ca href=\u0022mailto:ed-innovation@gatech.edu\u0022\u003Eed-innovation@gatech.edu\u003C\/a\u003E.\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EContributors to \u003Cem\u003EBlended Learning in Practice: A Guide for Practitioners and Researchers\u003C\/em\u003E:\u003C\/strong\u003E\u003Cbr \/\u003E\r\nJoe Bankoff, Paula Braun, Mark Braunstein, Marion L. Brittain, Timothy G. Buchman, Rebecca E. Burnett, Aldo A. Ferri, Bonnie Ferri, Andy Frazee, Mohammed M. Ghassemi, Ashok K. Goel, Alyson B. Goodman, Joyelle Harris, Cheryl Hiddleson, David Joyner, Robert S. Kadel, Kenneth J. Knoespel, Joe Le Doux, Amanda G. Madden, Lauren Margulieux, Olga Menagarishvili, Shamim Nemati, Vjollca Sadiraj, Donald Webster\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"MIT Press has released a comprehensive, new volume of blended learning research conducted by Georgia Tech faculty. 
Blended Learning in Practice: A Guide for Practitioners and Researchers was collected and edited by a team housed within C21U."}],"uid":"27998","created_gmt":"2019-04-17 19:57:15","changed_gmt":"2019-04-17 20:20:52","author":"Brittany Aiello","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-17T00:00:00-04:00","iso_date":"2019-04-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620598":{"id":"620598","type":"image","title":"Blended Learning in Practice Panel Discussion","body":null,"created":"1555531332","gmt_created":"2019-04-17 20:02:12","changed":"1555531332","gmt_changed":"2019-04-17 20:02:12","alt":"A discussion amongst the editors of MIT Press\u0027 new volume, Blended Learning in Practice. Pictured (left to right) are Rob Kadel, Ashok Goel, Lauren Margulieux, and moderating Brittany Aiello.","file":{"fid":"236315","name":"Screen Shot 2019-04-17 at 3.10.50 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-04-17%20at%203.10.50%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-04-17%20at%203.10.50%20PM.png","mime":"image\/png","size":3868470,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-04-17%20at%203.10.50%20PM.png?itok=67FQfvzM"}}},"media_ids":["620598"],"groups":[{"id":"66244","name":"C21U"},{"id":"47223","name":"College of Computing"},{"id":"431631","name":"OMS"},{"id":"131901","name":"Provost"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"13481","name":"C21U"},{"id":"177708","name":"blended learning"},{"id":"10037","name":"mit press"},{"id":"112431","name":"ashok goel"},{"id":"181055","name":"Amanda Madden"},{"id":"39781","name":"LMC"},{"id":"14381","name":"center for 21st century universities"},{"id":"167943","name":"School of Literature Media and 
Communication"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39511","name":"Public Service, Leadership, and Policy"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBrittany Aiello\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECNE and C21U Communications\u003C\/p\u003E\r\n\r\n\u003Cp\u003Ebrittany@c21u.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["brittany@c21u.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"620459":{"#nid":"620459","#data":{"type":"news","title":"College\u0027s Skyrocketing Stature, Global Impact Highlights of Galil\u0027s Legacy as Dean of Computing  ","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EZvi Galil\u003C\/strong\u003E, the John P. Imlay Jr. Dean of Computing at the Georgia Institute of Technology, will be \u003Ca href=\u0022https:\/\/b.gatech.edu\/2DaFCqr\u0022 target=\u0022_blank\u0022\u003Estepping down from the deanship on June 30\u003C\/a\u003E, concluding nine years of transformational achievement and numerous successes at the College. He will be returning to the faculty to teach, research, and serve as an ambassador of Georgia Tech\u0026#39;s online programs.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGalil\u0026rsquo;s deanship was marked by accomplishments on many fronts. 
Under his leadership the College has risen into the top eight nationally, top seven internationally \u0026ndash; the only top 10 computer science program to rise either in rank or in score in the last ranking (2018).\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn a measure of the College\u0026rsquo;s public perception, applications to the College have grown ten-fold, and enrollment in on-campus degree programs has nearly doubled during Galil\u0026rsquo;s tenure as dean.\u0026nbsp;Computing is now the largest major at the university, and the most selective \u0026ndash; our majors average higher than 1500 on the SATs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/b.gatech.edu\/2Xgdp96\u0022 target=\u0022_blank\u0022\u003E[RELATED:\u0026nbsp;College of Computing Rises to No. 8 in U.S. News Rankings]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe College\u0026rsquo;s reputation among employers and alumni has seen dramatic enhancement, as well. As a result, the College\u0026#39;s career fairs\u0026nbsp;and its \u003Ca href=\u0022https:\/\/b.gatech.edu\/2xXpdDe\u0022 target=\u0022_blank\u0022\u003Ecorporate affiliates program\u003C\/a\u003E\u0026nbsp;have grown in stature in recent years. The \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/content\/college-computing-career-fair-student-information\u0022 target=\u0022_blank\u0022\u003EGT Computing Career Fair\u003C\/a\u003E regularly sets new attendance records with more than 160 companies participating\u0026nbsp;(with nearly 20 companies waitlisted) this year\u0026nbsp;in the Klaus Building Atrium. Several hundred\u0026nbsp;students from across campus attended each day of the four-day event.\u003C\/p\u003E\r\n\r\n\u003Cblockquote\u003E\r\n\u003Cp\u003E\u0026ldquo;Being a dean is about community building, about involvement, support, and empowerment. You\u0026rsquo;re closer to students, you\u0026rsquo;re closer to staff\u0026nbsp;and faculty. 
I view my role as dean as working to inspire our community by helping them to connect, encouraging them to excel, increasing their confidence.\u0026rdquo; - Zvi Galil\u003C\/p\u003E\r\n\u003C\/blockquote\u003E\r\n\r\n\u003Cp\u003EMore and more companies are also participating in the College\u0026#39;s corporate affiliates program (CAP). During Galil\u0026#39;s tenure as dean, CAP grew from 14 companies generating $280,000 in membership fees in 2010, to 63 companies raising $1.13 million in the current academic year. Galil exceeded the annual campus fundraising campaign goal by 40 percent \u0026ndash;\u0026nbsp;the largest percentage above the goal of any unit at Georgia Tech. Alumnus \u003Ca href=\u0022https:\/\/issuu.com\/gtalumni\/docs\/vol91_no2_low_res\/67\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EJames Liang\u003C\/strong\u003E\u0026#39;s gift of $1.5 million for an endowed chair\u003C\/a\u003E was at the time the largest international gift in Georgia Tech history, and the only endowed chair by an international donor.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe successful fundraising provided the resources for continued investment in the College and its faculty, and also helped fund four \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/content\/research-centers-and-initiatives\u0022 target=\u0022_blank\u0022\u003EInterdisciplinary Research Institutes and four Interdisciplinary Research Centers\u003C\/a\u003E led by the College. Galil doubled the number of endowed senior faculty chairs to 10, in addition to four new junior faculty chairs. 
Faculty rose from 85 to 102, with six or more to join later this year.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/b.gatech.edu\/2xXpdDe\u0022 target=\u0022_blank\u0022\u003E[RELATED:\u0026nbsp;Corporate Affiliates Program Paying Off for GT Computing Students]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIt is a testament to the values and productivity of the College\u0026rsquo;s faculty that, with just 8 percent of Georgia Tech faculty, GT Computing teaches about 18 percent of the Institute\u0026rsquo;s credit hours (about 13 percent of undergraduate and about 24 percent of graduate credit hours).\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Under Zvi\u0026rsquo;s leadership the standing of the college has improved along a host of traditional metrics \u0026ndash; but truly great universities are in the center of the important issues of the day,\u0026rdquo; said Executive Associate Dean \u003Cstrong\u003ECharles Isbell\u003C\/strong\u003E, \u003Ca href=\u0022https:\/\/b.gatech.edu\/2OGKckA\u0022 target=\u0022_blank\u0022\u003Ewho will take over as dean on July 1\u003C\/a\u003E. \u0026ldquo;Through OMSCS, Zvi has led the way in moving the college to the center of perhaps the most important of national discussions: the role of affordability and access in computing. That is a transformative accomplishment.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EOMSCS\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECreating the College\u0026rsquo;s \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/617084\/omscs-five-years-cyber-pioneer\u0022\u003Enow-famous Online Master of Science in Computer Science (OMSCS) program\u003C\/a\u003E took years of labor from dozens of faculty and staff members. 
Galil\u0026rsquo;s vision was the driving force behind the entire project, however, and guided many of the decisions that make the program so distinctive.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne of the most significant was OMSCS\u0026rsquo; unique admissions policy. Instead of admitting only a few of the highest-achieving applicants, Galil insisted that the program be open to anyone who had met the requirements. Those online students have been just as successful as the on-campus students admitted through a much more selective process.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENow, five years after its founding, the online master\u0026rsquo;s has nearly 9,000 students and an \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/620099\/omscs-revolution-will-be-digitized\u0022\u003Einternational reputation for changing the game in online education\u003C\/a\u003E. The OMSCS program embodies \u003Cem\u003EGeorgia Tech\u0026#39;s motto\u003C\/em\u003E of \u003Cem\u003EProgress and \u003C\/em\u003E\u003Cem\u003EService\u003C\/em\u003E with its unique combination of prestige, accessibility, and affordability. Its launch has changed national and international perspectives on Georgia Tech.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/b.gatech.edu\/2qg2OwD\u0022 target=\u0022_blank\u0022\u003E[RELATED: Juggling Careers, Grad School, Kids: One Family\u0026rsquo;s Story of How They Make OMSCS Work]\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;OMSCS offers wider access to the high quality of our residential program at a substantially lower cost. It helps realign today\u0026rsquo;s workforce with the requirements of a thriving 21st-century economy. 
This is a fundamental, revolutionary shift from the prevailing paradigm of higher education, in which a brand is bolstered by exclusion and high tuition fees,\u0026rdquo; Galil said.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EBuilding a community\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Being a dean is about community building, about involvement, support, and empowerment,\u0026rdquo; Galil said. \u0026ldquo;You\u0026rsquo;re closer to students, you\u0026rsquo;re closer to staff and faculty. I view my role as dean as working to inspire our community by helping them to connect, encouraging them to excel, increasing their confidence.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGalil has made particular efforts to integrate staff members into the community \u0026ndash; through regular meetings and an annual staff retreat \u0026ndash; and is well known for matching high standards with a collaborative approach and approachability.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Zvi pushes for excellence in a way that stretches everyone,\u0026rdquo; said \u003Cstrong\u003EAlan Katz\u003C\/strong\u003E, assistant dean for finances and administration. \u0026ldquo;He believes in sharing information, serving others, and providing incentives \u0026ndash; he\u0026rsquo;s a carrot person, not a stick person.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;You would never know Zvi has such a high status because he\u0026rsquo;s so down to earth,\u0026rdquo; said \u003Cstrong\u003EPam Ruffin\u003C\/strong\u003E, director of human resources for the college. \u0026ldquo;You can walk up to his door and he\u0026rsquo;ll take time to talk to you.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEven those who don\u0026rsquo;t make it to his office hear from Galil regularly, through a steady stream of e-mails he sends out to the entire GT Computing community. 
Although he is known as \u0026ldquo;the e-mail dean,\u0026rdquo; he almost never mentions himself in his missives. \u0026ldquo;I love to brag about the achievements of faculty, staff, and students,\u0026rdquo; he said. \u0026ldquo;I want everyone to know they are the most important part of the College.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EIn parting\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGalil \u0026ndash; a highly influential scholar in the fields of algorithmic design and analysis, and computational complexity and cryptography \u0026ndash;\u0026nbsp;is a member of the National Academy of Engineering, and a fellow of the Association for Computing Machinery and of the American Academy of Arts \u0026amp; Sciences. Prior to coming to Georgia Tech, he served as the dean of engineering at Columbia University and the president of Tel Aviv University.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYet, he views his deanship at GT Computing as the most satisfying period of his career.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;In OMSCS, we pioneered a program that proved high-quality, cost-reduced online education at scale is doable, and that it satisfies an unmet need \u0026ndash; being radically more accessible and affordable than on-campus options,\u0026rdquo; Galil said. 
\u0026ldquo;I view it as my greatest achievement.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAs for his message to GT Computing faculty, staff, students, and alumni, \u003Cstrong\u003E\u0026ldquo;GO JACKETS!\u0026rdquo;\u003C\/strong\u003E\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Georgia Tech\u0027s Zvi Galil is stepping down following his highly successful tenure as dean of the College of Computing."}],"uid":"32045","created_gmt":"2019-04-16 14:37:10","changed_gmt":"2019-04-16 21:09:51","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-16T00:00:00-04:00","iso_date":"2019-04-16T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620501":{"id":"620501","type":"image","title":"Zvi Galil deanship banner","body":null,"created":"1555448959","gmt_created":"2019-04-16 21:09:19","changed":"1555448959","gmt_changed":"2019-04-16 21:09:19","alt":"web banner for Zvi Galil","file":{"fid":"236262","name":"Super Zvi rotator_april2019.jpeg","image_path":"\/sites\/default\/files\/images\/Super%20Zvi%20rotator_april2019.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Super%20Zvi%20rotator_april2019.jpeg","mime":"image\/jpeg","size":1328486,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Super%20Zvi%20rotator_april2019.jpeg?itok=K9UAyEyp"}}},"media_ids":["620501"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"606703","name":"Constellations Center"},{"id":"576491","name":"CRNCH"},{"id":"545781","name":"Institute for Data Engineering and Science"},{"id":"430601","name":"Institute for Information Security and Privacy"},{"id":"576481","name":"ML@GT"},{"id":"66442","name":"MS HCI"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of 
Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"9152","name":"zvi galil"},{"id":"46361","name":"GT computing"},{"id":"181043","name":"deanship"},{"id":"121521","name":"OMSCS"},{"id":"181044","name":"stepping down"},{"id":"10664","name":"charles isbell"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAnn Claycombe, Director of Communications\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:ann.claycombe@cc.gatech.edu?subject=Zvi\u0027s%20Deanship%20Story\u0022\u003Eann.claycombe@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["ann.claycombe@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"620430":{"#nid":"620430","#data":{"type":"news","title":"The Intersection of Artificial Intelligence and Statistics: ML@GT Researchers Present 12 Papers at AISTATS","body":[{"value":"\u003Cp\u003EHeld in Naha, Okinawa, Japan, the \u003Ca href=\u0022https:\/\/www.aistats.org\/\u0022\u003E22\u003Csup\u003End\u003C\/sup\u003E International Conference on Artificial Intelligence and Statistics (AISTATS)\u003C\/a\u003E draws researchers from all over the world to present their latest findings in artificial intelligence, machine learning, statistics, and related areas. \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EThe Machine Learning Center at Georgia Tech\u003C\/a\u003E (ML@GT) researchers will present 12 papers at the 2019 conference, held April 16-18.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;AISTATS is an exciting conference that allows for engaging conversations and interactions at the intersection of machine learning and statistics. 
ML@GT is thrilled to be a part of this growing conference and we are looking forward to connecting with other researchers from around the world,\u0026rdquo; said \u003Cstrong\u003ESebastian Pokutta, \u003C\/strong\u003Eassociate director of ML@GT and a paper author.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EML@GT faculty members \u003Cstrong\u003ELe Song\u003C\/strong\u003E, \u003Cstrong\u003EByron Boots\u003C\/strong\u003E, and \u003Cstrong\u003ENegar Kiyavash\u003C\/strong\u003E are 2019 area chairs.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u0026rsquo;s twelve papers:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1802.03692.pdf\u0022\u003ENearly Optimal Adaptive Procedure for Piecewise-Stationary Bandit: a Change-Point Detection Approach\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1710.04740.pdf\u0022\u003ERobust Submodular Maximization: Offline and Online Algorithms\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1810.02429.pdf\u0022\u003ERestarting Frank-Wolfe\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1810.10667\u0022\u003ETruncated Back-propagation for Bilevel Optimization\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1806.04642.pdf\u0022\u003EAccelerating Imitation Learning with Predictive Models\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1810.00737.pdf\u0022\u003ERisk-Averse Stochastic Convex Bandit\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1807.02290.pdf\u0022\u003EDifferentially Private Online Submodular Minimization\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1710.04740\u0022\u003EStructured Robust Submodular Maximization: Offline and Online 
Algorithms\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1811.02228.pdf\u0022\u003EKernel Exponential Family Estimation via Doubly Dual Embedding\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1802.07372\u0022\u003EStochastic Variance-Reduced Cubic Regularization for Nonconvex Optimization\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1903.01422.pdf\u0022\u003EDatabase Alignment with Gaussian Features\u003C\/a\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1806.05151.pdf\u0022\u003EOn Landscape of Lagrangian Function for Stochastic Search for Constrained Nonconvex Optimization\u003C\/a\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"AISTATS is an artificial intelligence, machine learning, and statistics conference that begins on April 16, 2019. Georgia Tech will present 12 papers at the conference. 
"}],"uid":"34773","created_gmt":"2019-04-15 18:02:55","changed_gmt":"2019-04-15 18:02:55","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-15T00:00:00-04:00","iso_date":"2019-04-15T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620358":{"id":"620358","type":"image","title":"AISTATS 2019 will be held in Okinawa, Japan where Georgia Tech researchers will present 12 papers.","body":null,"created":"1555077996","gmt_created":"2019-04-12 14:06:36","changed":"1555077996","gmt_changed":"2019-04-12 14:06:36","alt":"","file":{"fid":"236215","name":"AISTATS.jpg","image_path":"\/sites\/default\/files\/images\/AISTATS.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/AISTATS.jpg","mime":"image\/jpeg","size":301416,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/AISTATS.jpg?itok=yCFipcyk"}}},"media_ids":["620358"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620364":{"#nid":"620364","#data":{"type":"news","title":"People May Be Able to Find Images on a Computer Based Solely on Their Eye Movements","body":[{"value":"\u003Cp\u003EWhen humans try to recall images from memory, they involuntarily move their eyes in a 
pattern that is similar to when they are actually looking at the image.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJames Hays\u003C\/strong\u003E, an associate professor in the \u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/\u0022\u003ESchool of Interactive Computing\u003C\/a\u003E and the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech\u003C\/a\u003E, and researchers from TU Berlin and Universit\u0026auml;t Regensburg, are looking at how these patterns, known as gaze patterns, can be used to retrieve images from memory so that it\u0026rsquo;s easier to find that same image \u0026ndash; like an adorable dog photo \u0026ndash; stashed away in the digital cloud.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThrough a controlled lab experiment and a real-world scenario, Hays and his co-authors have developed a matching technique using machine learning to help computers understand what image someone is thinking of, and accurately retrieve it from a computer folder \u0026ndash; based solely on eye movements.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing eye-tracking software in the lab, the researchers recorded the eye movements of 30 participants as they looked at 100 different indoor and outdoor images, ranging from picturesque lighthouse scenes to cozy living rooms. Participants were then asked to look at a blank screen and recall any of the 100 images they just saw.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers also conducted a realistic scenario by putting together a mock museum with 20 posters of various sizes and orientations spread throughout the \u0026ldquo;museum.\u0026rdquo; They outfitted each participant with a headset fitted with a \u003Ca href=\u0022https:\/\/pupil-labs.com\/pupil\/\u0022\u003EPupil mobile eye tracker\u003C\/a\u003E, which includes two eye cameras and one front-facing camera. 
Participants were asked to walk around the museum and look at all of the images, taking however long they liked, and in whatever order they preferred. They took anywhere from a few seconds to over a minute looking at each poster.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAfter looking at all of the images, participants were asked to look at a blank whiteboard and recall as many of their favorite images as possible, in any order. Participants remembered between 5 and 10 of the total 20 poster images.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe results from both experiments indicated that the gaze patterns of people looking at a photograph contain a unique signature that computers can use to accurately determine the corresponding photo.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EUsing the data collected from the experiments, researchers created spatial histograms, or heat maps, that could be analyzed by their new machine learning technique to determine which photo someone was thinking about. Hays and Co. also used a \u003Ca href=\u0022https:\/\/en.wikipedia.org\/wiki\/Convolutional_neural_network\u0022\u003EConvolutional Neural Network (CNN)\u003C\/a\u003E to look at the 2,700 collected heat maps.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The ability to retrieve images using eye movements would be beneficial to those who are disabled or unable to search for images using their hands or voice,\u0026rdquo; said Hays. \u0026ldquo;Also, wearable technology is a huge industry right now, and we believe that tracking motion with the eyes would be a natural by-product of that boom.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn Hays\u0026rsquo; previous research, \u003Ca href=\u0022https:\/\/arxiv.org\/abs\/1801.02753\u0022\u003ESketchyGAN\u003C\/a\u003E, people are able to draw (rather than type) what they are looking for to get image search results. But, if images are mislabeled or people can\u0026rsquo;t draw that well, search results are not useful. 
Other attempts at image retrieval have included various types of brain scans, but those are often too expensive and impractical for everyday use.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWhile this new research may prove helpful to people, it does not come without limitations, researchers note. The scalability of the model in part depends on image content and how many images are in the database. The more images the database holds, the more likely it is that several different photos will produce extremely similar gaze patterns.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne proposed workaround to this potential issue is asking people to make more extensive eye movements than they normally would. At the moment, participants are not asked to do anything more intentional or out of the norm when looking at the images. Researchers think that putting a small amount of effort back on the user would help the computer find the correct image.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAnother foreseen problem is working with people\u0026rsquo;s memories. As people\u0026rsquo;s memories grow weaker with time or age, it will be harder to get a crisp gaze pattern and accurately return the right image. The team plans to explore these potential issues in the future by looking into the influence of memory decay and how it affects image retrieval from long-term memory.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe authors are also looking into combining gaze tracking with a speech interface, as that could be a rich resource for information. 
No matter which direction they go, the team believes that eye-movement image retrieval is not only possible but also a significant next step to improving human and computer interaction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne might even say that before long, people will be able to find that favorite dog photo in the blink of an eye.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFurther details on this approach to image retrieval can be found in the paper, \u003Ca href=\u0022http:\/\/cybertron.cg.tu-berlin.de\/xiwang\/files\/mi.pdf\u0022\u003E\u0026ldquo;The Mental Image Revealed by Gaze Tracking,\u0026rdquo;\u003C\/a\u003E which has been accepted at the ACM Conference on Human Factors in Computing Systems (CHI 2019), May 4-9.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"What if we could find images on our computer just by tracking our eye movements? ML@GT associate professor James Hays explores this idea in new research that will be presented next month at CHI 2019."}],"uid":"34773","created_gmt":"2019-04-12 14:42:21","changed_gmt":"2019-04-12 20:51:03","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-12T00:00:00-04:00","iso_date":"2019-04-12T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620361":{"id":"620361","type":"image","title":"Machine Learning at Georgia Tech and School of Interactive Computing associate professor James Hays collaborated with researchers from TU Berlin and Universit\u00e4t Regensburg to create new eye-tracking software.","body":null,"created":"1555079754","gmt_created":"2019-04-12 14:35:54","changed":"1555102299","gmt_changed":"2019-04-12 20:51:39","alt":"","file":{"fid":"236216","name":"Screen Shot 2019-04-12 at 10.33.09 
AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png","mime":"image\/png","size":951664,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-04-12%20at%2010.33.09%20AM.png?itok=aI5T1_BW"}},"620363":{"id":"620363","type":"image","title":"In one experiment, participants were outfitted with a Pupil mobile eye tracker and asked to observe art in a fake museum.","body":null,"created":"1555079859","gmt_created":"2019-04-12 14:37:39","changed":"1555079859","gmt_changed":"2019-04-12 14:37:39","alt":"","file":{"fid":"236217","name":"Screen Shot 2019-04-12 at 10.33.34 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png","mime":"image\/png","size":1804726,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-04-12%20at%2010.33.34%20AM.png?itok=FFmiuM7e"}}},"media_ids":["620361","620363"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"135","name":"Research"},{"id":"153","name":"Computer Science\/Information Technology and Security"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications 
Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620328":{"#nid":"620328","#data":{"type":"news","title":"IC Student Brianna Tomlinson Earns Campus Life Scholarship","body":[{"value":"\u003Cp\u003ESchool of Interactive Computing Ph.D. student \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.ic.gatech.edu\/content\/brianna-tomlinson\u0022\u003EBrianna Tomlinson\u003C\/a\u003E\u003C\/strong\u003E was awarded the \u003Ca href=\u0022https:\/\/campusservices.gatech.edu\/scholarships\u0022\u003ECampus Life Scholarship\u003C\/a\u003E in recognition of her leadership, scholarship, and service to Georgia Tech. The scholarship provides $5,000 from Campus Services and offers a lunch to honor recipients on April 18.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETomlinson is involved in the \u003Ca href=\u0022http:\/\/women.cc.gatech.edu\/grad.html\u0022\u003EGraduate Women@CC\u003C\/a\u003E group, helping to organize events. She has been involved in some capacity with the group since she came to Georgia Tech six years ago. The group is a collection of female graduate students who support one another\u0026rsquo;s professional success. They meet once each month for coffee, where they discuss current projects they are working on, and also help to organize various workshops throughout the year.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;It\u0026rsquo;s great to hear that people think my impact on GradWomen has been a good one, and the work to keep it going has been useful for the greater campus community,\u0026rdquo; Tomlinson said. \u0026ldquo;I\u0026rsquo;m hoping that it will actually help others learn about GradWomen and encourage them to get involved.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETomlinson is working toward her Ph.D. in human-centered computing. 
Her current work is on evaluating effective methods for studying engagement, learning, and transfer for multimodal interactive systems. This includes collaboration on a grant to develop and evaluate accessible auditory displays for PhET Interactive Simulations, a non-profit open educational resource project at the University of Colorado that creates and hosts explorable explanations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EShe is advised by Professor \u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/people\/bruce-walker\u0022\u003EBruce Walker\u003C\/a\u003E\u003C\/strong\u003E, who is jointly appointed in the School of Interactive Computing and the School of Psychology.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"The scholarship provides $5,000 from Campus Services and offers a lunch to honor recipients on April 18."}],"uid":"33939","created_gmt":"2019-04-11 16:57:11","changed_gmt":"2019-04-11 16:57:11","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-11T00:00:00-04:00","iso_date":"2019-04-11T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620327":{"id":"620327","type":"image","title":"Brianna Tomlinson","body":null,"created":"1555001765","gmt_created":"2019-04-11 16:56:05","changed":"1555001765","gmt_changed":"2019-04-11 16:56:05","alt":"Brianna Tomlinson","file":{"fid":"236201","name":"brianna_tomlinson_headshot.jpg","image_path":"\/sites\/default\/files\/images\/brianna_tomlinson_headshot.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/brianna_tomlinson_headshot.jpg","mime":"image\/jpeg","size":119012,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/brianna_tomlinson_headshot.jpg?itok=DVAnKN9p"}}},"media_ids":["620327"],"groups":[{"id":"47223","name":"College of 
Computing"},{"id":"1299","name":"GVU Center"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620251":{"#nid":"620251","#data":{"type":"news","title":"Georgia Tech\u2019s Newest AI System Explains Its Decisions to People in Real-Time to Understand User Preferences","body":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers, in collaboration with Cornell and University of Kentucky, have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. The work is designed to give humans engaging with AI agents or robots confidence that the agent is performing the task correctly and can explain a mistake or errant behavior.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe agent also uses everyday language that non-experts can understand. The explanations, or \u0026ldquo;rationales\u0026rdquo; as the researchers call them, are designed to be relatable and inspire trust in those who might be in the workplace with AI machines or interact with them in social situations.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;If the power of AI is to be democratized, it needs to be accessible to anyone regardless of their technical abilities,\u0026rdquo; said \u003Cstrong\u003EUpol Ehsan\u003C\/strong\u003E, Ph.D. 
student in the School of Interactive Computing at Georgia Tech and lead researcher.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As AI pervades all aspects of our lives, there is a distinct need for human-centered AI design that makes black-boxed AI systems explainable to everyday users. Our work takes a formative step toward understanding the role of language-based explanations and how humans perceive them.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers developed a participant study to determine if their AI agent could offer rationales that mimicked human responses. Spectators watched the AI agent play the videogame Frogger and then ranked three on-screen rationales in order of how well each described the AI\u0026rsquo;s game move.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOf the three anonymized justifications for each move \u0026ndash; a human-generated response, the AI-agent response, and a randomly generated response \u0026ndash; the participants preferred the human-generated rationales first, but the AI-generated responses were a close second.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFrogger offered the researchers the chance to train an AI in a \u0026ldquo;sequential decision-making environment,\u0026rdquo; which is a significant research challenge because decisions that the agent has already made influence future decisions. Therefore, explaining the chain of reasoning to experts is difficult, and even more so when communicating with non-experts, according to researchers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe human spectators understood the goal of Frogger in getting the frog safely home without being hit by moving vehicles or drowned in the river. 
The simple game mechanics of moving up, down, left, or right allowed the participants to see what the AI was doing, and to evaluate if the rationales on the screen clearly justified the move.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe spectators judged the rationales based on:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EConfidence\u003C\/strong\u003E \u0026ndash; the person is confident in the AI to perform its task\u0026nbsp;\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EHuman-likeness\u003C\/strong\u003E \u0026ndash; looks like it was made by a human\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EAdequate justification\u003C\/strong\u003E \u0026ndash; adequately justifies the action taken\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EUnderstandability\u003C\/strong\u003E \u0026ndash; helps the person understand the AI\u0026rsquo;s behavior\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EAI-generated rationales that were ranked higher by participants were those that showed recognition of environmental conditions and adaptability, as well as those that communicated awareness of upcoming dangers and planned for them. Redundant information that just stated the obvious or mischaracterized the environment was found to have a negative impact.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;This project is more about understanding human perceptions and preferences of these AI systems than it is about building new technologies,\u0026rdquo; said Ehsan. \u0026ldquo;At the heart of explainability is sensemaking. 
We are trying to understand that human factor.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA second related study validated the researchers\u0026rsquo; decision to design their AI agent to be able to offer one of two distinct types of rationales:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EConcise, \u0026ldquo;focused\u0026rdquo; rationales \u003C\/strong\u003Eor\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cstrong\u003EHolistic, \u0026ldquo;complete picture\u0026rdquo; rationales\u003C\/strong\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EIn this second study, participants were offered only AI-generated rationales after watching the AI play Frogger. They were asked to select the answer that they preferred in a scenario where an AI made a mistake or behaved unexpectedly. They did not know the rationales were grouped into the two categories.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBy a 3-to-1 margin, participants favored answers that were classified in the \u0026ldquo;complete picture\u0026rdquo; category. Responses showed that people appreciated the AI thinking about future steps rather than just what was in the moment, which might make the AI more prone to making another mistake. 
People also wanted to know more so that they might directly help the AI fix the errant behavior.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The situated understanding of the perceptions and preferences of people working with AI machines give us a powerful set of actionable insights that can help us design better human-centered, rationale-generating, autonomous agents,\u0026rdquo; said \u003Cstrong\u003EMark Riedl\u003C\/strong\u003E, professor of Interactive Computing and lead faculty member on the project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA possible future direction for the research will apply the findings to autonomous agents of various types, such as companion agents, and how they might respond based on the task at hand. Researchers will also look at how agents might respond in different scenarios, such as during an emergency response or when aiding teachers in the classroom.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe research was \u003Ca href=\u0022https:\/\/www.youtube.com\/watch?v=9L4CZ5n7rQY\u0022\u003Epresented in March\u003C\/a\u003E\u0026nbsp;at the Association for Computing Machinery\u0026rsquo;s Intelligent User Interfaces 2019 Conference. The paper is titled \u003Cem\u003EAutomated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions\u003C\/em\u003E. 
Ehsan will present a position paper highlighting the design and evaluation challenges of human-centered Explainable AI systems at the upcoming \u003Cem\u003EEmerging Perspectives in Human-Centered Machine Learning\u003C\/em\u003E workshop at the ACM CHI 2019 conference, May 4-9, in Glasgow, Scotland.\u003C\/p\u003E\r\n\r\n\u003Cdiv\u003E\u0026nbsp;\u003C\/div\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers, in collaboration with Cornell and University of Kentucky, have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions. The work is designed to give humans engaging with AI agents or robots confidence that the agent is performing the task correctly and can explain a mistake or errant behavior.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Institute of Technology researchers have developed an artificially intelligent (AI) agent that can automatically generate natural language explanations in real-time to convey the motivations behind its actions."}],"uid":"27592","created_gmt":"2019-04-09 19:42:53","changed_gmt":"2019-04-09 20:06:57","author":"Joshua Preston","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-09T00:00:00-04:00","iso_date":"2019-04-09T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620255":{"id":"620255","type":"image","title":"Explainable AI for Frogger","body":null,"created":"1554840392","gmt_created":"2019-04-09 20:06:32","changed":"1554840392","gmt_changed":"2019-04-09 20:06:32","alt":"AI study with Frogger","file":{"fid":"236161","name":"Explainable 
AI.png","image_path":"\/sites\/default\/files\/images\/Explainable%20AI.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Explainable%20AI.png","mime":"image\/png","size":48748,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Explainable%20AI.png?itok=wGcqqHq9"}}},"media_ids":["620255"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003EJoshua Preston\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGVU Center, College of Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003E678.231.0787\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"620110":{"#nid":"620110","#data":{"type":"news","title":"Six Members of GT Computing Awarded Prestigious Fellowships","body":[{"value":"\u003Cp\u003EEach year, Georgia Tech\u0026rsquo;s College of Computing is home to a number of students and faculty who are recognized by the computing community with fellowships from industry across the field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThis year is no different as six GT Computing individuals have been awarded fellowships with four different companies, including J.P. Morgan, IBM, Snap, and Facebook. Only those who accepted their awards are listed below.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJ.P. Morgan Chase \u0026amp; Co.\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.jpmorgan.com\/global\/technology\/ai\/awards\u0022\u003EJ.P. 
Morgan Chase \u0026amp; Co.\u003C\/a\u003E awarded \u003Cstrong\u003ECharles David Byrd\u003C\/strong\u003E (Research Scientist and Ph.D. student advised by Professor \u003Cstrong\u003ETucker Balch\u003C\/strong\u003E) and Assistant Professor \u003Cstrong\u003EXu Chu\u003C\/strong\u003E for their efforts in artificial intelligence research. These are the company\u0026rsquo;s first AI Research Awards, which are aimed at studying the use of AI and machine learning in areas including investment advice, risk management, digital assistants, and trading behavior. The company awarded only 47 fellowships.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EByrd\u0026rsquo;s work with Balch focuses on machine learning for financial applications, investigating mutual fund portfolio inference, intraday equity market forecasting, stock market simulation, and machine learning approaches to the evaluation of market efficiency. Byrd previously received the 2018 Graduate Student Instructor of the Year Award in the School of Interactive Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EChu\u0026rsquo;s research interests revolve around two themes: using data management technologies to make machine learning more usable and using machine learning to tackle hard data management problems like data integration. Chu also earned the Microsoft Research Ph.D. Fellowship in 2015.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EIBM\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPh.D. student \u003Cstrong\u003EStacey Truex\u003C\/strong\u003E of the School of Computer Science was named a \u003Ca href=\u0022https:\/\/www.research.ibm.com\/university\/awards\/2019_phd_fellowship_awards.shtml\u0022\u003E2019 IBM Ph.D. Fellow\u003C\/a\u003E. The Fellowship, which has been around since the 1950s, recognizes and supports outstanding graduate students who are focused on solving problems that are fundamental to innovation. 
This includes pioneering work in areas like cognitive computing and augmented intelligence, quantum computing, blockchain, data-centric systems, advanced analytics, security, radical cloud innovation, and more. This highly-competitive award was given to only 16 Ph.D. students worldwide.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETruex (advised by Professor \u003Cstrong\u003ELing Liu\u003C\/strong\u003E) focuses on research from two complementary perspectives: 1) privacy, security, and trust in machine learning models and algorithmic decision making, and 2) secure, privacy-preserving artificial intelligence systems, services, and applications.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ESnap, Inc.\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/snapresearchfs.splashthat.com\/\u0022\u003ESnap, Inc., recognized\u003C\/a\u003E Ph.D. student \u003Cstrong\u003EHarsh Agrawal\u003C\/strong\u003E of the School of Interactive Computing with the 2019 Snap Research Fellowship and Scholarship. This fellowship recognizes students carrying out research in areas of computer science relevant to the company, including computer graphics, computer vision, machine learning, data mining, computational imaging, human-computer interaction, and other related fields. Each awardee will receive a $10,000 award and an offer for a full-time paid internship with the company.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EAgrawal (advised by Assistant Professor \u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E) does research at the intersection of computer vision and natural language processing. 
Prior to joining Georgia Tech, he spent time as a research engineer at Snap Research, where he was responsible for building large-scale infrastructure for visual recognition and search, and for developing algorithms for low-shot instance detection.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EFacebook\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022https:\/\/research.fb.com\/announcing-the-2019-facebook-fellows-and-emerging-scholars\/\u0022\u003EFacebook Research announced the selection of 21 Fellows and seven Emerging Scholars\u003C\/a\u003E this year out of more than 900 submitted applications from Ph.D. students all over the world. Among the awardees were \u003Cstrong\u003EAbhishek Das\u003C\/strong\u003E with the Facebook Fellowship and \u003Cstrong\u003EVanessa Oguamanam\u003C\/strong\u003E with the Emerging Scholar Award. The Facebook Fellowship program, now in its eighth year, is designed to encourage and support doctoral students engaged in innovative research in computer science and engineering.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDas (advised by Dhruv Batra) does research in deep learning and its applications in building agents that can see, think, talk, and act. His research has been supported by fellowships from Facebook, Adobe, and Snap, Inc., over the years. Oguamanam, who is in the School of Interactive Computing, pursues research in educational technology, human-computer interaction for development, diversity in STEM, and entrepreneurship. She is co-advised by Associate Professor \u003Cstrong\u003EBetsy DiSalvo\u003C\/strong\u003E and Assistant Professor \u003Cstrong\u003ENeha Kumar\u003C\/strong\u003E.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"J.P. 
Morgan, IBM, Snap, and Facebook awarded six College of Computing faculty and students."}],"uid":"33939","created_gmt":"2019-04-04 22:23:48","changed_gmt":"2019-04-04 22:23:48","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-04T00:00:00-04:00","iso_date":"2019-04-04T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620109":{"id":"620109","type":"image","title":"2019 College of Computing Fellowships","body":null,"created":"1554416151","gmt_created":"2019-04-04 22:15:51","changed":"1554416151","gmt_changed":"2019-04-04 22:15:51","alt":"Harsh Agrawal, Xu Chu, Abhishek Das, Vanessa Oguamanam, Charles David Byrd, and Stacey Truex","file":{"fid":"236101","name":"CoC Fellowships.png","image_path":"\/sites\/default\/files\/images\/CoC%20Fellowships.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/CoC%20Fellowships.png","mime":"image\/png","size":852597,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/CoC%20Fellowships.png?itok=TjGe8z44"}}},"media_ids":["620109"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1299","name":"GVU Center"},{"id":"576481","name":"ML@GT"},{"id":"431631","name":"OMS"},{"id":"50877","name":"School of Computational Science and Engineering"},{"id":"50875","name":"School of Computer Science"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[],"core_research_areas":[{"id":"145171","name":"Cybersecurity"},{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca 
href=\u0022mailto:david.mitchell@cc.gatech.edu\u0022\u003Edavid.mitchell@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}