{"602011":{"#nid":"602011","#data":{"type":"news","title":"Georgia Tech Artificial Intelligence Research Includes Collaborative Approaches with Humans, Automating Content, and More","body":[{"value":"\u003Cp\u003EGeorgia Tech\u0026rsquo;s latest artificial intelligence research, presented Feb. 2-7\u0026nbsp;at the\u0026nbsp;\u003Ca href=\u0022https:\/\/aaai.org\/Conferences\/AAAI-18\/\u0022\u003EAAAI Conference on Artificial Intelligence\u003C\/a\u003E\u0026nbsp;in New Orleans, demonstrates some of the many approaches to developing capabilities for the next generation of autonomous machines.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFour faculty from the Schools of Interactive Computing and Computational Science and Engineering had research accepted into the program. They include Interactive Computing\u0026rsquo;s\u0026nbsp;\u003Cstrong\u003EDhruv Batra\u003C\/strong\u003E,\u0026nbsp;\u003Cstrong\u003EAshok Goel\u003C\/strong\u003E\u0026nbsp;and\u0026nbsp;\u003Cstrong\u003EMark Riedl\u003C\/strong\u003E, and CSE\u0026rsquo;s\u0026nbsp;\u003Cstrong\u003ELe Song\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EInvited talks at the conference include:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EAshok Goel - \u0026ldquo;Jill Watson, Family, and Friends: Experiments in Building Automated Teaching Assistants\u0026rdquo; (Also a panelist on \u0026ldquo;Next Big Steps in AI for Education\u0026rdquo;)\u003C\/li\u003E\r\n\t\u003Cli\u003EDhruv Batra - Emerging Topics Program in \u0026ldquo;Human-AI Collaboration\u0026rdquo;\u003C\/li\u003E\r\n\t\u003Cli\u003ECharles Isbell - \u0026ldquo;How Machines Learn Best from Humans\u0026rdquo;\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EBuilding for Creativity\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EAmong the accepted Georgia Tech research is work on deep 
neural networks to teach AI agents how to write and construct narratives with a human collaborator, allowing for stories to be generated in new ways.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EResearchers have come up with a method to simplify sentences into \u0026ldquo;events,\u0026rdquo; akin to an elementary school grammar lesson. Understanding the subject, verb and other constituent parts of a sentence makes it easier for the computer to generate a reasonable next event in a story. The AI\u0026rsquo;s generated event is then translated back into a human-readable sentence.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We can use these methods in an AI that goes back and forth with someone, co-creating a brand new story in real-time,\u0026rdquo; says Lara Martin, Ph.D. candidate in Human-Centered Computing and lead researcher. \u0026ldquo;More importantly, this system will be able to continue a story about any topic, which is crucial for improvisation.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EMark Riedl, director of the Entertainment Intelligence Lab and co-author on the paper, has developed many systems to advance AI creativity as a domain that can spur growth in the field.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;As human-AI interaction becomes more common, it becomes more important for AIs to be able to engage in open-world improvisational storytelling,\u0026rdquo; he says. \u0026ldquo;This is because it enables AIs to communicate with humans in a natural way without sacrificing the human\u0026rsquo;s perception of agency.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003ECreating Context for Visual Media\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003EAnother Georgia Tech innovation is a method for creating captions for images from any digital file, on- or offline. 
The research team studied current machine learning models for automatic image captioning and found that they fell short of producing robust output. The team looked to improve on what they considered boring, generic descriptions. Their approach, Diverse Beam Search, is an algorithm that tries to capture the richness of language by generating a diverse set of descriptions that humans generally prefer.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We categorized images based on their complexity and observed that on \u0026lsquo;complex\u0026rsquo; scenes, say, a view of a kitchen with multiple objects,\u0026nbsp;our method indeed resulted in significant improvements in captions,\u0026rdquo; says Ashwin Vijayakumar, Ph.D. student in Computer Science and lead author.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESimpler images were tougher for the AI system - the internet\u0026rsquo;s many cat closeups could only be described in so many ways, according to Vijayakumar.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPictures can be uploaded to the system and tested in real time here:\u0026nbsp;\u003Ca href=\u0022http:\/\/dbs.cloudcv.org\/\u0022\u003Ehttp:\/\/dbs.cloudcv.org\/\u003C\/a\u003E.\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EAAAI 2018 Conference\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EPAPERS\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDiverse Beam Search for Improved Description of Complex Scenes\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EAshwin Vijayakumar, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, Dhruv Batra\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EThe Structural Affinity Method for Solving the Raven\u0026#39;s Progressive Matrices Test for 
Intelligence\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003ESnejana Shegheva, Ashok Goel\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EEvent Representations for Automated Story Generation with Deep Neural Nets\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003ELara Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, Mark Riedl\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDeep Semi-Random Features for Nonlinear Function Approximation\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EKenji Kawaguchi, Bo Xie, Le Song\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELearning Conditional Generative Models for Temporal Point Processes\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EShuai Xiao, Hongteng Xu, Junchi Yan, Mehrdad Farajtabar, Xiaokang Yang, Le Song, Hongyuan Zha\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EVariational Reasoning for Question Answering with Knowledge Graph\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EYuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander Smola, Le Song\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EWORKSHOPS\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EKnowledge Extraction from Games\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EMatthew Guzdial (committee)\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003ECOMMITTEES\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EComputational Sustainability Co-chair - Bistra Dilkina\u003C\/p\u003E\r\n\r\n\u003Ch3\u003E\u003Cstrong\u003EAAAI\/ACM Conference on Artificial Intelligence, Ethics, and Society\u003C\/strong\u003E\u003C\/h3\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EPAPERS\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EJill Watson Doesn\u0026rsquo;t Care if 
You\u0026rsquo;re Pregnant: Grounding AI Ethics in Empirical Studies\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003EBobbie Eicher, Lalith Polepeddi and Ashok Goel\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cem\u003ECOMMITTEES\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EStudent Track, AI and Law Program Chair -\u0026nbsp;Deven Desai\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Four faculty from the Schools of Interactive Computing and Computational Science and Engineering had research accepted into the program. They include Interactive Computing\u2019s\u00a0Dhruv Batra,\u00a0Ashok Goel\u00a0and\u00a0Mark Riedl, and CSE\u2019s\u00a0Le Song."}],"uid":"33939","created_gmt":"2018-02-06 22:50:54","changed_gmt":"2018-02-06 22:50:54","author":"David Mitchell","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2018-02-06T00:00:00-05:00","iso_date":"2018-02-06T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"602010":{"id":"602010","type":"image","title":"AAAI 2018 logo","body":null,"created":"1517957142","gmt_created":"2018-02-06 22:45:42","changed":"1517957142","gmt_changed":"2018-02-06 22:45:42","alt":"AAAI 2018 logo","file":{"fid":"229445","name":"Screen Shot 2018-02-06 at 5.44.50 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202018-02-06%20at%205.44.50%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202018-02-06%20at%205.44.50%20PM.png","mime":"image\/png","size":183057,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202018-02-06%20at%205.44.50%20PM.png?itok=Jgbk5XpV"}}},"media_ids":["602010"],"related_links":[{"url":"https:\/\/aaai.org\/Conferences\/AAAI-18\/","title":"AAAI 
2018"},{"url":"https:\/\/www.ic.gatech.edu\/content\/artificial-intelligence-machine-learning","title":"Artificial Intelligence and Machine Learning"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[],"keywords":[{"id":"98401","name":"AAAI"},{"id":"177034","name":"AAAI 2018"},{"id":"2556","name":"artificial intelligence"},{"id":"177035","name":"Thirty-second AAAI Conference on Artificial Intelligence"},{"id":"173615","name":"dhruv batra"},{"id":"127171","name":"Le Song"},{"id":"112431","name":"ashok goel"},{"id":"66281","name":"Mark Riedl"},{"id":"166848","name":"School of Interactive Computing"},{"id":"654","name":"College of Computing"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EDavid Mitchell\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Edavid.mitchell@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}