{"674510":{"#nid":"674510","#data":{"type":"news","title":"New Tool Teaches Responsible AI Practices When Using Large Language Models","body":[{"value":"\u003Cp\u003EThanks to a Georgia Tech researcher\u0027s new tool, application developers can now see potentially harmful attributes in their prototypes.\u003C\/p\u003E\u003Cp\u003EFarsight is a tool designed for developers who use large language models (LLMs) to create applications powered by artificial intelligence (AI). Farsight alerts prototypers when they write LLM prompts that could be harmful or misused.\u003C\/p\u003E\u003Cp\u003EDownstream users can expect to benefit from higher-quality, safer products made with Farsight\u2019s assistance. The tool\u2019s lasting impact, though, is that it fosters responsible AI awareness by coaching developers on the proper use of LLMs.\u003C\/p\u003E\u003Cp\u003EMachine Learning Ph.D. candidate\u0026nbsp;\u003Ca href=\u0022https:\/\/zijie.wang\/\u0022\u003EZijie (Jay) Wang\u003C\/a\u003E\u0026nbsp;is\u0026nbsp;\u003Ca href=\u0022https:\/\/zijie.wang\/papers\/farsight\/\u0022\u003EFarsight\u003C\/a\u003E\u2019s lead architect. He will present the paper at the upcoming\u0026nbsp;\u003Ca href=\u0022https:\/\/sites.gatech.edu\/research\/chi-2024\/\u0022\u003EConference on Human Factors in Computing Systems\u003C\/a\u003E\u0026nbsp;(CHI 2024). Farsight ranked in the top 5% of papers accepted to CHI 2024, earning it an honorable mention for the conference\u2019s best paper award.\u003C\/p\u003E\u003Cp\u003E\u201cLLMs have empowered millions of people with diverse backgrounds, including writers, doctors, and educators, to build and prototype powerful AI apps through prompting.
However, many of these AI prototypers don\u2019t have training in computer science, let alone responsible AI practices,\u201d said Wang.\u003C\/p\u003E\u003Cp\u003E\u201cWith a growing number of AI incidents related to LLMs, it is critical to make developers aware of the potential harms associated with their AI applications.\u201d\u003C\/p\u003E\u003Cp\u003EWang cited an example in which\u0026nbsp;\u003Ca href=\u0022https:\/\/www.reuters.com\/legal\/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22\/\u0022\u003Etwo lawyers used ChatGPT to write a legal brief\u003C\/a\u003E. A U.S. judge sanctioned the lawyers because their submitted brief contained six fictitious case citations that the LLM fabricated.\u003C\/p\u003E\u003Cp\u003EWith Farsight, the group aims to improve developers\u2019 awareness of responsible AI use. It achieves this by highlighting potential use cases, affected stakeholders, and possible harms associated with an application in the early prototyping stage.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EA user study involving 42 prototypers showed that developers could better identify potential harms associated with their prompts after using Farsight. The users also found the tool more helpful and usable than existing resources.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EFeedback from the study showed that Farsight encouraged developers to focus on end users and think beyond immediate harmful outcomes.\u003C\/p\u003E\u003Cp\u003E\u201cWhile resources like workshops and online videos exist to help AI prototypers, they are often seen as tedious, and most people lack the incentive and time to use them,\u201d said Wang.\u003C\/p\u003E\u003Cp\u003E\u201cOur approach was to consolidate and display responsible AI resources in the same space where AI prototypers write prompts.
In addition, we leverage AI to highlight relevant real-life incidents and guide users to potential harms based on their prompts.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/pair-code.github.io\/farsight\/\u0022\u003EFarsight employs an in-situ user interface\u003C\/a\u003E\u0026nbsp;to show developers the potential negative consequences of their applications during prototyping.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAlert symbols for \u201cneutral,\u201d \u201ccaution,\u201d and \u201cwarning\u201d notify users when prompts require more attention. When a user clicks the alert symbol, an awareness sidebar expands from one side of the screen.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe sidebar shows an incident panel with actual news headlines from incidents relevant to the harmful prompt. The sidebar also has a use-case panel that helps developers imagine how\u0026nbsp;different groups of people can use their applications in varying contexts.\u003C\/p\u003E\u003Cp\u003EAnother key feature is the harm envisioner. This functionality takes a user\u2019s prompt as input and assists them in envisioning potential harmful outcomes. The prompt branches into an interactive node tree that lists use cases, stakeholders, and harms, like \u201csocietal harm,\u201d \u201callocative harm,\u201d \u201cinterpersonal harm,\u201d and more.\u003C\/p\u003E\u003Cp\u003EThe novel design and insightful findings from the user study resulted in Farsight\u2019s acceptance for presentation at CHI 2024.\u003C\/p\u003E\u003Cp\u003ECHI is considered the most prestigious conference for human-computer interaction and one of the top-ranked conferences in computer science.\u003C\/p\u003E\u003Cp\u003ECHI is affiliated with the Association for Computing Machinery. 
The conference takes place May 11-16 in Honolulu.\u003C\/p\u003E\u003Cp\u003EWang worked on Farsight in Summer 2023 while interning with Google\u2019s People + AI Research (PAIR) group.\u003C\/p\u003E\u003Cp\u003EFarsight\u2019s co-authors from Google PAIR include\u0026nbsp;\u003Ca href=\u0022https:\/\/www.linkedin.com\/in\/chinmayk\/\u0022\u003EChinmay Kulkarni\u003C\/a\u003E,\u0026nbsp;\u003Ca href=\u0022https:\/\/laurenwilcox.net\/\u0022\u003ELauren Wilcox\u003C\/a\u003E,\u0026nbsp;\u003Ca href=\u0022https:\/\/research.google\/people\/michael-terry\/\u0022\u003EMichael Terry\u003C\/a\u003E, and\u0026nbsp;\u003Ca href=\u0022http:\/\/michaelmadaio.com\/\u0022\u003EMichael Madaio\u003C\/a\u003E. The group has closer ties to Georgia Tech than just through Wang.\u003C\/p\u003E\u003Cp\u003ETerry,\u0026nbsp;\u003Ca href=\u0022https:\/\/medium.com\/people-ai-research\/meet-the-new-co-leads-of-pair-lucas-dixon-and-michael-terry-17a67754fc10\u0022\u003Ethe current co-leader of Google PAIR\u003C\/a\u003E, earned his Ph.D. in human-computer interaction from Georgia Tech in 2005. Madaio graduated from Tech in 2015 with an M.S. in digital media. Wilcox was a full-time faculty member in the School of Interactive Computing from 2013 to 2021 and serves in an adjunct capacity today.\u003C\/p\u003E\u003Cp\u003EThough not an author, one of Wang\u2019s influences is his advisor,\u0026nbsp;\u003Ca href=\u0022https:\/\/poloclub.github.io\/\u0022\u003EPolo Chau\u003C\/a\u003E. Chau is an associate professor in the School of Computational Science and Engineering.
His group specializes in data science, human-centered AI, and visualization research for social good.\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cI think what makes Farsight interesting is its unique in-workflow and human-AI collaborative approach,\u201d said Wang.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cFurthermore, Farsight leverages LLMs to expand prototypers\u2019 creativity and brainstorm a wide range of use cases, stakeholders, and potential harms.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThanks to a Georgia Tech researcher\u0027s new tool, application developers can now see potential harmful attributes in their prototypes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFarsight is a tool designed for developers who use large language models (LLMs) to create applications powered by artificial intelligence (AI). Farsight alerts prototypers when they write LLM prompts that could be harmful and misused.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDownstream users can expect to benefit from better quality and safer products made with Farsight\u2019s assistance. 
The tool\u2019s lasting impact, though, is that it fosters responsible AI awareness by coaching developers on the proper use of LLMs.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Thanks to a Georgia Tech researcher\u0027s new tool, application developers can now see potential harmful attributes in their prototypes."}],"uid":"36319","created_gmt":"2024-05-06 00:10:44","changed_gmt":"2024-12-09 17:36:57","author":"Bryant Wine","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-05-06T00:00:00-04:00","iso_date":"2024-05-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"673947":{"id":"673947","type":"image","title":"Farsight CHI.jpg","body":null,"created":"1714954253","gmt_created":"2024-05-06 00:10:53","changed":"1714954253","gmt_changed":"2024-05-06 00:10:53","alt":"CHI 2024 Farsight","file":{"fid":"257404","name":"Farsight CHI.jpg","image_path":"\/sites\/default\/files\/2024\/05\/05\/Farsight%20CHI.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/05\/05\/Farsight%20CHI.jpg","mime":"image\/jpeg","size":139358,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/05\/05\/Farsight%20CHI.jpg?itok=6genJVjw"}}},"media_ids":["673947"],"related_links":[{"url":"https:\/\/www.cc.gatech.edu\/news\/new-tool-teaches-responsible-ai-practices-when-using-large-language-models","title":"New Tool Teaches Responsible AI Practices When Using Large Language Models"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50877","name":"School of Computational Science and Engineering"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"192863","name":"go-ai"},{"id":"10199","name":"Daily Digest"},{"id":"7846","name":"Georgia Tech Office of the 
Provost"},{"id":"654","name":"College of Computing"},{"id":"166983","name":"School of Computational Science and Engineering"},{"id":"2556","name":"artificial intelligence"},{"id":"9167","name":"machine learning"},{"id":"9153","name":"Research Horizons"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBryant Wine, Communications Officer\u003Cbr\u003Ebryant.wine@cc.gatech.edu\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}