{"674257":{"#nid":"674257","#data":{"type":"news","title":"New Strategic Design Approach Focuses on Turning AI Mistakes into User Benefits","body":[{"value":"\u003Cp\u003EMore and more often, automated lending systems powered by artificial intelligence (AI) reject qualified loan applicants without explanation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEven worse, they leave rejected applicants with no recourse.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPeople can have similar experiences when applying for jobs or petitioning their health insurance providers. While AI tools determine the fate of people in difficult situations daily, Upol Ehsan says more thought should be given to challenging these decisions or working around them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan, a Georgia Tech explainable AI (XAI) researcher, says many rejection cases are not the applicant\u2019s fault. Rather, it\u2019s more likely a \u201cseam\u201d in the design process \u2014 a mismatch between what designers thought the AI could do and what happens in reality.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan said \u201cseamless design\u201d is the standard practice of AI designers. While the goal is to create a process by which users get what they need without interruption or barriers, seamless design has a way of doing just the opposite.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ENo amount of thought or design input will keep AI tools from making mistakes. When mistakes happen, those impacted by them want to know why they happened.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBecause seamless design often includes black-boxing \u2014 the act of concealing the AI\u2019s reasoning \u2014 answers are never provided.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EBut what if there were a way to challenge an AI\u2019s decisions and turn its mistakes into benefits for end users? 
Ehsan believes that can be done through \u201cseamful design.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn his latest paper,\u0026nbsp;\u003Cem\u003ESeamful Explainable AI: Operationalizing Seamful Design in XAI,\u0026nbsp;\u003C\/em\u003EEhsan proposes a strategic way of anticipating AI harms, understanding the reasoning behind them, and leveraging mistakes instead of concealing them.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch6\u003EGIVING USERS MORE OPTIONS\u003C\/h6\u003E\r\n\r\n\u003Cp\u003EIn his research, Ehsan worked with loan officers who used automated lending support systems. The seams, or flaws, he discovered in these tools\u2019 processes impacted applicants and lenders.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThe expectation is that the lending system works for everyone,\u201d Ehsan said. \u201cThe reality is that it doesn\u2019t. You\u2019ve found the seam once you\u2019ve figured out the difference between expectation and reality. Then we ask, \u2018How can we show this to end users so they can leverage it?\u2019\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETo give users options when AI negatively impacts them, Ehsan suggests three things for designers to consider:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003EActionability: Does the information about the flaw help the user take informed actions on the AI\u2019s recommendation?\u003C\/li\u003E\r\n\t\u003Cli\u003EContestability: Does the information provide the resources necessary to justify saying no to the AI?\u003C\/li\u003E\r\n\t\u003Cli\u003EAppropriation: Does identifying these seams help the user adapt and appropriate the AI\u2019s output in a way that differs from the provided design but helps them make the right decision?\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003EEhsan uses the example of someone who was rejected for a loan despite having a good credit history. 
The rejection may have been due to a seam in the AI that screens applications, such as an algorithm with a discriminatory flaw.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EA post-deployment process is needed in cases like this to mitigate damage and empower affected end users. Loan applicants, for instance, should be allowed to contest the AI\u2019s decision based on known issues with an algorithm.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch6\u003EAGAINST THE GRAIN\u003C\/h6\u003E\r\n\r\n\u003Cp\u003EEhsan said his idea for seamful design is outside the mainstream vernacular. However, his challenge to currently accepted principles is gaining traction.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EHe is now working with cybersecurity, healthcare, and sales companies that are adopting his process.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese companies may pioneer a new way of thinking in AI design. Ehsan believes this approach can shift designers to a proactive mindset instead of leaving them stuck in a reactive state of damage control.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cYou want to stay a little ahead of the curve so you\u2019re not always caught off guard when things happen,\u201d Ehsan said. \u201cThe more proactive you can be and the more passes you can take at your design process, the safer and more responsible your systems will be.\u201d\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan collaborated with researchers from Georgia Tech, the University of Maryland, and Microsoft. They will present their paper later this year at the 2024 Association for Computing Machinery\u2019s Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) in Costa Rica.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cSeamful design embraces the imperfect reality of our world and makes the most out of it,\u201d he said. \u201cIf it becomes mainstream, it can help us address the hype cycle AI suffers from now. 
We don\u2019t need to overhype AI\u2019s capacity or impose unachievable goals. That\u2019d be a gamechanger in calibrating people\u2019s trust in the system.\u201d\u0026nbsp;\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EMore and more often, automated lending systems powered by artificial intelligence (AI) reject qualified loan applicants without explanation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEven worse, they leave rejected applicants with no recourse.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EPeople can have similar experiences when applying for jobs or petitioning their health insurance providers. While AI tools determine the fate of people in difficult situations daily, Upol Ehsan says more thought should be given to challenging these decisions or working around them.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EEhsan, a Georgia Tech explainable AI (XAI) researcher, says many rejection cases are not the applicant\u2019s fault. 
Rather, it\u2019s more likely a \u201cseam\u201d in the design process \u2014 a mismatch between what designers thought the AI could do and what happens in reality.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Method Provides Users Options When AI Rejects or Discriminates Against Them."}],"uid":"36530","created_gmt":"2024-04-18 13:27:06","changed_gmt":"2024-05-13 14:15:00","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-05-07T00:00:00-04:00","iso_date":"2024-05-07T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"673748":{"id":"673748","type":"image","title":"AdobeStock_453025210 (1).jpeg","body":null,"created":"1713446832","gmt_created":"2024-04-18 13:27:12","changed":"1713446832","gmt_changed":"2024-04-18 13:27:12","alt":"Two people discuss a loan application","file":{"fid":"257181","name":"AdobeStock_453025210 (1).jpeg","image_path":"\/sites\/default\/files\/2024\/04\/18\/AdobeStock_453025210%20%281%29.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/04\/18\/AdobeStock_453025210%20%281%29.jpeg","mime":"image\/jpeg","size":161965,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/04\/18\/AdobeStock_453025210%20%281%29.jpeg?itok=v8RVvlkP"}}},"media_ids":["673748"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"10199","name":"Daily Digest"},{"id":"181991","name":"Georgia Tech News Center"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[{"id":"71881","name":"Science and 
Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer I\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}