{"671194":{"#nid":"671194","#data":{"type":"news","title":"Research Can Help to Tackle AI-generated Disinformation","body":[{"value":"\u003Cp\u003E\u003Cem\u003EIn an article published this week\u0026nbsp;in Nature Human Behaviour, computational science and engineering Assistant Professor\u0026nbsp;\u003Cstrong\u003ESrijan Kumar\u003C\/strong\u003E\u0026nbsp;and his colleagues describe why new behavioral science interventions are needed to tackle AI-generated disinformation.\u003C\/em\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGenerative artificial intelligence (AI) tools have made it easy to create realistic disinformation that is hard to detect by humans and may undermine public trust. Some approaches used for assessing the reliability of online information may no longer work in the AI age. We offer suggestions for how research can help to tackle the threats of AI-generated disinformation.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn March 2023, images of former president Donald Trump ostensibly getting arrested circulated on social media. Former president Trump, however, did not get arrested in March. The images were fabricated using generative AI technology. Although the phenomenon of fabricated or altered content is not new, recent advances in generative AI technology have made it easy to produce fabricated content that is increasingly realistic, which makes it harder for people to distinguish what is real.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGenerative AI tools can be used to create original content, such as text, images, audio and video. Although most applications of these tools are benign, there is substantial concern about the potential for increased proliferation of disinformation (which we refer to broadly as content spread with the intent to deceive, including propaganda and fake news). 
Because the content generated appears highly realistic, some of the strategies presently used for detecting manipulative accounts and content are rendered ineffective by AI-generated disinformation.\u003C\/p\u003E\r\n\r\n\u003Ch4\u003EHow AI disinformation differs\u003C\/h4\u003E\r\n\r\n\u003Cp\u003EWhat makes AI-generated disinformation different from traditional, human-generated disinformation? Here, we highlight four potentially differentiating factors: scale, speed, ease of use and personalization. First, generative AI tools make it possible to mass-produce content for disinformation campaigns.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EOne example of the scale of AI-generated disinformation is the use of generative AI tools to produce dozens of different fake images showing Pope Francis in haute couture across different postures and backgrounds. In particular, AI tools can be used to create multiple variations of the same false stories, translate them into different languages, mimic conversational dialogues and more.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESecond, compared to the manual generation of content, AI technology allows disinformation to be produced very rapidly. For example, fake images can be created with tools such as Midjourney in seconds, whereas without generative AI the creation of similar images would take hours or days. 
These first two factors \u2014 scale and speed \u2014 are challenges for fact-checkers, who will be flooded with disinformation but still need substantial amounts of time for debunking.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EContinue reading\u0026nbsp;\u003Ca href=\u0022https:\/\/www.nature.com\/articles\/s41562-023-01726-2\u0022\u003E\u003Cem\u003EResearch Can Help to Tackle AI-generated Disinformation\u003C\/em\u003E\u003C\/a\u003E.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ENature Human Behaviour has published an article from Georgia Tech School of Computational Science and Engineering Assistant Professor Srijan Kumar and his colleagues that serves as\u0026nbsp;a roadmap to detect and mitigate disinformation created by increasingly sophisticated generative AI systems.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech\u0027s Srijan Kumar and his colleagues have developed a roadmap to detect and mitigate disinformation created by increasingly sophisticated generative AI systems."}],"uid":"32045","created_gmt":"2023-11-21 16:48:31","changed_gmt":"2023-11-21 17:00:31","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2023-11-20T00:00:00-05:00","iso_date":"2023-11-20T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"672426":{"id":"672426","type":"image","title":"Srijan Kumar is an assistant professor in Georgia Tech\u0027s School of Computational Science and Engineering","body":null,"created":"1700585377","gmt_created":"2023-11-21 16:49:37","changed":"1700585377","gmt_changed":"2023-11-21 16:49:37","alt":"Srijan Kumar is an assistant professor in Georgia Tech\u0027s School of Computational Science and Engineering","file":{"fid":"255659","name":"srijan kumar850x478.jpg","image_path":"\/sites\/default\/files\/2023\/11\/21\/srijan%20kumar850x478.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2023\/11\/21\/srijan%20kumar850x478.jpg","mime":"image\/jpeg","size":37090,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2023\/11\/21\/srijan%20kumar850x478.jpg?itok=zyiPsaWd"}}},"media_ids":["672426"],"groups":[{"id":"37041","name":"Computational Science and Engineering"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"135","name":"Research"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"10199","name":"Daily Digest"}],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAsst. Professor Srijan Kumar\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Computational Science \u0026amp; Engineering\u003C\/p\u003E\r\n\r\n\u003Cp\u003Esrijan@gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}