{"687527":{"#nid":"687527","#data":{"type":"news","title":"All-Powerful AI Isn\u2019t an Existential Threat, According to New Georgia Tech Research","body":[{"value":"\u003Cp\u003EEver since ChatGPT\u2019s debut in 2023, concerns about artificial intelligence (AI) potentially wiping out humanity have dominated\u0026nbsp;\u003Ca href=\u0022https:\/\/safe.ai\/work\/press-release-ai-risk\u0022\u003Eheadlines\u003C\/a\u003E. New research from Georgia Tech suggests that those anxieties are misplaced.\u003C\/p\u003E\u003Cp\u003E\u201cComputer scientists often aren\u2019t good judges of the social and political implications of technology,\u201d said\u0026nbsp;\u003Ca href=\u0022https:\/\/research.gatech.edu\/people\/milton-mueller\u0022\u003EMilton Mueller\u003C\/a\u003E, a professor in the\u0026nbsp;\u003Ca href=\u0022https:\/\/spp.gatech.edu\/\u0022\u003EJimmy and Rosalynn Carter School of Public Policy\u003C\/a\u003E. \u201cThey are so focused on the AI\u2019s mechanisms and are overwhelmed by its success, but they are not very good at placing it into a social and historical context.\u201d\u003C\/p\u003E\u003Cp\u003EIn the four decades Mueller has studied information technology policy, he has never seen any technology hailed as a harbinger of doom \u2014\u0026nbsp;until now. So, in a \u003Cem\u003EJournal of Cyber Policy\u003C\/em\u003E\u0026nbsp;\u003Ca href=\u0022https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/23738871.2025.2597194#abstract\u0022\u003Epaper\u003C\/a\u003E published late last year, he researched whether the existential AI threat was a real possibility.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EWhat Mueller found is that deciding how far AI can go, and its limitations, is something society shapes. 
How policymakers get involved depends on the specific AI application.\u0026nbsp;\u003C\/p\u003E\u003Ch2\u003E\u003Cstrong\u003EDefining Intelligence\u003C\/strong\u003E\u003C\/h2\u003E\u003Cp\u003EThe AI sparking all this alarm is called artificial general intelligence (AGI) \u2014 a \u201csuperintelligence\u201d that would be all-powerful and fully autonomous.\u0026nbsp;Part of the debate, Mueller realized, is that no one could agree on the definition of what artificial general intelligence is.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESome computer scientists claim AGI would match human intelligence, while others argue it could surpass it. Both assumptions hinge on what \u201chuman intelligence\u201d really means. Today\u2019s AI is already better than humans at performing thousands of calculations in an instant, but that doesn\u2019t make it creative or capable of complex problem-solving.\u0026nbsp;\u003C\/p\u003E\u003Ch2\u003E\u003Cstrong\u003EUnderstanding Independence\u0026nbsp;\u003C\/strong\u003E\u003C\/h2\u003E\u003Cp\u003EDeciding on the definition isn\u2019t the only issue.\u0026nbsp;Many computer scientists assume that as computing power grows, AI could eventually overtake humans and act autonomously.\u003C\/p\u003E\u003Cp\u003EMueller argued that this assumption is misguided.\u0026nbsp;AI is always directed or trained toward a goal and doesn\u2019t act autonomously right now. Think of the prompt you type into ChatGPT to start a conversation.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EWhen AI seems to disregard instructions, it\u2019s caused by inconsistencies in its instructions, not by the machine coming alive. For example, in a boat race video game Mueller studied, the AI discovered it could get more points by circling the course instead of winning the race against other challengers. 
This was a glitch in the system\u2019s reward structure, not AGI autonomy.\u003C\/p\u003E\u003Cp\u003E\u201cAlignment gaps happen in all kinds of contexts, not just AI,\u201d Mueller said. \u201cI\u0027ve studied so many regulatory systems where we try to regulate an industry, and some clever people discover ways that they can fulfill the rules but also do bad things. But if the machine is doing something wrong, computer scientists can reprogram it to fix the problem.\u201d\u003C\/p\u003E\u003Ch2\u003E\u003Cstrong\u003ERelying on Regulation\u003C\/strong\u003E\u003C\/h2\u003E\u003Cp\u003EIn its current form, even misaligned AI can be corrected. Misalignment also doesn\u2019t mean the AI would snowball past the point where humans lose control of its outcomes. To do that, AI would need physical capabilities, like robots, to do its bidding, and the power source and infrastructure to maintain itself. A mere data center couldn\u2019t do that without human intervention. Basic laws of physics \u2014 how big a machine can be, how much it can compute \u2014 would also prevent a super AI.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EMore importantly, AI is not one homogenous being. Mueller argued that different applications involve different laws, regulations, and social institutions. For example, the data scraping that AI does raises copyright questions governed by existing copyright law. AI used in medicine can be overseen by the Food and Drug Administration, regulated drug companies, and medical professionals. 
These are just a few areas where policymakers could intervene with domain-specific expertise instead of trying to create universal AI regulations.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe real challenge isn\u2019t stopping an AI apocalypse \u2014 it\u2019s crafting smart, sector-specific policies that keep technology aligned with human values.\u0026nbsp;To avoid being a victim of AI, humans can, and should, put up focused guardrails.\u0026nbsp;\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003E\u003Cstrong\u003EThe study suggests that the fear of AI destroying society distracts from real policy interventions to better control computing applications.\u003C\/strong\u003E\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"The study suggests that the fear of AI destroying society distracts from real policy interventions to better control computing applications."}],"uid":"34541","created_gmt":"2026-01-20 22:19:23","changed_gmt":"2026-03-20 12:57:11","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-01-20T00:00:00-05:00","iso_date":"2026-01-20T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679043":{"id":"679043","type":"image","title":"GIGconference_MMatPodium2.jpg","body":"\u003Cp\u003EMilton Mueller speaking at the AI Governance and Global Economic Development, an official pre-summit event of the AI Impact Summit 2026.\u003C\/p\u003E","created":"1768947605","gmt_created":"2026-01-20 22:20:05","changed":"1768947605","gmt_changed":"2026-01-20 22:20:05","alt":"Milton at 
podium","file":{"fid":"263155","name":"GIGconference_MMatPodium2.jpg","image_path":"\/sites\/default\/files\/2026\/01\/20\/GIGconference_MMatPodium2.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/01\/20\/GIGconference_MMatPodium2.jpg","mime":"image\/jpeg","size":1326513,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/01\/20\/GIGconference_MMatPodium2.jpg?itok=S07ycvKV"}}},"media_ids":["679043"],"groups":[{"id":"1281","name":"Ivan Allen College of Liberal Arts"},{"id":"1214","name":"News Room"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"194606","name":"Artificial Intelligence"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"186858","name":"go-sei"},{"id":"187023","name":"go-data"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ETess Malone\u003Cbr\u003ESenior Research Writer\/Editor\u003Cbr\u003EGeorgia Tech\u003Cbr\u003E\u003Ca href=\u0022mailto:tess.malone@gatech.edu\u0022\u003Etess.malone@gatech.edu\u003C\/a\u003E\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}