All-Powerful AI Isn’t an Existential Threat, According to New Georgia Tech Research
Ever since ChatGPT’s debut in late 2022, concerns about artificial intelligence (AI) potentially wiping out humanity have dominated headlines. New research from Georgia Tech suggests those anxieties are misplaced.
“Computer scientists often aren’t good judges of the social and political implications of technology,” said Milton Mueller, a professor in the Jimmy and Rosalynn Carter School of Public Policy. “They are so focused on the AI’s mechanisms and are overwhelmed by its success, but they are not very good at placing it into a social and historical context.”
In the four decades Mueller has studied information technology policy, he has never seen any technology hailed as a harbinger of doom — until now. So, in a Journal of Cyber Policy paper published late last year, he researched whether the existential AI threat was a real possibility.
Mueller found that how far AI can go, and where its limits lie, is something society shapes. How policymakers should get involved depends on the specific AI application.
Defining Intelligence
The AI sparking all this alarm is called artificial general intelligence (AGI): a “superintelligence” that would be all-powerful and fully autonomous. Part of the problem, Mueller realized, is that no one can agree on what AGI actually is.
Some computer scientists claim AGI would match human intelligence, while others argue it could surpass it. Both assumptions hinge on what “human intelligence” really means. Today’s AI is already better than humans at performing thousands of calculations in an instant, but that doesn’t make it creative or capable of complex problem-solving.
Understanding Independence
Deciding on the definition isn’t the only issue. Many computer scientists assume that as computing power grows, AI could eventually overtake humans and act autonomously.
Mueller argued that this assumption is misguided. Today’s AI does not act autonomously; it is always directed or trained toward a goal that someone sets. Think of the prompt you type into ChatGPT to start a conversation.
When AI seems to disregard its instructions, the cause is an inconsistency in how those instructions were written, not the machine coming alive. For example, in a boat race video game Mueller studied, the AI discovered it could rack up more points by circling the course than by racing its competitors to the finish. That was a glitch in the system’s reward structure, not AGI autonomy.
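The pattern is easy to reproduce outside a video game. The sketch below is a hypothetical toy, not the system Mueller studied: the reward counts points but never mentions finishing the race, so an agent that optimizes the stated reward loops past the point pickup forever, while a finish-seeking agent wins with a lower score.

```python
def simulate(policy, steps=20):
    """Tiny 'boat race': positions 0-9, a point pickup at position 3, the finish line at 9."""
    position, score = 0, 0
    for _ in range(steps):
        if policy(position) == "advance":
            position += 1
        else:                      # "circle": double back toward the pickup
            position -= 1
        if position == 3:
            score += 10            # the reward only counts points, never finishing
        if position == 9:
            return score, "finished"
    return score, "never finished"

# Policy A heads straight for the finish line.
racer = lambda pos: "advance"

# Policy B maximizes the stated reward, which never mentions finishing the race.
point_chaser = lambda pos: "circle" if pos >= 3 else "advance"

print(simulate(racer))         # (10, 'finished')
print(simulate(point_chaser))  # (90, 'never finished')
```

The misbehavior is entirely a property of the reward the designers wrote down; change the reward to pay out only at the finish line and the same optimizing agent races normally, which is the kind of correction Mueller describes below.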
“Alignment gaps happen in all kinds of contexts, not just AI,” Mueller said. “I've studied so many regulatory systems where we try to regulate an industry, and some clever people discover ways that they can fulfill the rules but also do bad things. But if the machine is doing something wrong, computer scientists can reprogram it to fix the problem.”
Relying on Regulation
In its current form, even misaligned AI can be corrected. Misalignment also doesn’t mean AI would snowball past the point where humans lose control of its outcomes. To do that, AI would need physical agents, such as robots, to do its bidding, along with the power supply and infrastructure to maintain itself. A data center alone can do none of this; it depends on human intervention just to keep running, much less to become omnipotent. Basic laws of physics, such as how big a machine can be and how much it can compute, would also prevent a super AI.
More importantly, AI is not one homogeneous entity. Mueller argued that different applications fall under different laws, regulations, and social institutions. For example, the data scraping used to train AI raises copyright questions governed by existing copyright law. AI used in medicine can be overseen by the Food and Drug Administration, regulated drug companies, and medical professionals. These are just a few areas where policymakers could intervene with domain-specific expertise instead of trying to write universal AI regulations.
The real challenge isn’t stopping an AI apocalypse — it’s crafting smart, sector-specific policies that keep technology aligned with human values. To avoid being a victim of AI, humans can, and should, put up focused guardrails.