<node id="688528">
  <nid>688528</nid>
  <type>news</type>
  <uid>
    <user id="34541"><![CDATA[34541]]></user>
  </uid>
  <created>1772050165</created>
  <changed>1774011386</changed>
  <title><![CDATA[Safe Artificial Intelligence Isn’t Enough, According to New Georgia Tech Research]]></title>
  <body><![CDATA[<p>Artificial intelligence (AI) loves to cheat. When matched against a chess bot, an OpenAI model preferred hacking into its opponent’s system to winning the game fairly, according to a recent&nbsp;<a href="https://time.com/7259395/ai-chess-cheating-palisade-research/">study</a>.&nbsp;</p><p>While chess doesn’t have moral stakes, more serious ethical issues could arise in everything from medicine to self-driving cars as AI becomes even more pervasive. So, what does it mean for AI to be safe?&nbsp;</p><p>“No one is saying developing safe AI will be easy, but we need to make sure we cover as many ethical concerns as possible,” said&nbsp;<a href="https://www.tylercookphd.com/">Tyler Cook</a>, a research affiliate at the&nbsp;<a href="https://spp.gatech.edu/">Jimmy and Rosalynn Carter School of Public Policy</a> at Georgia Tech and assistant program director of the&nbsp;<a href="https://ailearning.emory.edu/" target="_blank">Center for AI Learning</a>&nbsp;at Emory University. “Humans also care about being treated fairly. We care about not being deceived. We should aim for much more than safety.”</p><p>AI is too complex for simple guardrails, Cook argues in a recent <em>Science and Engineering Ethics</em>&nbsp;<a href="https://philpapers.org/rec/COOACF-3">paper</a>. But AI still needs to be constrained by, and imbued with, the human values of fairness, honesty, and transparency so it doesn’t make ethically dubious decisions.</p><p>AI is not just a problem to manage. It’s a technology whose impact depends on the values we choose to build into it, Cook claims. Developers must think carefully about the world their systems will shape. AI shouldn’t remake our world; it should integrate into it.</p><h2><strong>Safe vs. Autonomous AI</strong></h2><p>Some computer scientists would say “safe” AI, or AI that doesn’t cause harm, is the answer. But AI is not a simple machine like a lawnmower, where a blade guard is enough to prevent harm.&nbsp;</p><p>Establishing AI safety is more complex than adding protective features. Deciding how much autonomy AI gets is just as important.</p><p>“We don't want AI systems deciding that they don't want to pursue fairness anymore,” Cook said. “We don't want AI to be autonomous with respect to its ethical goals or values.”&nbsp;</p><p>Such ethical autonomy&nbsp;could lead to unpredictable or undesirable outcomes. Consider algorithmic bias: human biases, combined with machine automation, can lead to unequal consequences. An AI mortgage lender could favor certain applicant demographics over others, for example.&nbsp;</p><p>Cook posits there is a middle ground between merely safe AI and autonomous ethical AI: “end-constrained ethical AI.”&nbsp;</p><p>“As designers of AI systems, computer scientists should choose what we want the AI to prioritize: fairness, honesty, transparency,” Cook said. “That's why I use the language of constraint. We're constraining the AI’s values so they can actually benefit society.”</p><p>End-constrained ethical AI asks designers to set those boundaries intentionally, not as an afterthought. And if developers take that responsibility seriously, AI doesn’t have to reinvent our world; it can strengthen the one we already have.</p><p dir="ltr">"<a href="https://doi.org/10.1007/s11948-025-00577-6" target="_blank">A Case for End-Constrained Ethical Artificial Intelligence</a>." <em>Science and Engineering Ethics</em> 32.7 (2026).</p><p dir="ltr">DOI: 10.1007/s11948-025-00577-6</p>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2026-02-25T00:00:00-05:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Fairness, honesty, and transparency are needed in AI for it to benefit humanity. ]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p><strong>Fairness, honesty, and transparency are needed in AI for it to benefit humanity.&nbsp;</strong></p>]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="679437">
            <nid>679437</nid>
            <type>image</type>
            <title><![CDATA[TylerCook.jpeg]]></title>
            <body><![CDATA[<p>Tyler Cook is a research affiliate at the&nbsp;<a href="https://spp.gatech.edu/">Jimmy and Rosalynn Carter School of Public Policy</a> at Georgia Tech and assistant program director of the&nbsp;<a href="https://ailearning.emory.edu/" target="_blank">Center for AI Learning</a>&nbsp;at Emory University.&nbsp;</p>]]></body>
                          <field_image>
                <item>
                  <fid>263600</fid>
                  <filename><![CDATA[TylerCook.jpeg]]></filename>
                  <filepath><![CDATA[/sites/default/files/2026/02/25/TylerCook.jpeg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2026/02/25/TylerCook.jpeg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[Tyler Cook]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>Tess Malone, Senior Research Writer/Editor</p><p>tess.malone@gatech.edu</p>]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <field_categories>
      </field_categories>
  <core_research_areas>
      </core_research_areas>
  <field_news_room_topics>
          <item>
        <tid>71881</tid>
        <value><![CDATA[Science and Technology]]></value>
      </item>
      </field_news_room_topics>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>1281</item>
          <item>1214</item>
          <item>1188</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[Ivan Allen College of Liberal Arts]]></item>
          <item><![CDATA[News Room]]></item>
          <item><![CDATA[Research Horizons]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>187915</tid>
        <value><![CDATA[go-researchnews]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
