Power-Seeking AI and X-risk Mitigation (AI Alignment Speaker Series)

A number of leading AI researchers have raised concerns about the existential risk posed by advanced AI, including Stuart Russell (UC Berkeley), Francesca Rossi (IBM), Shane Legg (DeepMind), and Eric Horvitz (Microsoft). In his report "Is power-seeking AI an existential risk?", Joseph Carlsmith critically examines the case that such AI poses an existential risk and attempts to estimate the level of that risk, focusing on AI systems with advanced capability, agentic planning, and strategic awareness. In this talk, Joseph will discuss whether and when we should expect humans to build AI with these capabilities, the incentives for and against doing so, and the difficulties of aligning such AI with human values.

Joseph Carlsmith is a senior research analyst at Open Philanthropy and a doctoral student in philosophy at the University of Oxford.

The event will be livestreamed. To attend in person, RSVP here and you will receive more information (you can also place a free order for a vegan meal from Chipotle!). If you prefer to participate remotely, register here and you will receive the Zoom link.
