Georgia Tech Student Group Puts AI Safety at the Forefront of Research

As artificial intelligence (AI) permeates everything we do — from internet searches to writing — questions and concerns about its safe use have emerged. How do large language models actually work? Is AI decision‑making aligned with human values? What if AI is misused for warfare? How should society govern AI?

The questions surrounding AI may be an unprecedented new challenge, but at Georgia Tech, students are already trying to answer them. The AI Safety Initiative (AISI) is a student group aiming to steer AI research and policy for society’s benefit.

“AI introduces new kinds of challenges into our legal and societal frameworks,” said Rocio Perales Valdes, AISI co-director and second-year computer science student. “Its capabilities emerge fast and on a jagged, hard-to-predict edge, which leaves AI governance like chasing a moving target. The work ahead is building the governance and technical tools we need to evaluate these systems, set direction, and enforce them without hindering innovation.”

AISI focuses on developing and deploying AI responsibly, rather than avoiding it. The group offers guest talks from AI researchers, fellowships that immerse students in the latest safety research through reading and discussion groups, and independent projects that contribute directly to the field. Past projects from AISI include demonstrating large language model security risks on Capitol Hill, responding to U.S. federal Requests for Information, and running a war game for GTRI faculty. Part lab and part learning community, AISI prepares students to become the next generation of AI safety researchers and practitioners. It has placed alumni at leading organizations such as Anthropic, RAND, Model Evaluation and Threat Research (METR), the UK AI Security Institute, and the Horizon Institute for Public Service.

“AI safety is an urgent problem because there is a rapidly growing gap between what AI systems can do and what we understand about them; yet mitigating AI risks is systematically neglected by current market incentives,” said Yixiong Hao, third‑year computer science student and co‑director of AISI. “I think the set of things I can do to directly move the needle is quite limited in the next three to five years, and that’s why I run this group. I have higher leverage in convincing smart people to work on neglected problems in AI safety.”

Founded in 2022 by Gaurav Sett, who is now a Ph.D. student at the RAND School of Public Policy and a fellow at the Institute for Progress, AISI has grown quickly. Its 10‑member executive board supports a broad base of student involvement, with more than 70 students participating in the fellowship program each semester. Over the past two years, members have also published 13 papers at top conferences such as the International Conference on Learning Representations, with projects spanning AI security and algorithmic transparency. 

From Discussion to Discovery

As a first‑year computer science student, Ishan Khire joined AISI looking for a deeper way to engage with AI safety and quickly found a pathway into research. After attending one general meeting, Khire enrolled in the group’s six‑week fellowship program, where students meet weekly to discuss current technical and policy challenges shaping the field.

“Finding a community that cares about AI safety was a big part of joining the fellowship,” Khire said. “Because AI safety is a broad subject, it was helpful to have an accountability group to discuss current issues.”

Thanks to the connections he made at AISI, Khire began conducting AI research with computing faculty member Giri Krishnan to predict the 3D structure of proteins. 

“AI is going to be really transformative in the next five to 10 years, and we want to make that transformation go well,” Khire said. “AISI tries to upskill people and connect them to technical and policy research that helps them find impactful work.”

Student Advantage

AISI is entirely student-run, with a small group of faculty advisors. That structure lends itself to exploratory, high-uncertainty research that can be difficult to fund through traditional academic labs, and faculty support has followed.

“Any cursory look at the news today will show there is significant angst about AI and whether it is being developed responsibly and with sufficient guardrails in place,” said Tom Conte, the College of Computing associate dean for Research. “AISI puts Georgia Tech at the forefront of that conversation.”

AISI member and computer science Ph.D. student Glenn Matlin has recruited many undergraduate researchers from the group for his own projects.

“I consider AISI like a third lab,” he said. “I use it as a great place for recruiting students. I’m constantly sharing my own research, and it helps me stay up to date with what other researchers are talking about.”

Matlin also credits AISI with advancing his own work in AI safety. Through the fellowship, he synthesized research that helped him apply for opportunities such as the prestigious AI safety mentorship at the MATS Program, which has connected him to additional research funding.

In a future increasingly shaped by algorithms, AISI’s students are betting that the most important safeguards won’t come from code alone, but from the people guiding how that code is built, deployed, and governed.

“AI safety matters to everyone,” Matlin said. “AI is going to disrupt not just technology, but also politics and business — and its risks are creating urgent opportunities to make it safer.”

Status

  • Workflow status: Published
  • Created by: Tess Malone
  • Created: 04/27/2026
  • Modified By: Tess Malone
  • Modified: 04/27/2026
