Tackling Global AI Hiring Bias: Prioritizing Collaboration Over Division Between the US, EU, and China
While AI-driven systems hold the potential to streamline hiring, discrimination in automated recruitment has emerged as a pressing global concern as these tools gain widespread adoption. In August 2023, for instance, the US Equal Employment Opportunity Commission (EEOC) reached a landmark settlement with iTutorGroup, a Chinese education technology company, in the first US case to address AI-driven hiring bias involving a foreign company. iTutorGroup was accused of automatically rejecting more than 200 applicants solely because of their age, a protected characteristic under US law, underscoring the serious ethical risks that AI-driven hiring can pose.
As automated tools for job posting, resume screening, and video interviewing become more prevalent worldwide, they increasingly shape access to employment, often to the detriment of marginalized groups such as women, ethnic minorities, and people with disabilities. Addressing bias in these systems demands a collaborative, cross-border effort to design and deploy ethical frameworks, regulatory priorities, and technological innovations capable of establishing a global standard.
This research was conducted by Huaigu Li, Ph.D. student in computer science at the Georgia Institute of Technology, and Michael L. Best, professor at the Sam Nunn School of International Affairs and the School of Interactive Computing at the Georgia Institute of Technology.
Read the full article in Tech Policy Press (Jan 31, 2025).