PhD Defense by Jacob Logas
Title: Casualizing Privacy: Bridging the Gap Between Anti-Facial Recognition Obfuscation Design and User Acceptance
Date: Friday, April 11th, 2025
Time: 10:00 AM (Eastern Time)
Location: GVU Cafe
Zoom Link: https://gatech.zoom.us/j/91758844496?pwd=G6wpBLbaiWBeBUGrJaY9BinamYjjzd.1
Jacob Logas
Ph.D. Candidate
School of Interactive Computing
Georgia Institute of Technology
Committee
Dr. Rosa I. Arriaga (Advisor) - School of Interactive Computing, Georgia Institute of Technology
Dr. Sauvik Das (Advisor) - Human-Computer Interaction Institute, Carnegie Mellon University
Dr. Thad Starner - School of Interactive Computing, Georgia Institute of Technology
Dr. Duen Horng (Polo) Chau - School of Computational Science and Engineering, Georgia Institute of Technology
Dr. Annie Antón - School of Interactive Computing, Georgia Institute of Technology
Dr. Kelly Caine - School of Computing (Human-Centered Computing), Clemson University
Abstract:
Advancements in computer vision have enabled effortless and ubiquitous surveillance through facial recognition, posing challenges for online users trying to maintain anonymity as their faces index their activities across social networks. Despite technically effective anti-facial recognition obfuscation methods, their low adoption rates can be attributed to insufficient attention to human factors in design. My dissertation addresses this gap with a mixed-methods approach that prioritizes user perspectives. In the first study, I design a novel obfuscation method that empowers users to avoid data collection by centralized online social networks (cOSNs). This proof of concept, evaluated with 19 marginalized users, reveals that acceptable obfuscations must support users' public presentation, even to those they are obfuscating against. In the second study, I recruit 15 users to qualitatively assess user-facing Adversarial Machine Learning (AML) obfuscations that disrupt facial recognition pipelines while optimizing for imperceptibility. I find that these systems often produce undesirable outputs when visible or are distrusted due to a lack of perceptual assurances. I synthesize these qualitative findings into a brief, validated psychometric scale, the SAIA-8, to ease integrating user perspectives into obfuscation evaluation. In the third study, I build on insights from the prior two studies to develop AML-based obfuscations that might better align with a user's preferred presentation. I implement a set of 40 unique obfuscations that each emulate an existing aesthetic image filter. These novel obfuscations maintain adversarial effectiveness against facial recognition and outperform prior work on the SAIA-8 across 630 participants. Finally, I present AuraMask, an extensible development pipeline for rapid prototyping and evaluation of anti-facial recognition obfuscation designs, to accelerate future research in this space.