
PhD Proposal by Zachary R. Tidler


Name: Zachary R. Tidler

Dissertation Proposal Meeting

Date: Friday, December 19, 2025

Time: 1:00 PM

Location: JS Coon - Room 148

Remote Meeting Link: https://gatech.zoom.us/j/98219983791

Advisor: Richard Catrambone, Ph.D. (Georgia Tech)

Dissertation Committee Members:

Richard Catrambone, Ph.D. (Georgia Tech)

Mengyao Li, Ph.D. (Georgia Tech)

Dobromir Rahnev, Ph.D. (Georgia Tech)

Bruce N. Walker, Ph.D. (Georgia Tech)

Chris Hale, Ph.D. (Georgia Tech Research Institute)

Title: The Psychology of Deepfake-Video Detection

Abstract: Deepfake videos (i.e., synthetic media generated using artificial intelligence to misrepresent reality) pose an escalating threat to information integrity and public trust. Despite advances in computational detection methods, humans remain critical arbiters of media authenticity in many real-world contexts. Yet recent meta-analytic evidence suggests that human detection accuracy barely exceeds chance levels, with substantial and poorly understood variation across individuals. This dissertation proposes a comprehensive research program intended to illuminate the psychological nature of human deepfake-detection performance.

Across nine proposed studies organized into three empirical chapters, this work addresses fundamental questions about who is susceptible to deepfake deception, why detection performance varies, and under what conditions humans can successfully identify manipulated media. The first set of studies is intended to reveal psychological trait predictors of individual differences in detection ability. Prospective predictors include general mental ability, affect-detection skill, Big Five personality traits, and clinical status (including mild cognitive impairment, autism spectrum disorder, and ADHD). The second set of studies tests hypothesized mechanisms underlying performance differences while simultaneously evaluating practical interventions, including directed visual attention strategies, motivational manipulations, and trial-by-trial corrective feedback. The third set of studies shifts the focus from psychological variables to stimulus-related variables (e.g., video duration, video-subject demographics, and target prevalence) that determine the boundaries of detection capability.

In all studies, performance will be quantified within a signal-detection-theoretic framework, separating true sensitivity to manipulation from simple tendencies to label videos as real or fake. By pairing this analytic approach with a broad set of psychological-trait, procedural, and stimulus-level factors, the work is intended to map the conditions under which human judgment is fragile versus robust. The results will help clarify who is most at risk from deepfake deception, which viewing strategies genuinely help (or fail), and how task design and base rates shape detection outcomes, thereby providing a psychological foundation for training programs, human–AI detection systems, and policy responses that must increasingly account for synthetic media in everyday information environments.
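For readers unfamiliar with the signal-detection framing mentioned above, the sketch below illustrates how sensitivity (d') and response criterion (c) can be computed from raw hit and false-alarm counts under an equal-variance Gaussian model. This is an illustrative example only, not the proposal's own analysis code; the function name, variable names, and the log-linear correction are assumptions made for the sketch.

    # Illustrative sketch: equal-variance Gaussian signal-detection indices
    # (d' for sensitivity, c for response bias). Not the proposal's analysis
    # code; names and the log-linear correction are assumptions.
    from scipy.stats import norm

    def sdt_indices(hits, misses, false_alarms, correct_rejections):
        """Return (d_prime, criterion) from raw trial counts.

        A log-linear correction (add 0.5 to each cell) guards against
        infinite z-scores when a hit or false-alarm rate is 0 or 1.
        """
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_hit - z_fa                 # sensitivity to manipulation
        criterion = -0.5 * (z_hit + z_fa)      # bias toward "fake" vs. "real"
        return d_prime, criterion

    # Example: a viewer who flags 38 of 50 deepfakes and 12 of 50 real videos
    print(sdt_indices(hits=38, misses=12, false_alarms=12, correct_rejections=38))

In this framework, two viewers with the same overall accuracy can differ sharply: one may be genuinely sensitive to manipulation artifacts (high d'), while another simply labels most videos "fake" (a liberal criterion), which is the distinction the proposed analyses are designed to preserve.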
