PhD Defense by Medha Shekhar

Event Details
  • Date/Time: Monday, July 12, 2021, 8:30 am - 10:30 am
  • Location: Atlanta, GA; REMOTE
  • URL: Bluejeans

Summary Sentence: How do humans give confidence? Comparing popular process models of confidence generation


Name: Medha Shekhar

Dissertation Defense Meeting
Date: Monday, July 12, 2021
Time: 8:30 AM
Location: Virtual


Advisor: Dobromir Rahnev, Ph.D. (Georgia Tech)


Dissertation Committee Members:
Rick Thomas, Ph.D. (Georgia Tech)
Paul Verhaeghen, Ph.D. (Georgia Tech)
Christopher J. Rozell, Ph.D. (Georgia Tech)
Rani Moran, Ph.D. (University College London)


Title: How do humans give confidence? Comparing popular process models of confidence generation


Abstract: Humans have the metacognitive ability to assess the likelihood that their decisions are correct via estimates of confidence. Several theories have attempted to model the computational mechanisms that generate confidence, yet, because little work has directly compared these models on the same data, there is no consensus among them. In this study, we extensively compare twelve popular process models by fitting them to large datasets from two experiments in which participants completed a perceptual task with confidence ratings. Quantitative comparisons identified the best-fitting model as one that postulates a single system for generating both choice and confidence judgments, where confidence is additionally corrupted by signal-dependent noise. These results contradict dual-processing theories, according to which confidence and choice arise from coupled or independent systems. Model evidence from these data also failed to support popular notions that confidence is derived from post-decisional evidence, strictly decision-congruent evidence, or posterior probability computations. Further, qualitative analyses showed that the best-fitting models were those that closely predicted individual variations in both primary task performance and metacognitive ability, whereas the worst-performing models predicted one or both poorly. Together, these analyses establish a general framework for model evaluation that also provides qualitative insights into the models' successes and failures. Most importantly, by quantifying evidence for theories of confidence, these results begin to reveal the nature of metacognitive computations.
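The quantitative comparisons described in the abstract rest on a standard workflow: fit each candidate model to the data by maximum likelihood, then score the fits with a criterion that rewards likelihood while penalizing free parameters. As a minimal illustration only (these are hypothetical toy models, not the dissertation's twelve process models or its code), the sketch below compares two Gaussian accounts of simulated confidence ratings using AIC:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate confidence ratings on a task with ~75% accuracy, where
# confidence is genuinely higher on correct trials (assumed toy setup).
n = 1000
correct = rng.random(n) < 0.75
conf = np.where(correct,
                rng.normal(3.0, 0.8, n),   # confidence on correct trials
                rng.normal(2.2, 0.8, n))   # confidence on error trials

def gauss_loglik(x, mu, sd):
    """Total log-likelihood of data x under Normal(mu, sd)."""
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2)
                  - (x - mu)**2 / (2 * sd**2))

# Model A (2 parameters): one Gaussian for all trials.
llA = gauss_loglik(conf, conf.mean(), conf.std())

# Model B (4 parameters): separate Gaussians for correct vs. error trials.
llB = sum(gauss_loglik(conf[m], conf[m].mean(), conf[m].std())
          for m in (correct, ~correct))

# AIC = 2k - 2*log-likelihood; lower is better.
aicA = 2 * 2 - 2 * llA
aicB = 2 * 4 - 2 * llB
print(f"AIC, single-Gaussian model:     {aicA:.1f}")
print(f"AIC, correctness-split model:   {aicB:.1f}")
```

Because the simulated confidence really does depend on correctness, the extra parameters of model B buy enough likelihood to overcome the AIC penalty, so model B scores lower. The same logic scales to richer process models, where the likelihoods come from numerical fits rather than closed-form Gaussian estimates.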


Additional Information

In Campus Calendar: Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
PhD Defense
  • Created By: Tatianna Richardson
  • Created On: Jun 24, 2021 - 2:15pm
  • Last Updated: Jun 24, 2021 - 2:15pm