PhD Defense by Ben Tamo

Title: Causal Inference and Evidence-Grounded Language Models for Trustworthy Personalized Clinical Decision Support

 

Date: Thursday, April 9th, 2026

Time: 4:00 - 6:00 PM ET

Location: Zoom Link - https://gatech.zoom.us/j/97794499684

 

Ben Tamo

Machine Learning Candidate

School of Electrical and Computer Engineering

Georgia Institute of Technology

 

Committee

  1. May D. Wang, PhD, Professor of Biomedical Engineering, Electrical and Computer Engineering, and Computational Science and Engineering at Georgia Tech and Emory University; Director of the Biomedical Big Data Initiative; and Georgia Distinguished Cancer Scholar.
  2. Cassie Mitchell, PhD, Associate Professor of Biomedical Engineering at Georgia Tech and Emory University.
  3. David Anderson, PhD, Professor in the School of Electrical and Computer Engineering at Georgia Tech.
  4. Larry Heck, PhD, Professor with a joint appointment in the Schools of Electrical and Computer Engineering and Interactive Computing at Georgia Tech.
  5. B. Randall Brenn, MD, Chief of Anesthesia, Shriners Hospital for Children, Philadelphia, and Associate Professor of Anesthesia and Critical Care, Kennett Square, Pennsylvania.

 

Abstract

Clinical decision support systems (CDSS) are increasingly expected to function not merely as predictive tools, but as active partners in clinical reasoning. However, most existing machine learning approaches remain limited to population-level risk estimation, lacking the ability to personalize decisions, ensure reliability, and provide verifiable justification. This thesis addresses these limitations through a unifying framework organized around three fundamental questions: Can we personalize decisions? Can we trust those decisions? And can we ensure those decisions are evidence-grounded and reasoned?

 

To address the first question, whether we can personalize decisions, this thesis develops causal machine learning approaches that move beyond population-level prediction toward individualized decision-making. By combining latent patient representations with counterfactual modeling, we enable estimation of heterogeneous treatment effects and extend these capabilities to real-time surgical settings.

 

The second question, whether we can trust those decisions, focuses on reliability in high-stakes environments. We address key failure modes of clinical AI, namely bias, overconfidence, and opacity, through fairness-aware learning, uncertainty quantification, and reliability metrics that assess consistency and evidence alignment.

 

Finally, the third question, whether we can ensure those decisions are evidence-grounded and reasoned, unifies personalization and trustworthiness through structured, evidence-guided reasoning. We develop methods that constrain model outputs using clinical knowledge and introduce training frameworks that directly optimize for evidence adherence, ensuring decisions are both traceable and verifiable.

 

Together, these contributions advance CDSS from passive prediction systems to transparent, personalized, and evidence-aligned reasoning frameworks. By integrating causal personalization, reliability-aware modeling, and evidence-grounded reasoning, this thesis establishes a foundation for trustworthy personalized clinical decision support that can meaningfully augment clinical decision-making in high-stakes care.
