
PhD Proposal by Edward Choi


Title:

Doctor AI: Interpretable Deep Learning for Modeling Electronic Health Records


Edward Choi

Ph.D. Student

School of Computational Science & Engineering

College of Computing

Georgia Institute of Technology


Date: Monday, November 27, 2017

Time: 12:00pm-2:00pm (EST)

Location: Klaus 1202


Committee:

Dr. Jimeng Sun (Advisor, School of Computational Science & Engineering, Georgia Institute of Technology)

Dr. James Rehg (School of Interactive Computing, Georgia Institute of Technology)

Dr. Jon Duke (School of Computational Science & Engineering, Georgia Institute of Technology)

Dr. Jacob Eisenstein (School of Interactive Computing, Georgia Institute of Technology)


Abstract:

Deep learning has recently shown superior performance compared to traditional statistical methods in complex domains such as computer vision, audio processing, and natural language processing. Naturally, deep learning techniques, combined with the large electronic health record (EHR) datasets generated by healthcare organizations, have the potential to bring dramatic changes to the healthcare industry. However, typical deep learning models are highly expressive black boxes that are difficult to adopt in real-world healthcare applications because they lack interpretability. For deep learning methods to be readily adopted in clinical practice, the models must be interpretable without sacrificing prediction accuracy.

In this thesis, we propose interpretable and accurate deep learning methods for modeling EHR data, focusing specifically on longitudinal EHR data. We begin with a direct application of a well-known deep learning algorithm, the recurrent neural network (RNN), to capture the temporal nature of longitudinal EHR data. Then, building on this initial approach, we develop interpretable deep learning models that address three aspects of computational healthcare: efficient representation learning of medical concepts, code-level interpretation of sequence predictions, and incorporation of domain knowledge into the model. As future work, we will extend this interpretable deep learning framework to incorporate multiple data modalities, such as lab measurements and clinical text.
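To make the modeling setup concrete, below is a minimal, hypothetical sketch (in PyTorch) of the kind of RNN application the abstract describes; it is not the proposal's actual code, and all names, layer sizes, and the choice of a GRU are assumptions. A recurrent network reads a patient's sequence of visits, each encoded as a multi-hot vector of medical codes, and predicts which codes are likely to appear in the next visit.

# Illustrative sketch only: a GRU over visit sequences of medical codes.
# All identifiers and dimensions are assumed, not taken from the proposal.
import torch
import torch.nn as nn

class VisitRNN(nn.Module):
    def __init__(self, n_codes=2000, emb_dim=128, hidden_dim=256):
        super().__init__()
        # Project a multi-hot visit vector to a dense visit embedding.
        self.embed = nn.Linear(n_codes, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        # Score every medical code for the following visit.
        self.out = nn.Linear(hidden_dim, n_codes)

    def forward(self, visits):                     # visits: (batch, n_visits, n_codes)
        h, _ = self.rnn(torch.tanh(self.embed(visits)))
        return self.out(h)                         # logits at each time step

# Usage: one patient with five visits, each a multi-hot code vector.
model = VisitRNN()
x = torch.zeros(1, 5, 2000)
x[0, 0, [10, 42]] = 1.0                            # codes observed at the first visit
logits = model(x)                                  # (1, 5, 2000)
# Train step t's output against the codes of visit t+1.
loss = nn.functional.binary_cross_entropy_with_logits(logits[:, :-1], x[:, 1:])

A sigmoid over the logits yields per-code probabilities for the next visit; interpretability mechanisms of the kind the thesis studies (for example, attention over visits and codes) would be layered on top of a backbone like this.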
