
PhD Defense by Li Tong


BME PhD Thesis Defense Presentation

Date: November 11, 2020

Time: 9:00 – 10:30 AM

BlueJeans Link: https://bluejeans.com/132901779

Meeting ID: 132 901 779

Advisor:

May D. Wang, Ph.D.
Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University

Thesis Committee Members:

Omer T. Inan, Ph.D.
School of Electrical and Computer Engineering, Georgia Institute of Technology

Wei Sun, Ph.D.
Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University

Nikhil K. Chanani, M.D.
Department of Pediatrics, Emory University School of Medicine

Shriprasad Deshpande, M.D.
Children’s National Health System, George Washington University, Washington, DC

Title: Enable Precision Medicine by Integrating Multi-Modal Biomedical Data

Abstract:

With the advancement of technologies such as high-throughput sequencing and wearable devices, massive amounts of multi-modal biomedical data are generated every year at unprecedented speed and volume. However, extracting information and knowledge from these noisy, unstructured, heterogeneous, and often high-dimensional biomedical big data remains a significant challenge in both research and clinical applications. This thesis develops deep learning-based data integration methods that combine data from disparate sources to increase data value and improve model performance. We hypothesize that multi-modal biomedical data contain both independent and dependent information. Independent modalities (e.g., genetic factors vs. environmental factors) can be integrated under the complementary principle, so that the unique information in each modality jointly contributes to the final decision. Dependent modalities (e.g., multi-omics data) can be integrated under the consensus principle to improve model robustness and eliminate inconsistencies. Three aims are accomplished: (Aim 1) We utilize the complementary principle to integrate independent modalities by concatenating the hidden features learned from each modality's independent representation. We have applied the proposed framework to integrate electronic health records (EHRs) with MRI imaging and single nucleotide polymorphism (SNP) data for improved prediction of Alzheimer's disease. (Aim 2) We integrate dependent modalities by implicitly modeling their complex interactions under the consensus principle. A modality-invariant feature representation is achieved by either cross-modality translation or divergence-based consensus regularization. We have applied the proposed deep networks to integrate multi-omics data (i.e., mRNA expression, DNA methylation, miRNA expression, and copy number variations) for improved prediction of breast cancer survival. (Aim 3) We have developed an autoencoder-based semi-supervised learning framework to integrate unlabeled endomicroscopic imaging data, and a weakly supervised learning framework to learn from whole-slide images without local annotations. By developing multiple deep neural networks to integrate multi-modal biomedical data, we aim to improve healthcare quality toward precision medicine by providing a more comprehensive evaluation of each patient.
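For readers unfamiliar with the two fusion strategies named in the abstract, the short PyTorch sketch below illustrates one common way to realize them: modality-specific encoders whose hidden features are concatenated before a shared prediction head (complementary principle), plus a divergence penalty that pushes the modalities' hidden representations toward agreement (consensus principle). All names, layer sizes, and the 0.1 loss weight are illustrative assumptions for this sketch only, not the architecture presented in the thesis.

# Minimal, hypothetical sketch -- not the thesis architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Maps one modality's raw features to a shared-size hidden representation."""
    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, hidden_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class MultiModalNet(nn.Module):
    """Complementary principle: encode each modality independently and
    concatenate the hidden features before a shared prediction head."""
    def __init__(self, modality_dims, hidden_dim=64, n_classes=2):
        super().__init__()
        self.encoders = nn.ModuleList(
            [ModalityEncoder(d, hidden_dim) for d in modality_dims])
        self.head = nn.Linear(hidden_dim * len(modality_dims), n_classes)

    def forward(self, inputs):
        hidden = [enc(x) for enc, x in zip(self.encoders, inputs)]
        return self.head(torch.cat(hidden, dim=1)), hidden

def consensus_penalty(hidden):
    """Consensus principle: average KL divergence between each modality's
    softmax-normalized hidden features and their mean (a Jensen-Shannon-style
    penalty; one simple choice among many divergence-based regularizers)."""
    probs = [F.softmax(h, dim=1) for h in hidden]
    mean = torch.stack(probs).mean(dim=0)
    return sum(F.kl_div(mean.log(), p, reduction="batchmean") for p in probs) / len(probs)

# Toy usage with made-up feature sizes for three modalities (e.g., EHR, imaging, SNP).
model = MultiModalNet(modality_dims=[40, 200, 1000])
batch = [torch.randn(8, 40), torch.randn(8, 200), torch.randn(8, 1000)]
labels = torch.randint(0, 2, (8,))
logits, hidden = model(batch)
loss = F.cross_entropy(logits, labels) + 0.1 * consensus_penalty(hidden)
loss.backward()

In a setup like this, concatenation keeps each modality's unique signal available to the prediction head, while the divergence term encourages agreement among modalities that are expected to carry redundant information.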
