PhD Defense | A deep learning approach to solving inverse problems

Jihui Jin - Machine Learning PhD Student - School of Electrical and Computer Engineering

Date: April 16th

Time: 12:00 PM – 1:30 PM ET

Location: Online

Meeting Link: https://gatech.zoom.us/j/97696407108, Meeting ID: 976 9640 7108

Committee

Justin Romberg (Advisor), School of Electrical and Computer Engineering

Mark Davenport, School of Electrical and Computer Engineering

Ghassan AlRegib, School of Electrical and Computer Engineering

Karim Sabra, George W. Woodruff School of Mechanical Engineering

David Anderson, School of Electrical and Computer Engineering

Abstract

The objective of this thesis is to develop machine learning methods to solve inverse problems. Inverse problems arise when there is a signal, image, or volume of interest that can only be measured indirectly. The forward mapping process is typically well understood and can be modeled accurately through analytical methods or simulation. However, inverting the forward model to recover the signal of interest from measurements is generally ill-posed and often requires extensive computation to reach an acceptable solution.
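For context, inverse problems of this kind are commonly posed as regularized data fitting; the notation below (x for the unknown signal, y for the measurements, A for the forward model, R for a regularizer with weight \lambda) is generic and not specific to this thesis:

    y = A(x) + \eta, \qquad \hat{x} = \arg\min_{x} \tfrac{1}{2}\,\| A(x) - y \|_2^2 + \lambda\, R(x)

The first term enforces consistency with the measurements through the forward model, while the regularizer injects prior structure to compensate for the ill-posedness. Each evaluation of the objective and its gradient requires applying A, which is what makes the iterations expensive when A is a costly simulation.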

Modern machine learning methods have achieved tremendous success in recent years by leveraging labeled data (such as signals and their corresponding measurements) to learn arbitrary mappings, typically in a black-box manner. Inverse problems, in contrast, come with a rich understanding of the forward mapping process, a description of the relationship between the paired data, that machine learning algorithms can leverage. This research aims to integrate knowledge of the forward model into machine learning solutions to reduce the computational expense of solving inverse problems. In our first aim, we train a surrogate forward model using supervised learning techniques and integrate it into a classical optimization framework; the surrogate vastly reduces the cost of both the forward and gradient calculations, allowing for cheaper iterations. In our second aim, we improve on this method by approximating the forward model with an ensemble of linearizations, reducing the black-box nature of neural network surrogates. In our third aim, we develop a computationally feasible method to integrate non-linear forward models into "Deep Unrolled" architectures, enabling end-to-end training.
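As a rough illustration of the first aim, the following is a minimal sketch, not the author's implementation, of how a trained neural surrogate of the forward operator could stand in for the expensive simulator inside a classical gradient-based reconstruction loop. The architecture, dimensions, step size, iteration count, and regularization weight are placeholder choices made only for this example, and PyTorch is assumed purely for convenience.

import torch

# Placeholder surrogate: assume it has already been trained (supervised, on
# paired signal/measurement data) to approximate the true forward map A: x -> y.
surrogate = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 32),
)
surrogate.eval()
for p in surrogate.parameters():
    p.requires_grad_(False)  # freeze the surrogate; only the signal is updated

y_meas = torch.randn(32)                     # placeholder measurements
x_hat = torch.zeros(64, requires_grad=True)  # signal estimate being recovered
optimizer = torch.optim.SGD([x_hat], lr=1e-2)

for _ in range(200):
    optimizer.zero_grad()
    # Data fidelity uses the cheap surrogate instead of the expensive true
    # forward simulation; autograd supplies the gradient through the network.
    data_fit = 0.5 * torch.sum((surrogate(x_hat) - y_meas) ** 2)
    regularizer = 1e-3 * torch.sum(x_hat ** 2)  # simple quadratic prior
    loss = data_fit + regularizer
    loss.backward()
    optimizer.step()

Each update then costs only a forward and backward pass through a small network rather than a full physics simulation, which is where the cheaper iterations come from; the second and third aims concern how faithfully the learned surrogate (via an ensemble of linearizations, or an unrolled end-to-end architecture) represents the true forward model.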
