ISyE Statistical Seminar Speaker - Tan Bui-Thanh

Bio

Tan Bui-Thanh is an associate professor and the endowed William J Murray Jr. Fellow in Engineering No. 4 at the Oden Institute for Computational Engineering & Sciences and the Department of Aerospace Engineering & Engineering Mechanics at The University of Texas at Austin. Bui-Thanh obtained his PhD from the Massachusetts Institute of Technology in 2007, his Master of Science from the Singapore-MIT Alliance in 2003, and his Bachelor of Engineering from the Ho Chi Minh City University of Technology (DHBK) in 2001. He has decades of experience and expertise in multidisciplinary research across the boundaries of different branches of computational science, engineering, and mathematics. Bui-Thanh is a former elected vice president of the SIAM Texas-Louisiana Section and currently the elected secretary of the SIAM SIAG/CSE. He is a recipient of an NSF early CAREER award and the Oden Institute Distinguished Research Award, and a two-time winner of the Moncrief Faculty Challenge Award.

Abstract

Deep Learning (DL) is by design purely data-driven and in general does not require physics. This is the strength of DL but also one of its key limitations when it is applied to science and engineering problems in which underlying physical properties (such as stability, conservation, and positivity) and a desired accuracy must be achieved. DL methods in their original forms are not capable of respecting the underlying mathematical models or achieving the desired accuracy even in big-data regimes. On the other hand, many data-driven science and engineering problems, such as inverse problems, typically have limited experimental or observational data, and DL would overfit the data in this case. Leveraging information encoded in the underlying mathematical models, we argue, not only compensates for missing information in low-data regimes but also provides opportunities to equip DL methods with the underlying physics and hence obtain higher accuracy. This talk introduces a Tikhonov Network (TNet) that is capable of learning Tikhonov-regularized inverse problems. We present and provide intuitions for our formulations for general nonlinear problems. We rigorously show that our TNet approach can learn information encoded in the underlying mathematical models, and thus can produce consistent or equivalent inverse solutions, while naive purely data-based counterparts cannot. Furthermore, we theoretically study the error estimate between TNet and Tikhonov inverse solutions and the conditions under which they coincide. Extension to statistical inverse problems will also be presented.
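
The abstract does not spell out the TNet formulation, so the sketch below is only a rough, hypothetical illustration of the model-constrained idea for a linear toy problem: a linear inverse map is fit with a Tikhonov-style training loss that pushes the reconstruction back through an assumed forward model A, and is compared with a purely data-driven regression fit. The forward model A, the weight alpha, and the sample sizes are illustrative assumptions, not the speaker's TNet implementation.

# Hypothetical sketch (not the speaker's TNet code): linear forward model d = A u + noise.
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 20, 30, 200                        # parameter dim, data dim, number of samples
A = rng.normal(size=(m, n)) / np.sqrt(m)     # assumed linear forward model
U = rng.normal(size=(n, N))                  # training parameters
D = A @ U + 0.01 * rng.normal(size=(m, N))   # noisy training observations
alpha = 1e-2                                 # Tikhonov regularization weight (illustrative)

# Purely data-driven fit: min_W ||W D - U||^2, i.e. regression on the data pairs only.
W_data = U @ D.T @ np.linalg.inv(D @ D.T)

# Model-constrained fit: min_W ||A W D - D||^2 + alpha ||W D||^2.
# When D D^T is invertible, the minimizer is (A^T A + alpha I)^{-1} A^T,
# i.e. the classical Tikhonov inverse operator.
W_model = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T)

# Reconstruct a held-out datum with both maps and report the relative errors.
u_true = rng.normal(size=n)
d_test = A @ u_true + 0.01 * rng.normal(size=m)
for name, W in [("purely data-driven", W_data), ("model-constrained", W_model)]:
    err = np.linalg.norm(W @ d_test - u_true) / np.linalg.norm(u_true)
    print(f"{name:>18} relative error: {err:.3f}")

In this linear toy, the minimizer of the model-constrained loss coincides with the classical Tikhonov inverse operator by construction, which mirrors the consistency claim in the abstract; the talk's TNet addresses general nonlinear problems with neural networks.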
