PhD Defense by Bo Dai

Event Details
  • Date/Time:
    • Monday, July 23, 2018
      12:00 pm - 2:00 pm
  • Location: Klaus Advanced Computing Building, Room 1315

Summary Sentence: Learning over Functions, Distributions and Dynamics via Stochastic Optimization


Title: Learning over Functions, Distributions and Dynamics via Stochastic Optimization


Bo Dai

School of Computational Science and Engineering

College of Computing

Georgia Institute of Technology


Date: Monday, July 23rd, 2018

Time: 12:00 PM to 2:00 PM EDT

Location: Klaus Advanced Computing Building, Room 1315


Committee:

Dr. Le Song (Advisor), School of Computational Science and Engineering, Georgia Institute of Technology

Dr. Hongyuan Zha, School of Computational Science and Engineering, Georgia Institute of Technology

Dr. Guanghui Lan, H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology

Dr. Byron Boots, School of Interactive Computing, Georgia Institute of Technology

Dr. Arthur Gretton, Gatsby Computational Neuroscience Unit, University College London


Abstract:

Machine learning has recently witnessed revolutionary success across a wide spectrum of domains. The learning objective, the model representation, and the learning algorithm are the key components of a machine learning method. To construct successful methods that naturally fit different problems, with different targets and inputs, one should consider these three components together in a principled way.


This dissertation aims to develop a unified learning framework for this purpose. At the heart of this framework is optimization with integral operators in infinite-dimensional spaces. This integral-operator view provides an abstract tool for considering the three components together across a wide range of machine learning tasks, and it leads to efficient algorithms with flexible representations that achieve better approximation ability, scalability, and statistical properties.


We investigate several machine learning problems, namely kernel methods, Bayesian inference, invariance learning, and policy evaluation and policy optimization in reinforcement learning, as special cases of the proposed framework under different instantiations of the integral operator. These instantiations yield learning problems whose inputs are functions, distributions, and dynamics. The corresponding algorithms handle the particular integral operators through efficient and provable stochastic approximation, exploiting the structural properties of the operators. The proposed framework and the derived algorithms are deeply rooted in functional analysis, stochastic optimization, nonparametric methods, and Monte Carlo approximation, and they contribute to several subfields of machine learning, including kernel methods, Bayesian inference, and reinforcement learning.
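To give a flavor of the kind of stochastic approximation described above (a minimal sketch, not the dissertation's exact algorithm): in the kernel-methods instantiation, the functional gradient involves an integral operator over the data distribution, which can be estimated by sampling a data point and a random Fourier feature of the kernel at each step. The sketch below uses a simplified variant with a fixed set of random features; all names, step sizes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Stochastic-approximation sketch for kernel ridge regression:
# random Fourier features approximate the RBF kernel's integral
# operator, and SGD over sampled data points gives an unbiased
# stochastic gradient of the regularized risk. Illustrative only.

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise
n = 500
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

D = 200                      # number of random features
W = rng.normal(size=(D, 1))  # RBF frequencies (unit bandwidth)
b = rng.uniform(0, 2 * np.pi, size=D)

def phi(x):
    """Random Fourier feature map approximating the RBF kernel."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

theta = np.zeros(D)          # weights on the random features
lam, steps = 1e-3, 20000
for t in range(1, steps + 1):
    i = rng.integers(n)                  # sample a data point
    f = phi(X[i])
    resid = f @ theta - y[i]
    grad = resid * f + lam * theta       # stochastic gradient
    theta -= (1.0 / np.sqrt(t)) * grad   # decaying step size

preds = np.array([phi(x) @ theta for x in X])
mse = np.mean((preds - y) ** 2)          # small after convergence
```

Sampling the data point makes the gradient stochastic in the distribution, while the random features make the kernel's integral operator tractable; the decaying step size is what the "provable" guarantees in such schemes typically hinge on.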


We believe the proposed framework is a valuable tool for developing machine learning methods in a principled way and can potentially be applied to many other scenarios.

Additional Information

In Campus Calendar

Graduate Studies

Invited Audience
Faculty/Staff, Public, Undergraduate students
PhD Defense
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Aug 1, 2018 - 1:55pm
  • Last Updated: Aug 1, 2018 - 1:55pm