PhD Proposal by Bo Dai

Event Details
  • Date/Time:
    • Wednesday, December 20, 2017
      8:30 am - 10:30 am
  • Location: KACB 1315
Summaries

Summary Sentence: Learning on Functions via Stochastic Optimization


Title: Learning on Functions via Stochastic Optimization

 

Bo Dai

School of Computational Science and Engineering

College of Computing

Georgia Institute of Technology

 

Date: Wednesday, December 20, 2017

Time: 8:30 AM to 10:30 AM EST

Location: KACB 1315

 

Committee

-------------

 

Dr. Le Song (Advisor), School of Computational Science and Engineering, Georgia Institute of Technology

Dr. Hongyuan Zha, School of Computational Science and Engineering, Georgia Institute of Technology

Dr. Byron Boots, School of Interactive Computing, Georgia Institute of Technology

Dr. Guanghui Lan, H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology

Dr. Arthur Gretton, Gatsby Computational Neuroscience Unit, University College London

 

Abstract

-------------

Machine learning has recently witnessed revolutionary success in a wide spectrum of domains. Most of these applications involve learning with complex inputs and/or outputs, which could be graphs, functions, distributions, and even dynamics. The success of these machine learning applications often requires at least two factors: i) the exploitation of structure information in learning models, and ii) the utilization of huge amounts of data. However, from an optimization point of view, the structure information corresponds to delicate conditions, while the huge amount of data requires algorithms to be efficient and scalable. Integrating both parts can be very challenging, from both computational and theoretical perspectives.

 

In this dissertation, we mainly focus on large-scale learning-on-functions problems, which include kernel methods, where the inputs are functions; Bayesian inference, where the inputs are log-likelihoods; and learning with invariances and reinforcement learning, where the inputs are dynamics. By exploiting the special structure of these problems with stochastic optimization in function spaces, we develop principled and practical algorithms for each of them. Moreover, the new perspective sheds light on existing open problems in these areas.
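
To make the idea of stochastic optimization in a function space concrete, the sketch below trains a kernel regressor by sampling both mini-batches of data and random Fourier features that approximate an RBF kernel, so each update is a stochastic gradient step on a function represented through its feature weights. The setup, parameter values, and names are illustrative assumptions only; this is a minimal sketch of one well-known instance of the general idea, not the dissertation's exact algorithm.

import numpy as np

# Illustrative sketch (assumed setup, not the dissertation's method):
# stochastic gradient descent for kernel ridge regression, with the
# RBF kernel approximated by random Fourier features.

rng = np.random.default_rng(0)

# Synthetic regression data: y = sin(x) + noise
n, d = 5000, 1
X = rng.uniform(-3, 3, size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Random Fourier features for an RBF kernel with bandwidth sigma
D, sigma = 512, 1.0
W = rng.standard_normal((d, D)) / sigma
b = rng.uniform(0, 2 * np.pi, D)

def features(x):
    # Map inputs to D random features whose inner products approximate the kernel.
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

theta = np.zeros(D)                # weights on the random features
lam, batch, steps = 1e-3, 64, 2000

for t in range(1, steps + 1):
    idx = rng.integers(0, n, size=batch)        # sample a mini-batch of data
    Phi = features(X[idx])                      # evaluate the sampled features
    resid = Phi @ theta - y[idx]
    grad = Phi.T @ resid / batch + lam * theta  # stochastic gradient of the ridge loss
    theta -= (1.0 / np.sqrt(t)) * grad          # decaying step size

# Held-out check of the learned function f(x) = features(x) @ theta
X_test = rng.uniform(-3, 3, size=(500, d))
mse = np.mean((features(X_test) @ theta - np.sin(X_test[:, 0])) ** 2)
print(f"test MSE: {mse:.4f}")

Because the function is represented by a finite set of sampled features, the same pattern scales to large datasets: each step touches only a mini-batch of data and a manageable feature representation.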

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Public, Graduate students
Categories
Other/Miscellaneous
Keywords
PhD proposal