SCS & CSE Recruiting Seminar: Chi Jin


TITLE: Machine Learning: Why Do Simple Algorithms Work So Well?

ABSTRACT:

While state-of-the-art machine learning models are deep, large-scale, sequential, and highly nonconvex, the backbone of modern learning algorithms consists of simple methods such as stochastic gradient descent or, in the case of reinforcement learning tasks, Q-learning. A basic question endures: why do simple algorithms work so well even in these challenging settings?
 
This talk focuses on two fundamental problems: (1) in nonconvex optimization, can gradient descent escape saddle points efficiently? (2) in reinforcement learning, is Q-learning sample efficient? We will provide the first line of provably positive answers to both questions. In particular, we will show that simple modifications to these classical algorithms guarantee significantly better properties, which explains the underlying mechanisms behind their favorable performance in practice.
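As one illustration of the kind of modification the abstract alludes to for question (1), the saddle-point literature studies gradient descent that adds a small isotropic perturbation whenever the gradient becomes small; near a strict saddle, the negative-curvature direction amplifies the random offset and the iterate escapes. The sketch below is a minimal, assumed version of that idea, not the talk's exact algorithm; the objective, step size, thresholds, and cooldown are placeholder choices.

    import numpy as np

    def perturbed_gd(grad_f, x0, eta=0.01, grad_thresh=1e-3,
                     radius=1e-2, cooldown=10, n_iters=2000, seed=0):
        """Gradient descent plus occasional isotropic perturbations.

        When the gradient norm falls below grad_thresh (a candidate
        saddle point), add a small offset drawn uniformly from a ball;
        near a strict saddle, the negative-curvature direction then
        amplifies that offset and the iterate escapes."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        last = -cooldown  # allow a perturbation at the very first step
        for t in range(n_iters):
            g = grad_f(x)
            if np.linalg.norm(g) < grad_thresh and t - last >= cooldown:
                xi = rng.normal(size=x.shape)
                xi *= radius * rng.random() ** (1.0 / x.size) / np.linalg.norm(xi)
                x = x + xi
                last = t
            else:
                x = x - eta * g
        return x

    # f(x, y) = x^2 + y^4/4 - y^2/2 has a strict saddle at the origin
    # and minima at (0, +/-1); plain GD started at 0 stays stuck there.
    grad = lambda v: np.array([2.0 * v[0], v[1] ** 3 - v[1]])
    print(perturbed_gd(grad, np.zeros(2)))  # lands near (0, +/-1)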

 
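For question (2), a modification studied in the sample-efficiency literature replaces epsilon-greedy randomness with an optimism bonus that shrinks with visit counts. The sketch below is again an assumed illustration, not the talk's algorithm: the toy env interface (reset() returning an integer state, step(a) returning (next_state, reward, done)), the 1/sqrt(count) learning rate, and the bonus scale are all placeholder choices.

    import numpy as np

    def q_learning_ucb(env, n_states, n_actions, episodes=500,
                       horizon=50, gamma=0.99, bonus_c=1.0, seed=0):
        """Tabular Q-learning driven by a count-based optimism bonus.

        Rather than exploring via epsilon-greedy randomness, each update
        adds a bonus proportional to 1/sqrt(visit count), so rarely tried
        (state, action) pairs look optimistically valuable and the greedy
        policy is steered toward them deliberately."""
        rng = np.random.default_rng(seed)
        Q = np.full((n_states, n_actions), float(horizon))  # optimistic init
        N = np.zeros((n_states, n_actions), dtype=int)
        for _ in range(episodes):
            s = env.reset()
            for _ in range(horizon):
                # Act greedily w.r.t. the optimistic Q, breaking ties at random.
                a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
                s_next, r, done = env.step(a)
                N[s, a] += 1
                alpha = 1.0 / np.sqrt(N[s, a])      # decaying learning rate
                bonus = bonus_c / np.sqrt(N[s, a])  # exploration bonus
                target = r + bonus + gamma * Q[s_next].max()
                Q[s, a] += alpha * (target - Q[s, a])
                s = s_next
                if done:
                    break
        return Q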

BIO:

Chi Jin is a Ph.D. candidate in computer science at UC Berkeley, advised by Michael I. Jordan. He received a B.S. in Physics from Peking University. His research interests lie in machine learning, statistics, and optimization; his Ph.D. work focuses primarily on nonconvex optimization and reinforcement learning.
 

