
Distinguished Lecture with Tong Zhang, executive director of Tencent AI Lab!


Abstract
In classical optimization, one needs to calculate a full (deterministic) gradient of the objective function at each step, which can be extremely costly for modern big data machine learning applications. A remedy is to approximate each full gradient with a random sample drawn from the data. This reduces the computational cost of each step but introduces statistical variance.
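To make the trade-off concrete, here is a minimal sketch in Python/NumPy; the least-squares objective, problem sizes, and step sizes are illustrative assumptions, not details from the talk:

```python
import numpy as np

# Toy least-squares objective f(w) = (1/2n) * ||X w - y||^2 (assumed for illustration).
rng = np.random.default_rng(0)
n, d = 10_000, 50
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def full_gradient(w):
    # Deterministic gradient: touches all n examples, O(n*d) per step.
    return X.T @ (X @ w - y) / n

def stochastic_gradient(w):
    # Unbiased one-sample estimate: O(d) per step, but noisy around
    # the full gradient -- this noise is the "statistical variance".
    i = rng.integers(n)
    return X[i] * (X[i] @ w - y[i])

w = np.zeros(d)
w -= 0.01 * full_gradient(w)        # one expensive exact step
w -= 0.01 * stochastic_gradient(w)  # one cheap noisy step
```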

In this talk, I will present some recent progress on applying variance reduction techniques, previously developed for statistical Monte Carlo methods, to this new problem setting. The resulting stochastic optimization methods are highly effective for practical big data problems in machine learning, and they come with strong theoretical guarantees that significantly improve on the computational lower bounds of classical optimization algorithms.
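The abstract does not name a specific method, but the collaborator list points to variance-reduced algorithms such as SVRG (Johnson and Zhang, 2013). The sketch below follows that scheme, reusing the toy setup above; the epoch count, inner-loop length m, and step size are assumptions for illustration:

```python
def svrg(w, n_epochs=10, m=None, lr=1e-3):
    # SVRG sketch: each epoch pays for one full gradient at a snapshot,
    # then takes m cheap corrected steps.
    m = m or n  # inner-loop length; m = n is a common choice
    for _ in range(n_epochs):
        w_snap = w.copy()
        g_snap = full_gradient(w_snap)  # one full pass over the data
        for _ in range(m):
            i = rng.integers(n)
            g_i = X[i] * (X[i] @ w - y[i])            # gradient at current w
            g_i_snap = X[i] * (X[i] @ w_snap - y[i])  # same sample at snapshot
            # Control-variate update: unbiased, with variance that
            # shrinks as w approaches the snapshot.
            w = w - lr * (g_i - g_i_snap + g_snap)
    return w

w_hat = svrg(np.zeros(d))
```

Because the corrected estimate g_i - g_i_snap + g_snap has the same expectation as the full gradient but vanishing variance near the optimum, the method can use a constant step size and converge linearly on strongly convex problems, which is the kind of guarantee alluded to above.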

Collaborators: Rie Johnson, Shai Shalev-Shwartz, Jialei Wang

Biography: 
Tong Zhang is a machine learning researcher and the executive director of Tencent AI Lab. Previously, he was a professor at Rutgers University and worked at IBM, Yahoo, and Baidu.

Tong Zhang's research interests include machine learning algorithms and theory, statistical methods for big data, and their applications. His research has been supported by grants from funding agencies such as the NSF and NIH. He is a Fellow of the ASA and the IMS, and he has served on the editorial boards of leading machine learning journals and the program committees of top machine learning conferences.

Tong Zhang received a B.A. in mathematics and computer science from Cornell University and a Ph.D. in computer science from Stanford University.

 
