Large-scale stochastic approximation procedures

Adaptive Gain Choice for Large-Scale Stochastic Approximation Procedures

Prof. Anatoli Iouditski

University Joseph Fourier, Grenoble, France

The subject of this talk is a complexity analysis of a family of large-scale stochastic approximation algorithms. The methods belong to the family of primal-dual descent algorithms introduced by Yu. Nesterov. We propose an adaptive choice of the gain sequences of the algorithm which makes it possible to attain the optimal rates of convergence on wide classes of problems. We show, for instance, that if it is known a priori that the objective function is Lipschitz, the proposed algorithm attains the minimax rate of convergence. Further, if the objective belongs to a "better class" of smooth functions with Lipschitz-continuous gradient, the proposed algorithm also attains the minimax rate.
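To illustrate the general idea of an adaptive gain choice in stochastic approximation, here is a minimal sketch (not the speaker's algorithm) of a stochastic gradient method with an AdaGrad-style adaptive step size: the gain shrinks with the accumulated squared gradients, so the method adapts to the observed gradient magnitudes rather than requiring prior knowledge of the smoothness class. The function names and parameters below are illustrative assumptions.

```python
import numpy as np

def adaptive_sgd(grad_fn, x0, n_steps=1000, eta=1.0, eps=1e-8, seed=0):
    """Stochastic gradient descent with an adaptive (AdaGrad-style) gain.

    grad_fn(x, rng) returns a noisy gradient estimate at x. The per-coordinate
    step size eta / sqrt(sum of squared past gradients) adapts automatically,
    and a running (Polyak-Ruppert style) average of the iterates is returned.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    g2_sum = np.zeros_like(x)              # accumulated squared gradients
    avg = np.zeros_like(x)                 # running average of iterates
    for t in range(1, n_steps + 1):
        g = grad_fn(x, rng)                # noisy gradient estimate
        g2_sum += g * g
        x -= eta * g / (np.sqrt(g2_sum) + eps)  # adaptive gain per coordinate
        avg += (x - avg) / t               # incremental averaging
    return avg

# Toy example: minimize f(x) = ||x||^2 / 2 from noisy gradients x + noise.
def noisy_grad(x, rng):
    return x + 0.1 * rng.standard_normal(x.shape)

x_hat = adaptive_sgd(noisy_grad, x0=np.ones(5))
```

In this toy problem the averaged iterate `x_hat` lands close to the minimizer at the origin, despite the step-size schedule never being tuned to the problem's Lipschitz constant.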

Target Audience