Catholic University of Louvain (UCL), Belgium
In this talk we describe new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we suggest applying an optimization technique based on random partial updates of the decision variables. For these methods, we prove global estimates for the rate of convergence. Surprisingly, for certain classes of objective functions, our results are better than the standard worst-case bounds for deterministic algorithms. We present constrained and unconstrained versions of the method, as well as its accelerated variant. Our numerical tests confirm the high efficiency of this technique on problems of very large size.
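The key idea above is to update only a randomly chosen block of decision variables per iteration, avoiding full-dimensional vector operations. As a minimal illustration (not the paper's exact method or step-size rules), the sketch below applies random coordinate descent to a convex quadratic, stepping along one coordinate at a time with the coordinate-wise Lipschitz constant `L_i = A[i, i]`; all names and the test problem are invented for this example.

```python
import numpy as np

def random_coordinate_descent(A, b, x0, iters=5000, seed=0):
    """Minimize f(x) = 0.5 * x^T A x - b^T x (A symmetric positive definite)
    by updating one randomly chosen coordinate per iteration.

    Each step uses the coordinate-wise Lipschitz constant L_i = A[i, i],
    so only one row of A is touched per iteration -- no full-dimensional
    matrix-vector products are ever formed.
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    n = len(x)
    for _ in range(iters):
        i = rng.integers(n)
        # Partial derivative of f along coordinate i: O(n) work, not O(n^2).
        g_i = A[i] @ x - b[i]
        # Gradient step with step size 1 / L_i along coordinate i only.
        x[i] -= g_i / A[i, i]
    return x

# Small demo: the iterates should approach the solution of A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = random_coordinate_descent(A, b, np.zeros(2))
```

For a quadratic, the `1 / L_i` step is exact minimization along coordinate `i`, which is why each iteration is so cheap; the convergence-rate guarantees discussed in the talk concern the expected decrease of the objective under this random coordinate choice.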
The paper can be downloaded as CORE Discussion Paper 2010/2.
- Workflow Status: Published
- Created By: Mike Alberghini
- Created: 12/20/2012
- Modified By: Fletcher Moore
- Modified: 10/07/2016