Consider the problem of minimizing a convex objective function defined on a possibly infinite-dimensional linear vector space spanned by a possibly infinite set of library vectors.

In many engineering applications, we seek a vector with near-optimal objective value that can be sparsely represented as a linear combination of the library vectors. In the literature, greedy algorithms are frequently used to obtain such a sparse representation: at each step, we add one additional library vector so as to approximately minimize the objective function.
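To fix ideas, here is a minimal sketch of the generic greedy step just described, for a finite library in a finite-dimensional space. The function names, the gradient-based selection rule, and the crude backtracking line search are all illustrative assumptions, not the specific algorithm analyzed in the talk:

```python
import numpy as np

def greedy_sparse_minimize(f, grad, library, n_steps):
    """Greedy selection sketch: at each step, pick the library vector whose
    direction best decreases f (most negative inner product with the
    gradient), then take a line-search step along it."""
    x = np.zeros(library.shape[1])
    support = []                        # indices of selected library vectors
    for _ in range(n_steps):
        g = grad(x)
        scores = library @ g            # correlation of each library vector with the gradient
        j = int(np.argmin(scores))      # steepest-descent library direction
        support.append(j)
        d = library[j]
        # crude halving line search along d (illustrative only)
        t, best, step = 0.0, f(x), 1.0
        while step > 1e-8:
            if f(x + step * d) < best:
                t, best = step, f(x + step * d)
            step *= 0.5
        x = x + t * d
    return x, support
```

The iterate after k steps is, by construction, a linear combination of at most k library vectors, which is exactly the sparse representation the abstract refers to.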

This problem is typically ill-posed when regarded as a sparse parameter estimation problem. Consequently, simple applications of techniques from traditional numerical convergence analysis lead to unreasonable assumptions that cannot be satisfied in practice. In this talk, I will present a new convergence analysis for these methods that applies in much greater generality, but predicts a slower worst-case convergence rate than the typical (linear or faster) rates in optimization.

In particular, I will focus on the following variants of the problem, each with a slightly different greedy algorithm:

1. Convergence rate when the optimization is constrained to the convex hull of the library vectors.

2. Convergence analysis when the optimization is performed on the whole linear vector space.

3. The near optimal sparse approximation property of a greedy algorithm: conditions and approximation rate.
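For the first variant, a natural greedy scheme of this form is a Frank-Wolfe-style iteration, whose classical worst-case rate is O(1/k) rather than linear, consistent with the slower rates mentioned above. Below is a minimal sketch under the assumption of a finite library, so the linear minimization over the hull reduces to a scan of its vertices; this is an illustration, not the talk's exact method:

```python
import numpy as np

def frank_wolfe(grad, library, n_steps):
    """Frank-Wolfe-style sketch for the constrained variant: every iterate
    stays in the convex hull of the library vectors, and each step moves
    toward the library vector minimizing the linearized objective."""
    x = library[0].copy()                  # start at a vertex of the hull
    for k in range(n_steps):
        g = grad(x)
        j = int(np.argmin(library @ g))    # linear minimization over the hull's vertices
        gamma = 2.0 / (k + 2.0)            # standard step size, giving an O(1/k) rate
        x = (1 - gamma) * x + gamma * library[j]
    return x
```

Because each step mixes in one new library vector, the iterate after k steps is a convex combination of at most k + 1 library vectors, tying the convergence rate directly to the sparsity of the representation.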