Model Selection/Estimation: Multiple Reproducing Kernel Hilbert Spaces
In this talk, we consider the problem of learning a target function that belongs to the linear span of a large number of reproducing kernel Hilbert spaces. Such a problem arises naturally in many practical situations, with the ANOVA model, the additive model, and multiple kernel learning as the most well-known and important examples. We investigate approaches based on l1-type complexity regularization and on the nonnegative garrote, respectively. We show that both procedures can be computed efficiently and that the nonnegative garrote can be more favorable at times.
We also study their theoretical properties from both the variable selection and the estimation perspectives. We establish several probabilistic inequalities providing bounds on the excess risk and the L2 error that depend on the sparsity of the problem.
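As a rough illustration of the nonnegative garrote idea in this multiple-kernel setting, the sketch below first fits an initial componentwise kernel ridge estimate in each candidate RKHS and then rescales the fitted components by nonnegative factors chosen under a sparsity-inducing linear penalty. All data, kernel choices, and tuning values here are hypothetical assumptions for the sketch, not details from the talk.

```python
import numpy as np

# Hypothetical toy data: only the first two coordinates matter (sparse truth).
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

def rbf_gram(x, gamma=1.0):
    """Gram matrix of a 1-D Gaussian kernel: one candidate RKHS per coordinate."""
    d = x[:, None] - x[None, :]
    return np.exp(-gamma * d ** 2)

# Step 1: initial componentwise kernel ridge estimates f_j, one per RKHS.
ridge = 0.1
F = np.empty((n, p))  # column j holds the fitted values of f_j
for j in range(p):
    K = rbf_gram(X[:, j])
    alpha = np.linalg.solve(K + ridge * np.eye(n), y)
    F[:, j] = K @ alpha

# Step 2: nonnegative garrote -- rescale component j by c_j >= 0, minimizing
#   0.5 * ||y - F c||^2 + lam * sum(c)
# via projected gradient descent (projection onto the nonnegative orthant).
lam = 5.0
obj = lambda c: 0.5 * np.sum((y - F @ c) ** 2) + lam * np.sum(c)
step = 1.0 / np.linalg.norm(F, 2) ** 2  # 1 / largest eigenvalue of F^T F
c = np.zeros(p)
for _ in range(5000):
    grad = F.T @ (F @ c - y) + lam
    c = np.maximum(c - step * grad, 0.0)

print("garrote scalings:", np.round(c, 3))  # c_j = 0 drops the j-th kernel
```

Components whose scaling factor is driven exactly to zero are removed from the fit, which is how the garrote performs variable selection while remaining a simple nonnegatively constrained quadratic problem.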
(Parts of the talk are based on joint work with Yi Lin and Vladimir Koltchinskii, respectively.)