Many optimization problems arising in the study of high-dimensional inference exhibit an apparent gap between the optimal values, which can be estimated by non-constructive means, and the best values achievable by fast (polynomial-time) algorithms. Recently it became apparent that a potential, and in some cases provable, obstruction to designing algorithms that bridge this gap is a phase transition in the geometry of near-optimal solutions, in particular the presence of a certain Overlap Gap Property (OGP).

In this talk we will discuss this property in the context of the sparse high-dimensional linear regression problem. We show that, on the one hand, in the sampling regime where the known fast methods for this problem are effective (including LASSO, Basis Pursuit, and Orthogonal Matching Pursuit), the space of solutions exhibits monotonicity with respect to proximity to the ground-truth regression vector, and no local optima exist apart from the ground truth. On the other hand, once the number of samples is asymptotically in the regime where the known methods fail, we show that this monotonicity is lost and the model exhibits the OGP. In the context of the regression problem this means that every solution with small mean squared error is either fairly close to the ground truth or very far from it, with no middle ground.
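As a toy illustration of the favorable sampling regime described above, the following sketch solves a small noiseless sparse regression instance with LASSO via iterative soft-thresholding (ISTA, one standard way to compute the LASSO solution; the dimensions, sparsity level, and regularization weight are arbitrary choices for illustration, not parameters from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 200, 5                     # samples, ambient dimension, sparsity
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[:k] = 1.0                            # ground-truth k-sparse regression vector
y = X @ beta                              # noiseless observations

lam = 0.01                                # LASSO regularization weight
L = np.linalg.norm(X, 2) ** 2             # Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(2000):
    grad = X.T @ (X @ b - y)              # gradient of the least-squares term
    z = b - grad / L
    b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding

# In this regime (n comfortably above the recovery threshold for k-sparse
# signals), the LASSO estimate lands near the ground truth.
print(np.linalg.norm(b - beta))
```

With the sample size well above the sparsity level, the estimate recovers the support of the ground truth and is close to it in Euclidean norm, consistent with the monotone, benign landscape in this regime; shrinking n toward the failure regime degrades the recovery.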

While for most other problems the presence of the OGP indicates likely algorithmic hardness, surprisingly, for the regression problem this is not quite the case. In particular, in the special case of noiseless or very-low-noise regression, powerful algorithms based on the so-called Lattice Basis Reduction technique, which arises in the field of cryptography, can be used to recover the underlying regression vector tractably.

Joint work with Ilias Zadik (MIT).
