
Machine Learning Center Seminar: Vidya Muthukumar – Surprises in high-dimensional (overparameterized) linear classification


Abstract: Seemingly counter-intuitive phenomena in deep neural networks have prompted a recent re-investigation of classical machine learning methods, such as linear models and kernel methods. Of particular focus are sufficiently high-dimensional setups in which interpolation of the training data is possible. In this talk, we will first briefly review recent works showing that zero regularization, or fitting of noise, need not be harmful in regression tasks. Then, we will use this insight to uncover two new surprises for high-dimensional linear classification:

  • least-2-norm interpolation can classify consistently even when the corresponding regression task fails, and  
  • the support-vector-machine and least-2-norm interpolation solutions exactly coincide in sufficiently high-dimensional models (see the numerical sketch below).

 

Taken together, these findings imply that the (linear/kernel) SVM can generalize well in settings beyond those predicted by training-data-dependent complexity measures. Time permitting, we will also discuss preliminary implications of these results for adversarial robustness, and the influence of the choice of training loss function in the overparameterized regime.
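A minimal numerical sketch of the second surprise (not taken from the talk; the dimensions, the isotropic Gaussian data model, and the scikit-learn calls are illustrative assumptions): when the number of features greatly exceeds the number of samples, every training point typically sits exactly on the margin, so the hard-margin SVM weight vector coincides with the minimum-2-norm interpolator of the ±1 labels.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n, d = 20, 5000                        # far more features than samples (illustrative choice)
    X = rng.standard_normal((n, d))        # isotropic Gaussian features (assumption, not from the talk)
    y = rng.choice([-1.0, 1.0], size=n)    # arbitrary +/-1 labels

    # Minimum-2-norm interpolator of the labels: w = X^T (X X^T)^{-1} y
    w_interp = X.T @ np.linalg.solve(X @ X.T, y)

    # Approximately hard-margin linear SVM: hinge loss, very large C, no intercept
    svm = LinearSVC(loss="hinge", C=1e5, fit_intercept=False, dual=True,
                    tol=1e-10, max_iter=500000).fit(X, y)
    w_svm = svm.coef_.ravel()

    # The interpolator satisfies y_i * <x_i, w> = 1 for every i, so if the SVM also
    # places every point exactly on the margin, the two weight vectors are identical.
    margins = y * (X @ w_interp)
    rel_diff = np.linalg.norm(w_svm - w_interp) / np.linalg.norm(w_interp)
    print("interpolator margins (min, max):", margins.min(), margins.max())
    print("relative difference between SVM and interpolator:", rel_diff)

Under this setup the printed relative difference should be close to zero whenever every training point ends up a support vector; shrinking d relative to n makes the two solutions diverge.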

 

This is joint work with Misha Belkin, Daniel Hsu, Adhyyan Narang, Anant Sahai, Vignesh Subramanian, Christos Thrampoulidis, Ke Wang and Ji Xu. 

 

Speaker Info: Vidya Muthukumar is an Assistant Professor in the Schools of Electrical and Computer Engineering and Industrial and Systems Engineering at the Georgia Institute of Technology. Her broad interests are in game theory, online learning, and statistical learning. She is particularly interested in designing learning algorithms that provably adapt in strategic environments, fundamental properties of overparameterized models, and fairness, accountability, and transparency in machine learning.

Vidya received her PhD in Electrical Engineering and Computer Sciences from the University of California, Berkeley. She is the recipient of the Adobe Data Science Research Award, the Simons-Berkeley Research Fellowship (for the Fall 2020 program on "Theory of Reinforcement Learning"), the IBM Science for Social Good Fellowship, and a Georgia Tech Class of 1969 Teaching Fellowship for the 2021-2022 academic year.
