**Title:** Robust Regression via Hard Thresholding

**Speaker:** Kush Bhatia, Microsoft Research Bangalore

**Abstract:**
In this talk, we explore the problem of Robust Least Squares Regression
(RLSR) where several response variables can be adversarially corrupted.
More specifically, for a data matrix X \in R^{p x n} and an underlying
model w*, the response vector is generated as y = X'w* + b, where b \in
R^n is the corruption vector supported on at most C·n coordinates.
Existing exact recovery results for RLSR focus solely on L1-penalty
based convex formulations and impose relatively strict model assumptions
such as requiring the corruptions b to be selected independently of X.
We propose a simple hard-thresholding algorithm called TORRENT which,
under mild conditions on X, can recover w* exactly even if b corrupts
the response variables in an adversarial manner, i.e. both the support
and entries of b are selected adversarially after observing X and w*.
Our results hold under deterministic assumptions which are satisfied if
X is sampled from any sub-Gaussian distribution. Finally, unlike
existing results that apply only to a fixed w* generated independently
of X, our results are universal and hold for any w* \in R^p. We also
propose gradient descent-based extensions of TORRENT that can scale
efficiently to large-scale problems, such as high-dimensional sparse
recovery, and prove similar recovery guarantees for these extensions.
This is joint work with Prateek Jain and Purushottam Kar.
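The alternation the abstract describes — fit a model on the points currently believed clean, then re-select the clean set by hard-thresholding residuals — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' reference implementation; the function name, iteration count, and the row-major (n × p) data convention are all choices made here for clarity.

```python
import numpy as np

def torrent_fc_sketch(X, y, beta, n_iters=20):
    """Sketch of a TORRENT-style fully-corrective hard-thresholding loop.

    X : (n, p) design matrix (rows are samples; the abstract uses the
        transposed p x n convention)
    y : (n,) possibly-corrupted response vector
    beta : assumed upper bound on the fraction of corrupted responses
    """
    n, p = X.shape
    k = n - int(beta * n)      # number of points treated as clean
    S = np.arange(n)           # active set: start with all points
    w = np.zeros(p)
    for _ in range(n_iters):
        # Fully-corrective step: ordinary least squares on the active set.
        w, *_ = np.linalg.lstsq(X[S], y[S], rcond=None)
        # Hard-thresholding step: keep the k points with the smallest
        # absolute residuals as the new active set.
        r = np.abs(y - X @ w)
        S = np.argsort(r)[:k]
    return w
```

On well-conditioned (e.g. sub-Gaussian) data with a sufficiently small corrupted fraction, the corrupted points incur large residuals, get thresholded out of the active set, and the least-squares step then recovers w* exactly on the remaining clean points.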

**Bio:**
Kush Bhatia is currently a Research Fellow in the Machine Learning and
Optimization group at Microsoft Research, India. Prior to this, he
completed his undergraduate studies in Computer Science at IIT Delhi,
where he was advised by Prof. Parag Singla for his
undergraduate thesis. His interests are in developing and analyzing
algorithms for large-scale machine learning and optimization problems.