# COV883: Special Module in Theoretical Computer Science

Topic: Matrix Concentration Inequalities

**II semester**: 2018-19

**Instructor**: Amitabha Bagchi

**Next class**: 3PM, Wed 13 Mar 2019, SIT 006.

### Overview

Concentration inequalities for sums of random variables (Chernoff bounds and others) have played a key role in the analysis of randomized algorithms. Recent developments in the theory of random matrices have shown that similar concentration results can be proved for the spectral norm of sums of independent random matrices. Among many other things, these results have major consequences for a number of applications at the heart of modern data science, including randomized matrix multiplication, subsampling, and sparsification. In this course we will study the basic workings of these inequalities in depth and also discuss some applications.
### Topics

Probability with matrices, matrix moments and cumulants, the matrix Laplace transform method, Lieb's theorem and the subadditivity of the matrix cumulant generating function, proof sketch of Lieb's theorem, matrix Chernoff bounds and applications, matrix Bernstein's inequality and applications.
**Background required**: Probability theory, linear algebra.

### Texts

**Note**: This class will closely follow the presentation in [Tropp2015].

- [Tropp2015] Joel A. Tropp. *An Introduction to Matrix Concentration Inequalities*. Foundations and Trends in Machine Learning, Vol. 8, No. 1-2, 2015. Also available as arXiv:1501.01571 [math.PR].

### Other readings

- Jonathan Novak. *Three lectures on free probability*. arXiv:1205.2097 [math.CO]. See Lecture 1 for a connection between moments, cumulants, and the number of connected graphs on n vertices.

### Lecture record

**09.01.2019**. (Duration: 1.75 hrs). History and background of random matrices. Refresher on vector spaces and matrices, and basic questions of random matrix theory as presented by Tropp.
**10.01.2019**. (Duration: 1 hr). The sample covariance estimator, the scalar Bernstein inequality with proof.
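
As a numerical companion to this lecture, here is a minimal sketch of the sample covariance estimator and its spectral-norm error. The dimensions, the seed, and the Gaussian model are illustrative choices of ours, not taken from the course.

```python
import numpy as np

# Minimal sketch: form the sample covariance estimator for a
# zero-mean Gaussian with known covariance Sigma, and measure
# the spectral-norm error ||Sigma_hat - Sigma||.
rng = np.random.default_rng(0)

d, n = 5, 2000
A = rng.standard_normal((d, d))
Sigma = A @ A.T / d                      # a PSD "true" covariance

X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
Sigma_hat = X.T @ X / n                  # sample covariance (mean known to be 0)

# Spectral-norm error: the quantity Bernstein-type bounds control.
err = np.linalg.norm(Sigma_hat - Sigma, ord=2)
print(round(err, 3))
```

With n much larger than d, the error is small relative to the spectral norm of Sigma, which is what the error bound proved later in the course quantifies.
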
**16.01.2019**. (Duration: 2 hrs). Statement of the matrix Bernstein inequality. Error bound for the sample covariance estimator. Refresher of basic matrix theory, the positive semidefinite partial order, standard matrix functions and some properties, monotonicity of the trace exponential function.
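
For reference, the statement covered here has the following shape (the rectangular form from [Tropp2015]; the notation is ours and may differ from the lecture):

```latex
% Matrix Bernstein: let $S_1,\dots,S_n$ be independent random
% $d_1 \times d_2$ matrices with $\mathbb{E} S_k = 0$ and
% $\|S_k\| \le L$. Put $Z = \sum_k S_k$ and define the matrix
% variance statistic
\[
  v(Z) = \max\Bigl\{ \Bigl\| \textstyle\sum_k \mathbb{E}\, S_k S_k^* \Bigr\|,\;
                     \Bigl\| \textstyle\sum_k \mathbb{E}\, S_k^* S_k \Bigr\| \Bigr\}.
\]
% Then, for all $t \ge 0$,
\[
  \mathbb{P}\{\, \|Z\| \ge t \,\} \;\le\; (d_1 + d_2)\,
    \exp\!\Bigl( \frac{-t^2/2}{\,v(Z) + Lt/3\,} \Bigr).
\]
```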
**17.01.2019**. (Duration: 1 hr). Operator monotone functions, singular value decompositions, the spectral norm, random matrices, the matrix variance, the matrix variance statistic.
**23.01.2019**. (Duration: 1.5 hrs). Matrix cumulant generating function, the matrix Laplace transform method, subadditivity of the matrix CGF, master bounds for sums of independent random matrices.
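
The two steps discussed here can be recorded as follows (this is the shape of the argument in [Tropp2015]; notation is ours). First, the matrix Laplace transform method for a random Hermitian matrix $Y$:

```latex
\[
  \mathbb{P}\{\lambda_{\max}(Y) \ge t\}
    \;\le\; \inf_{\theta > 0} \, e^{-\theta t}\,
            \mathbb{E}\,\operatorname{tr} e^{\theta Y}.
\]
% Combining this with the subadditivity of the matrix CGF (which
% rests on Lieb's theorem) gives the master bound for a sum
% $Y = \sum_k X_k$ of independent random Hermitian matrices:
\[
  \mathbb{P}\{\lambda_{\max}(Y) \ge t\}
    \;\le\; \inf_{\theta > 0} \, e^{-\theta t}\,
            \operatorname{tr}\exp\Bigl( \sum_k \log \mathbb{E}\, e^{\theta X_k} \Bigr).
\]
```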
**13.02.2019**. (Duration: 1.5 hrs). Matrix Chernoff bounds with proof. Application: connectivity threshold of the Erdős-Rényi random graph.
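
For reference, the matrix Chernoff bounds have the following form (as in [Tropp2015]; our notation):

```latex
% Let $X_1,\dots,X_n$ be independent random $d \times d$ PSD
% Hermitian matrices with $\lambda_{\max}(X_k) \le L$. Put
% $Y = \sum_k X_k$, $\mu_{\min} = \lambda_{\min}(\mathbb{E} Y)$,
% $\mu_{\max} = \lambda_{\max}(\mathbb{E} Y)$. Then
\[
  \mathbb{P}\{\lambda_{\min}(Y) \le (1-\delta)\mu_{\min}\}
    \;\le\; d \left[ \frac{e^{-\delta}}{(1-\delta)^{1-\delta}} \right]^{\mu_{\min}/L}
  \quad \text{for } \delta \in [0,1),
\]
\[
  \mathbb{P}\{\lambda_{\max}(Y) \ge (1+\delta)\mu_{\max}\}
    \;\le\; d \left[ \frac{e^{\delta}}{(1+\delta)^{1+\delta}} \right]^{\mu_{\max}/L}
  \quad \text{for } \delta \ge 0.
\]
```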
**20.02.2019**. (Duration: 1.5 hrs). Matrix Bernstein bounds with proof. Matrix approximation by random sampling. Application: Randomized sparsification of a matrix.
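
To illustrate the flavour of randomized sparsification, here is a toy unbiased sampling scheme. The particular probabilities below are our own simple choice for illustration; the lecture follows the scheme analysed in [Tropp2015], whose error is controlled via the matrix Bernstein inequality.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparsify(A, s):
    """Unbiased randomized sparsification (illustrative sketch only).

    Entry (i, j) is kept with probability min(1, s * p_ij), where
    p_ij is proportional to |A_ij|, and each kept entry is rescaled
    so that E[S] = A entrywise.
    """
    p = np.abs(A) / np.abs(A).sum()          # sampling probabilities
    keep = np.minimum(1.0, s * p)            # roughly s entries survive
    mask = rng.random(A.shape) < keep
    safe = np.where(keep > 0, keep, 1.0)     # avoid division by zero
    return np.where(mask, A / safe, 0.0)     # unbiased: E[S] = A

A = rng.standard_normal((50, 50))
S = sparsify(A, s=500)
print(np.count_nonzero(S))                   # roughly 500 of the 2500 entries
```

The point of the matrix Bernstein analysis is that the spectral-norm error of such a scheme concentrates, so S approximates A well while being much sparser.
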
**27.02.2019**. (Duration: 1.75 hrs). Proof of Lieb's theorem (up to Sec 8.4.4).
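
The theorem itself, in the form used in [Tropp2015]:

```latex
% Lieb's theorem: for a fixed Hermitian matrix $H$, the map
\[
  A \;\longmapsto\; \operatorname{tr} \exp\bigl( H + \log A \bigr)
\]
% is concave on the cone of positive-definite matrices. This is the
% key ingredient in the subadditivity of the matrix CGF.
```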

### Assignments

**Homework 1**. Give an example of a random vector for which the expectation bound shown in Sec 1.6.3 is asymptotically tight, and one example for which it is not tight. You will have to define simple examples, work out the actual error bound, and compare it with the bound derived in Sec 1.6.3. The homework will count towards the grade for students who are registered. It is due in class on **23.01.2019**.

### Evaluation

Students will be asked to give 30-minute presentations explaining how the basic results are used in various applications.

Amitabha Bagchi