Title: On the computational and theoretical analysis of high dimensional precision matrix estimation
The estimation of high-dimensional precision matrices has been a central topic in statistical learning. In this talk, I will summarize algorithms and theoretical analyses for high-dimensional precision matrix estimation. For precision matrix estimation via penalized quadratic loss functions, we develop a very efficient ADMM algorithm. In the high-dimension, low-sample-size setting, the computational complexity of our algorithm is linear in both the sample size and the number of parameters. This complexity is optimal in a certain sense, as it matches the cost of computing the sample covariance matrix itself. For the theoretical analysis, we consider the weak sparsity condition, under which many entries are nearly zero rather than exactly zero. We study a Lasso-type method for high-dimensional precision matrix estimation and derive general error bounds under weak sparsity. A unified framework is established to handle various settings, including heavy-tailed data, nonparanormal data, and matrix-variate data. The new methods achieve the same convergence rates as existing methods and can be implemented efficiently.
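The abstract does not specify the penalized quadratic loss; a common instance in this literature is the D-trace loss, 0.5·tr(ΩSΩ) − tr(Ω) + λ‖Ω‖₁, whose ADMM subproblems have closed-form solutions. The sketch below is an illustrative implementation under that assumption, not the speaker's actual algorithm: the function name `dtrace_admm`, the parameter choices, and the plain-vanilla splitting are all assumptions, and this naive version (which uses a full eigendecomposition) does not attain the linear complexity claimed in the talk; it only shows the optimization structure.

```python
import numpy as np

def dtrace_admm(S, lam, rho=1.0, n_iter=200):
    """Illustrative ADMM for precision matrix estimation with a penalized
    quadratic (D-trace) loss: 0.5*tr(Omega S Omega) - tr(Omega) + lam*|Omega|_1.
    This is a sketch under assumed conventions, not the speaker's method."""
    p = S.shape[0]
    # One eigendecomposition of S gives every Omega-update in closed form.
    evals, V = np.linalg.eigh(S)
    denom = 0.5 * (evals[:, None] + evals[None, :]) + rho
    Z = np.eye(p)            # sparse copy of Omega
    U = np.zeros((p, p))     # scaled dual variable
    for _ in range(n_iter):
        # Omega-update: solve 0.5*(S@Omega + Omega@S) + rho*Omega = I + rho*(Z - U),
        # which is diagonal in the eigenbasis of S.
        B = np.eye(p) + rho * (Z - U)
        Omega = V @ ((V.T @ B @ V) / denom) @ V.T
        # Z-update: elementwise soft-thresholding (the l1 proximal step).
        A = Omega + U
        Z = np.sign(A) * np.maximum(np.abs(A) - lam / rho, 0.0)
        # Dual ascent step.
        U = U + Omega - Z
    return Z
```

As a sanity check, with S equal to the identity the minimizer is (1 − λ) times the identity, which the iteration reaches after a couple of steps.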
Cheng Wang is an Associate Professor in the Department of Statistics, Shanghai Jiao Tong University. His work focuses on the theoretical analysis and applications of high-dimensional covariance matrices. Wang's work has appeared in Science China, Statistica Sinica, Journal of Multivariate Analysis, and Electronic Journal of Statistics, among others. He is also a reviewer for AOS, JASA, and Bernoulli, among other journals. Wang earned his PhD and BS in statistics from the University of Science and Technology of China in 2013 and 2007, respectively.
Date: September 27, 2022 (Tue) 16:00 - 17:00