Convex Relaxation and Estimation of High-Dimensional Matrices
published: May 6, 2011, recorded: April 2011, views: 737
Problems that require estimating high-dimensional matrices from noisy observations arise frequently in statistics and machine learning. Examples include dimensionality reduction methods (e.g., principal components and canonical correlation), collaborative filtering and matrix completion (e.g., Netflix and other recommender systems), multivariate regression, estimation of time-series models, and graphical model learning. When the sample size is less than the matrix dimensions, all of these problems are ill-posed, so that some type of structure is required in order to obtain interesting results.
In recent years, relaxations based on the nuclear norm and other types of convex matrix regularizers have become popular. By framing a broad class of problems as special cases of matrix regression, we present a single theoretical result that provides guarantees on the accuracy of such convex relaxations. Our general result can be specialized to obtain various non-asymptotic bounds, among them sharp rates for noisy forms of matrix completion, matrix compression, and matrix decomposition. In all of these cases, information-theoretic methods can be used to show that our rates are minimax-optimal, and thus cannot be substantially improved upon by any algorithm, regardless of computational complexity.
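As a concrete illustration of the nuclear-norm relaxation described above, the following sketch (a hypothetical NumPy implementation, not code from the lecture) fills in the missing entries of a partially observed matrix by iterative singular-value soft-thresholding — the proximal operator of the nuclear norm. The function names and the choice of regularization weight `tau` are assumptions for illustration only.

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Proximal operator of the nuclear norm:
    soft-threshold the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

def complete_matrix(Y, mask, tau=0.1, n_iters=200):
    """Noisy matrix completion via a basic proximal scheme:
    repeatedly impute the unobserved entries from the current
    estimate, then shrink the singular values. `mask` is True
    where entries of Y are observed."""
    X = np.zeros_like(Y)
    for _ in range(n_iters):
        # Observed entries come from the data Y; missing entries
        # come from the current low-rank estimate X.
        X = singular_value_threshold(np.where(mask, Y, X), tau)
    return X
```

On a low-rank matrix with a few hidden entries, this scheme recovers the missing values up to the small bias introduced by the singular-value shrinkage, consistent with the non-asymptotic error bounds discussed in the talk.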