Guaranteed Non-convex Learning Algorithms through Tensor Factorization
published: May 27, 2016, recorded: May 2016, views: 4740
Modern machine learning involves massive datasets of text, images, videos, biological data, and so on. Most learning tasks can be framed as optimization problems, which typically turn out to be non-convex and NP-hard to solve. This hardness barrier can be overcome by: (i) focusing on conditions which make learning tractable, (ii) replacing the given optimization objective with a better-behaved one, and (iii) exploiting non-obvious connections that abound in learning problems.
I will discuss the above in the context of: (i) unsupervised learning of latent variable models and (ii) training multi-layer neural networks, through a novel framework involving spectral decomposition of moment matrices and tensors. Tensors are rich structures that can encode higher order relationships in data. Despite being non-convex, tensor decomposition can be solved optimally using simple iterative algorithms under mild conditions. In practice, tensor methods yield enormous gains both in running times and learning accuracy over traditional methods for training probabilistic models such as variational inference. These positive results demonstrate that many challenging learning tasks can be solved efficiently, both in theory and in practice.
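To make the "simple iterative algorithms" concrete, the following is a minimal sketch (not from the lecture itself) of the tensor power method with deflation, applied to a symmetric third-order tensor with orthonormal components. All names and parameters here are illustrative assumptions; the lecture's framework covers far more general settings.

```python
import numpy as np

def tensor_power_method(T, n_components, n_iters=100, seed=0):
    """Recover the components of an orthogonally decomposable
    symmetric 3-tensor via power iteration plus deflation.
    Illustrative sketch; assumes orthonormal components and
    positive weights."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    T = T.copy()
    weights, vectors = [], []
    for _ in range(n_components):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)
        for _ in range(n_iters):
            # Power update: v <- T(I, v, v) / ||T(I, v, v)||
            v = np.einsum('ijk,j,k->i', T, v, v)
            v /= np.linalg.norm(v)
        # Recovered weight is the "tensor eigenvalue" T(v, v, v)
        lam = np.einsum('ijk,i,j,k->', T, v, v, v)
        weights.append(lam)
        vectors.append(v)
        # Deflate: subtract the recovered rank-1 term
        T -= lam * np.einsum('i,j,k->ijk', v, v, v)
    return np.array(weights), np.array(vectors)

# Toy example: build T = sum_r w_r * a_r (x) a_r (x) a_r
# with orthonormal a_r, then recover the (w_r, a_r) pairs.
d, k = 5, 3
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((d, d)))
A = Q[:, :k]                     # orthonormal columns a_1..a_k
w = np.array([3.0, 2.0, 1.0])    # ground-truth weights
T = np.einsum('r,ir,jr,kr->ijk', w, A, A, A)

w_hat, V = tensor_power_method(T, k)
```

Under the orthogonality assumption, each power iteration converges to one component despite the non-convexity of the decomposition problem, which is the kind of guarantee the abstract refers to; deflation then removes that component so the next run finds another.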
Download slides: iclr2016_anandkumar_nonconvex_learning_01.pdf (3.7 MB)