Optimization for Machine Learning


7 Lectures · Dec 8, 2012

About

Optimization lies at the heart of ML algorithms. Sometimes classical textbook algorithms suffice, but the majority of problems require tailored methods based on a deeper understanding of ML requirements. ML applications and researchers are driving some of the most cutting-edge developments in optimization today. This intimate relationship between optimization and ML is the key motivation for our workshop, which aims to foster discussion, discovery, and dissemination of the state of the art in optimization as relevant to machine learning.

Much interest has recently focused on stochastic methods, which can be used in an online setting and in settings where data sets are extremely large and high accuracy is not required. Many aspects of stochastic gradient methods remain to be explored, for example: algorithmic variants, customization to data-set structure, convergence analysis, sampling techniques, software, the choice of regularization and tradeoff parameters, and distributed and parallel computation. The need for an up-to-date analysis of algorithms for nonconvex problems remains an important practical issue, one that becomes even more pressing as ML tackles increasingly complex mathematical models.
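As a minimal illustration of the stochastic gradient methods discussed above (a sketch for orientation only, not code from any of the talks), the following fits a least-squares model by sampling one data point per iteration with a decaying step size. All names and constants here are illustrative choices:

```python
import numpy as np

# Illustrative stochastic gradient descent for least-squares regression.
# Synthetic data: y = X w_true + small noise.
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
for t in range(1, 20001):
    i = rng.integers(n)                      # sample one data point uniformly
    grad = (X[i] @ w - y[i]) * X[i]          # gradient of 0.5 * (x_i^T w - y_i)^2
    w -= (0.1 / np.sqrt(t)) * grad           # decaying step size, an arbitrary schedule
```

The 1/sqrt(t) step-size schedule is one of the standard choices whose analysis (along with sampling schemes and variants) the paragraph above flags as an active research direction.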

Finally, we do not wish to ignore the moderate-scale setting, where one does have time to wield substantial computational resources. In this setting, high-accuracy solutions and a deep understanding of the lessons contained in the data are needed. Examples valuable to ML researchers include the exploration of genetic and environmental data to identify risk factors for disease, or problems where the amount of observed data is not huge but the mathematical model is complex.

Workshop homepage: http://opt.kyb.tuebingen.mpg.de/

Uploaded videos:

Invited Talks

52:44 · Semidefinite Optimization and Convex Algebraic Geometry
Pablo A. Parrilo · Jan 16, 2013 · 4876 Views · Invited Talk
50:35 · Sharp analysis of low-rank kernel matrix approximations
Francis R. Bach · Jan 16, 2013 · 4029 Views · Invited Talk

Contributed Talks

15:28 · On Convergence Rate of Concave-Convex Procedure
Ian E.H. Yen · Jan 16, 2013 · 3980 Views · Lecture
17:48 · Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
Shai Shalev-Shwartz · Jan 16, 2013 · 3671 Views · Lecture
13:27 · Provable Matrix Completion using Alternating Minimization
Praneeth Netrapalli · Jan 16, 2013 · 4211 Views · Lecture
13:44 · Convergence rates of nested accelerated inexact proximal methods
Silvia Villa · Jan 16, 2013 · 2975 Views · Lecture
17:50 · On the Complexity of Bandit and Derivative-Free Stochastic Convex Optimization
Ohad Shamir · Jan 16, 2013 · 2802 Views · Lecture