Multi-Task Learning via Matrix Regularization

author: Andreas Argyriou, École Centrale Paris
published: May 6, 2009,   recorded: April 2009,   views: 3594

Download slides: smls09_argyriou_mtlvmr_01.pdf (468.3 KB)



We present a method for learning representations shared across multiple tasks. Multi-task learning has recently become important in applications such as collaborative filtering, object detection, database integration, and signal processing. Our method addresses the problem of learning a low-dimensional subspace on which the task regression vectors lie. This non-convex problem can be relaxed to a trace (nuclear) norm regularization problem, which we solve with an alternating minimization algorithm; this algorithmic scheme can be shown to always converge to an optimal solution. Moreover, the method extends easily to nonlinear feature maps as inputs via reproducing kernels. This is a consequence of optimality conditions known as representer theorems, for which we give a necessary and sufficient condition. Finally, we consider matrix regularization with more general spectral functions, such as the Schatten Lp norms, in place of the trace norm, and show that our algorithm and results apply in these cases as well.
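The alternating minimization scheme mentioned in the abstract can be sketched as follows. This is a minimal illustration under assumptions not stated here: squared loss, a quadratic variational form tr(W^T D^{-1} W) of the trace-norm penalty, and a smoothing parameter `eps` to keep the D-step well defined; the function name `mtl_trace_norm` and all parameter choices are hypothetical.

```python
import numpy as np

def mtl_trace_norm(Xs, ys, gamma=1.0, eps=1e-6, iters=50):
    """Sketch of alternating minimization for trace-norm-regularized
    multi-task regression (assumed squared loss).

    Xs, ys: per-task design matrices (n_t x d) and target vectors (n_t,).
    Alternates between W (task regression vectors, one column per task)
    and D (a positive definite matrix with trace 1 encoding the shared
    subspace), minimizing
        sum_t ||X_t w_t - y_t||^2 + gamma * tr(W^T D^{-1} W).
    `eps` smooths the D-step so D stays invertible.
    """
    d = Xs[0].shape[1]
    T = len(Xs)
    D = np.eye(d) / d                      # start from the uniform subspace
    W = np.zeros((d, T))
    for _ in range(iters):
        # W-step: with D fixed, each task decouples into a
        # generalized ridge regression.
        Dinv = np.linalg.inv(D)
        for t in range(T):
            A = Xs[t].T @ Xs[t] + gamma * Dinv
            W[:, t] = np.linalg.solve(A, Xs[t].T @ ys[t])
        # D-step: closed form, D proportional to (W W^T + eps I)^{1/2}.
        M = W @ W.T + eps * np.eye(d)
        vals, vecs = np.linalg.eigh(M)
        S = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
        D = S / np.trace(S)
    return W, D
```

When the tasks genuinely share a direction, the learned D concentrates its trace on that direction, so the regularizer tr(W^T D^{-1} W) leaves the shared subspace cheap and penalizes the rest; this is what couples the otherwise independent per-task ridge problems.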
