published: April 1, 2009, recorded: February 2009
The first part of this tutorial will discuss unsupervised, semi-supervised, and partially supervised learning. Convex relaxations will be presented for unsupervised and semi-supervised training of support vector machines, max-margin Markov networks, log-linear models, and Bayesian networks. The concept of partially supervised training will then be introduced, with convex relaxations developed for training multi-layer perceptrons and deep networks. Relationships of these methods to classical training algorithms (EM, Viterbi-EM, and self-supervised training) will be discussed, as will the limitations of convex relaxations. The tutorial will then present methods for scaling up such training algorithms. Finally, some simple approximation bounds will be introduced, along with a rudimentary generalization theory for self-supervised training.
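As a point of reference for the classical self-supervised training mentioned above, the following is a minimal sketch (not taken from the tutorial) of a self-training loop around a support vector machine, written with scikit-learn; the synthetic dataset, confidence threshold, and number of rounds are all illustrative assumptions. The convex relaxations discussed in the tutorial aim to replace exactly this kind of non-convex alternation between labeling and refitting with a single convex problem.

```python
# Sketch of classical self-training ("self-supervised" in the abstract's
# sense): alternately pseudo-label the unlabeled data with the current
# model, then refit on the enlarged labeled set. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic data: 300 points, of which only 30 start out labeled.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)
labeled_idx = rng.choice(len(X), size=30, replace=False)
mask = np.zeros(len(X), dtype=bool)
mask[labeled_idx] = True
X_lab, y_lab = X[mask], y[mask]
X_unl = X[~mask]

clf = SVC(probability=True, random_state=0)
for _ in range(5):  # a few self-training rounds (an assumed budget)
    clf.fit(X_lab, y_lab)
    if len(X_unl) == 0:
        break
    proba = clf.predict_proba(X_unl)
    take = proba.max(axis=1) > 0.90  # pseudo-label only confident points
    if not take.any():
        break
    pseudo = clf.classes_[proba[take].argmax(axis=1)]
    X_lab = np.vstack([X_lab, X_unl[take]])
    y_lab = np.concatenate([y_lab, pseudo])
    X_unl = X_unl[~take]

print("final labeled-set size:", len(y_lab))
```

Because each round commits to hard pseudo-labels, the procedure can lock in early mistakes; the convex relaxations surveyed in the tutorial avoid this alternation at the cost of solving a larger relaxed problem.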