On-line learning competitive with reproducing kernel Hilbert spaces

author: Vladimir Vovk, University of London
published: Feb. 25, 2007; recorded: October 2005; views: 4057




In this talk I will describe a new technique for designing competitive on-line prediction algorithms and proving loss bounds for them. The goal of such algorithms is to perform almost as well as the best decision rules in a wide benchmark class, with no assumptions made about the way the observations are generated. However, standard algorithms in this area can only deal with finite-dimensional (often countable) benchmark classes. The new technique gives similar results for decision rules ranging over infinite-dimensional function spaces. It is based on a recent game-theoretic approach to the foundations of probability and, more specifically, on recent results about defensive forecasting. Given the probabilities produced by a defensive forecasting algorithm, which are known to be well calibrated and to have good resolution in the long run, the expected loss minimization principle is used to find a suitable prediction.
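As a rough illustration of what "performing almost as well as the best decision rule in an RKHS benchmark class" means in practice, the sketch below runs a simple online kernel ridge regression: each observation is predicted from the preceding ones only, and the cumulative square loss can then be compared against functions in the Gaussian-kernel RKHS. This is a generic stand-in, not the defensive-forecasting algorithm of the talk; the Gaussian kernel, its width `sigma`, and the ridge parameter `a` are illustrative choices.

```python
import numpy as np

def gauss_kernel(X, Z, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def online_kernel_ridge(xs, ys, a=1.0, sigma=1.0):
    """Predict each y_t using only (x_1, y_1), ..., (x_{t-1}, y_{t-1}),
    by refitting kernel ridge regression on the past at every step.
    Returns the sequence of online predictions."""
    preds = []
    for t in range(len(xs)):
        if t == 0:
            preds.append(0.0)  # no history yet: fall back to a default value
            continue
        Xp, yp = xs[:t], ys[:t]
        # Ridge solution alpha = (K + a I)^{-1} y on the past observations.
        K = gauss_kernel(Xp, Xp, sigma)
        alpha = np.linalg.solve(K + a * np.eye(t), yp)
        # Predict at the new point as a kernel combination of past points.
        k = gauss_kernel(xs[t:t + 1], Xp, sigma)[0]
        preds.append(float(k @ alpha))
    return np.array(preds)

# Example: a smooth signal observed one point at a time.
xs = np.linspace(0, 3, 30).reshape(-1, 1)
ys = np.sin(xs[:, 0])
preds = online_kernel_ridge(xs, ys, a=0.1, sigma=0.5)
```

On a smooth sequence like this, the online predictor's cumulative square loss falls well below that of the trivial zero predictor. The guarantees described in the talk are of a different and stronger kind: loss bounds holding uniformly over every decision rule in the (infinite-dimensional) benchmark class, with no assumptions on how the observations are generated.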

See Also:

Download slides: mcslw04_vovk_llcrk_01.pdf (180.4 KB)

