Minimum Error Entropy Principle for Learning
published: Aug. 26, 2013, recorded: July 2013, views: 4393
Information theoretic learning introduces ideas from information theory into machine learning. Minimum error entropy is a principle of information theoretic learning and provides a family of supervised learning algorithms. It serves as a substitute for the classical least squares method when the noise is non-Gaussian. The idea is to extract as much information as possible from the data about the data-generating system by minimizing the entropy of the error. In this talk we discuss minimum error entropy algorithms in a regression setting, obtained by minimizing the empirical Rényi entropy of order 2. Consistency results and learning rates are presented; in particular, some error estimates dealing with heavy-tailed noise are given.
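To make the idea concrete, here is a minimal sketch (not the speaker's implementation) of minimum error entropy regression: the empirical Rényi entropy of order 2 of the residuals is estimated with a Gaussian Parzen window, and a linear model is fit by gradient descent on that entropy. Names such as `sigma` (kernel bandwidth), `lr`, and `mee_linear_regression` are illustrative assumptions, as is the Student-t noise used to mimic the heavy-tailed setting.

```python
import numpy as np

def renyi2_entropy(errors, sigma=1.0):
    """Empirical Rényi entropy of order 2: H2 = -log V, where the
    information potential is V = (1/n^2) * sum_{i,j} G(e_i - e_j)
    with a Gaussian kernel of bandwidth sqrt(2)*sigma (the convolution
    of two Parzen kernels of bandwidth sigma)."""
    diffs = errors[:, None] - errors[None, :]
    kernel = np.exp(-diffs**2 / (4 * sigma**2)) / (2 * sigma * np.sqrt(np.pi))
    return -np.log(kernel.mean())

def mee_linear_regression(X, y, sigma=1.0, lr=0.1, n_iters=500):
    """Fit w by gradient descent on H2 of the residuals e = y - Xw.
    Minimizing H2 is equivalent to maximizing the information potential V."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        e = y - X @ w
        diffs = e[:, None] - e[None, :]
        kernel = np.exp(-diffs**2 / (4 * sigma**2)) / (2 * sigma * np.sqrt(np.pi))
        V = kernel.mean()
        # dK_ij/d(diff_ij) = K_ij * (-diff_ij / (2 sigma^2))
        grad_pairs = kernel * (-diffs / (2 * sigma**2))
        # diff_ij = (y_i - y_j) - (x_i - x_j)^T w, so d(diff_ij)/dw = x_j - x_i
        grad_V = np.einsum('ij,ijk->k',
                           grad_pairs,
                           X[None, :, :] - X[:, None, :]) / n**2
        # gradient of H2 = -log V is -(1/V) * dV/dw; take a descent step
        w -= lr * (-grad_V / V)
    # entropy is shift-invariant, so the bias is not identifiable from MEE
    # alone; the usual fix is to center the residuals afterwards
    bias = np.mean(y - X @ w)
    return w, bias

# usage with heavy-tailed (Student-t) noise, as discussed in the talk
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.standard_t(df=2, size=200)
w_hat, b_hat = mee_linear_regression(X, y, sigma=1.0)
```

Note the design choice forced by the criterion itself: since adding a constant to all residuals leaves their entropy unchanged, the intercept must be recovered separately, here by centering the residuals after the slope is learned.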