published: July 30, 2008, recorded: July 2008, views: 6550
In this talk, by concentration inequalities we mean inequalities that bound the deviations of a function of independent random variables from its mean. Thanks to their generality and elegance, many such results have become standard tools in a variety of areas, including statistical learning theory, probabilistic combinatorics, and the geometry of Banach spaces. To illustrate some of the basic ideas, we start by showing simple ways of bounding the variance of a general function of several independent random variables, and we show how to apply these inequalities to a few key quantities in statistical learning theory. Over the past two decades, several techniques have been introduced to sharpen such variance inequalities into exponential tail inequalities. We focus on a particularly elegant and effective method, the so-called "entropy method", based on logarithmic Sobolev inequalities and their modifications. Similar ideas appear in several areas of mathematics, including discrete and Gaussian isoperimetric problems and the estimation of mixing times of Markov chains. We intend to shed some light on some of these connections. In particular, we mention some closely related results on the influences of variables of Boolean functions, phase transitions, and threshold phenomena.
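A standard example of the variance bounds mentioned above is the Efron–Stein inequality: for independent X_1, …, X_n and f(X) = f(X_1, …, X_n), it states that Var(f(X)) ≤ (1/2) Σ_i E[(f(X) − f(X^(i)))², where X^(i) is X with the i-th coordinate replaced by an independent copy. The sketch below (an illustration added here, not part of the talk; the choice of f as the maximum of uniform variables is an arbitrary example) checks the inequality numerically by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 20000

def f(x):
    # An example function of n independent variables: the coordinate-wise maximum.
    return x.max(axis=-1)

# Independent samples X and an independent copy X' of the same shape.
X = rng.uniform(size=(trials, n))
Xprime = rng.uniform(size=(trials, n))

# Empirical variance of f(X).
var_f = f(X).var()

# Efron-Stein bound: (1/2) * sum_i E[(f(X) - f(X^(i)))^2],
# where X^(i) replaces only coordinate i of X with the copy's value.
bound = 0.0
for i in range(n):
    Xi = X.copy()
    Xi[:, i] = Xprime[:, i]
    bound += 0.5 * np.mean((f(X) - f(Xi)) ** 2)

print(var_f, bound)  # the empirical variance should fall below the bound
```

Replacing one coordinate at a time is exactly the "tensorization" step that the entropy method later upgrades from variances to exponential tail bounds.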