Exp-Concavity of Proper Composite Losses
published: Aug. 20, 2015, recorded: July 2015
The goal of online prediction with expert advice is to find a decision strategy which performs almost as well as the best expert in a given pool of experts, on any sequence of outcomes. This problem has been widely studied, and O(√T) and O(log T) regret bounds can be achieved for convex losses (Zinkevich (2003)) and strictly convex losses with bounded first and second derivatives (Hazan et al. (2007)), respectively. In special cases, such as the Aggregating Algorithm (Vovk (1995)) with mixable losses and the Weighted Average Algorithm (Kivinen and Warmuth (1999)) with exp-concave losses, it is possible to achieve O(1) regret bounds. Van Erven (2012) has argued that mixability and exp-concavity are roughly equivalent under certain conditions. Thus, by understanding the underlying relationship between these two notions, we can gain the best of both algorithms: the strong theoretical performance guarantees of the Aggregating Algorithm and the computational efficiency of the Weighted Average Algorithm. In this paper we provide a complete characterization of the exp-concavity of any proper composite loss. Using this characterization and the mixability condition of proper losses (Van Erven et al. (2012)), we show that it is possible to transform (re-parameterize) any β-mixable proper loss into a β-exp-concave composite loss with the same β.
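To illustrate the O(1) regret phenomenon the abstract refers to, here is a minimal sketch of an exponentially weighted average forecaster in the spirit of the Weighted Average Algorithm. It is not taken from the paper; the function name and the binary log-loss setup are illustrative assumptions. For an η-exp-concave loss (log loss is 1-exp-concave), this strategy guarantees regret at most (ln N)/η against the best of N experts, independent of the horizon T.

```python
import math

def weighted_average_forecaster(expert_preds, outcomes, eta=1.0):
    """Sketch of an exponentially weighted average forecaster.

    expert_preds: list of T rounds, each a list of N probabilities
                  (each expert's predicted probability of outcome 1).
    outcomes:     list of T binary outcomes (0 or 1).
    eta:          learning rate; should not exceed the exp-concavity
                  constant of the loss (log loss is 1-exp-concave).

    Returns (learner_cum_loss, expert_cum_losses).
    """
    n = len(expert_preds[0])
    cum = [0.0] * n          # cumulative log loss of each expert
    learner_loss = 0.0
    for preds, y in zip(expert_preds, outcomes):
        # Weights proportional to exp(-eta * cumulative loss).
        ws = [math.exp(-eta * c) for c in cum]
        z = sum(ws)
        # Predict the weighted average of the experts' predictions.
        p = sum(w * q for w, q in zip(ws, preds)) / z
        # Incur log loss, then update each expert's cumulative loss.
        learner_loss += -math.log(p) if y == 1 else -math.log(1 - p)
        for i, q in enumerate(preds):
            cum[i] += -math.log(q) if y == 1 else -math.log(1 - q)
    return learner_loss, cum
```

For example, with two experts (one always predicting 0.9, one always 0.1) and a sequence of 1-outcomes, the forecaster's total loss exceeds the best expert's by at most ln 2 ≈ 0.693, no matter how long the sequence runs; this constant gap is exactly the O(1) regret that exp-concavity buys.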
Download slides: colt2015_kamalaruban_composite_losses_01.pdf (588.7 KB)