Boosting Products of Base Classifiers
published: Aug. 26, 2009, recorded: June 2009, views: 3697
In this paper we show how to boost products of simple base learners. As with trees, the base learner is called as a subroutine, but in an iterative rather than recursive fashion. The main advantages of the proposed method are its simplicity and computational efficiency. On benchmark datasets, our boosted products of decision stumps clearly outperform boosted trees, and on the MNIST dataset the algorithm achieves the second-best result among no-domain-knowledge algorithms, after deep belief nets. As a second contribution, we present an improved base learner for nominal features and show that boosting the product of two of these new subset indicator base learners solves the maximum margin matrix factorization problem used to formalize the collaborative filtering task. On a small benchmark dataset, we obtain experimental results comparable to the semi-definite-programming-based solution, but at a much lower computational cost.
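The abstract sketches the core idea: each term of the boosted ensemble is a product of simple base classifiers, and the product is built by calling the stump learner iteratively, refitting one factor at a time while the others are held fixed. Below is a minimal binary-classification sketch of this scheme in Python/NumPy. It is illustrative only: the paper's actual algorithm is a multi-class AdaBoost.MH variant, and the function names (fit_stump, fit_product, boost_products), the number of coordinate-descent sweeps, and the use of plain binary AdaBoost here are assumptions, not taken from the lecture.

```python
import numpy as np

def fit_stump(X, y, w):
    """Exhaustively fit one decision stump (threshold on one feature)
    minimizing weighted 0/1 error. Returns (feature, threshold, polarity)."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            pred = np.where(X[:, j] > t, 1, -1)
            for pol in (1, -1):
                err = w[(pol * pred) != y].sum()
                if err < best_err:
                    best_err, best = err, (j, t, pol)
    return best

def stump_predict(X, stump):
    j, t, pol = stump
    return pol * np.where(X[:, j] > t, 1, -1)

def fit_product(X, y, w, m=3, n_sweeps=5):
    """Fit a product of m stumps by coordinate descent: refit each factor
    against the 'virtual labels' y * (product of the other factors)."""
    stumps = [fit_stump(X, y, w)]
    stumps += [(0, -np.inf, 1)] * (m - 1)  # remaining factors start as constant +1
    for _ in range(n_sweeps):
        for k in range(m):
            others = np.prod(
                [stump_predict(X, s) for i, s in enumerate(stumps) if i != k],
                axis=0)
            stumps[k] = fit_stump(X, y * others, w)
    return stumps

def product_predict(X, stumps):
    return np.prod([stump_predict(X, s) for s in stumps], axis=0)

def boost_products(X, y, T=50, m=3):
    """Binary AdaBoost with product-of-stumps base classifiers."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(T):
        stumps = fit_product(X, y, w, m)
        pred = product_predict(X, stumps)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, stumps))
        w *= np.exp(-alpha * y * pred)  # standard AdaBoost reweighting
        w /= w.sum()
    return ensemble

def predict(X, ensemble):
    return np.sign(sum(a * product_predict(X, s) for a, s in ensemble))

if __name__ == "__main__":
    # Toy check: XOR-like labels, which no single stump can fit but a
    # product of two stumps represents exactly (sign(x0) * sign(x1)).
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 2))
    y = np.sign(X[:, 0] * X[:, 1]).astype(int)
    ens = boost_products(X, y, T=20, m=2)
    print("train accuracy:", (predict(X, ens) == y).mean())
```

The key step is the virtual-label trick in fit_product: since every factor takes values in {-1, +1}, refitting stump k against y times the product of the remaining factors is exactly the target that makes the whole product agree with y, so the product can be optimized one factor at a time with the same stump learner boosting already uses.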