Manifold Boost: Stagewise Function Approximation for Fully-, Semi- and Un-supervised Learning
published: Aug. 6, 2008, recorded: July 2008, views: 369
We describe a manifold learning framework that naturally accommodates fully supervised learning, semi-supervised (partially labeled) learning, and unsupervised clustering as special cases. Our method chooses a function by minimizing a loss subject to a manifold regularization penalty. This augmented cost is minimized with a greedy stagewise functional minimization procedure, as in GradientBoost; each stage of boosting is fast and efficient. We demonstrate the approach using both radial basis function approximations and classification trees, and its performance is at the state of the art on standard problems.
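The scheme described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a squared loss on the labeled points, a graph-Laplacian manifold penalty built from a Gaussian affinity over all points, and a single radial basis function as the weak learner fit to the negative functional gradient at each stage. All function names, the bandwidth `sigma`, and the penalty weight `lam` are illustrative choices.

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    # Gaussian affinity graph over all points, labeled and unlabeled.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(1)) - W  # L = D - W

def manifold_boost(X, y, labeled, n_stages=50, lr=0.1, lam=0.1, sigma=1.0):
    """Greedy stagewise minimization of the augmented cost
        sum_{i in labeled} (y_i - F(x_i))^2  +  lam * F^T L F,
    adding one RBF weak learner per stage (a sketch, not the paper's exact method)."""
    L = graph_laplacian(X, sigma)
    K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
    F = np.zeros(len(X))
    for _ in range(n_stages):
        # Negative functional gradient of the augmented cost at the current F:
        # loss term acts on labeled points only, penalty term on all points.
        g = np.zeros(len(X))
        g[labeled] = y[labeled] - F[labeled]
        g -= lam * (L @ F)
        # Weak learner: one RBF centered at the point of largest residual.
        c = np.argmax(np.abs(g))
        h = K[:, c]
        step = (g @ h) / (h @ h)  # closed-form line search along h
        F += lr * step * h
    return F
```

With one labeled point per cluster, the boosted function spreads the labels across each cluster through the RBF weak learners while the Laplacian term keeps it smooth along the data manifold; the unsupervised and semi-supervised cases differ only in how many entries of `labeled` are populated.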