Empirical Comparisons of Learning Methods & Case Studies
published: Feb. 25, 2007, recorded: May 2005, views: 6153
Decision trees may be intelligible, but can they cut the mustard? Have SVMs replaced neural nets, or are neural nets still best for regression and SVMs best for classification? Boosting maximizes a margin much like SVMs do, but can boosting compete with SVMs? And is it better to boost weak models, as theory suggests, or to boost stronger models? Bagging is much easier than boosting, so how well does bagging stack up against boosting? Bagging is supposed to work best with low-bias, high-variance methods like decision trees, so if we bag lower-variance models like neural nets, are they as good as bagged trees? What happens if we put bagging on steroids, i.e., switch to random forests? And what about old friends like k-nearest neighbor — should they just be put out to pasture?
In this lecture I'll compare the performance of a variety of popular machine learning methods on nine performance criteria: Accuracy, F-score, Lift, Precision/Recall Break-Even Point, Area under the ROC, Average Precision, Squared Error, Cross-Entropy, and Probabilistic Calibration. I'll show that while no one learning method does it all, it is possible to "repair" some of them so that they do well on all metrics. I'll then describe NACHOS, a new ensemble method that does even better by building on top of these other learning methods. Finally, I'll discuss how the nine performance metrics relate to each other, and look at a few case studies to show why it is important to use the right metric for each problem.
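Several of the nine criteria above are simple functions of a model's predicted probabilities and the true labels. As a rough sketch (not the lecture's own code), here is how a few of them — accuracy, precision/recall/F-score, squared error, cross-entropy, and area under the ROC — can be computed from scratch on a small hypothetical set of predictions; the data values are made up purely for illustration:

```python
import math

def threshold(probs, t=0.5):
    """Turn predicted probabilities into hard 0/1 predictions."""
    return [1 if p >= t else 0 for p in probs]

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision_recall_f1(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    return prec, rec, f1

def squared_error(y_true, probs):
    """Mean squared error of the probabilities (the Brier score)."""
    return sum((p - t) ** 2 for t, p in zip(y_true, probs)) / len(y_true)

def cross_entropy(y_true, probs):
    """Mean negative log-likelihood of the true labels."""
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, probs)) / len(y_true)

def roc_auc(y_true, probs):
    """AUC as the fraction of positive/negative pairs ranked correctly
    (ties count half) — equivalent to the Wilcoxon statistic."""
    pos = [p for t, p in zip(y_true, probs) if t == 1]
    neg = [p for t, p in zip(y_true, probs) if t == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels and predicted probabilities for illustration only.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
probs = [0.9, 0.2, 0.7, 0.6, 0.4, 0.8, 0.3, 0.6]
y_pred = threshold(probs)

acc = accuracy(y_true, y_pred)
prec, rec, f1 = precision_recall_f1(y_true, y_pred)
brier = squared_error(y_true, probs)
ce = cross_entropy(y_true, probs)
auc = roc_auc(y_true, probs)
```

Note how the metrics split into families: accuracy, precision, recall, and F-score depend only on the thresholded predictions, while squared error, cross-entropy, and calibration depend on the probability values themselves, and AUC depends only on the ranking — which is one reason a model can do well on some criteria and poorly on others.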