Which Supervised Learning Method Works Best for What? An Empirical Comparison of Learning Methods and Metrics
Published on Feb 25, 2007 · 31,483 views
Decision trees are intelligible, but do they perform well enough that you should use them? Have SVMs replaced neural nets, or are neural nets still best for regression and SVMs best for classification?
Chapter list
An Empirical Comparison of Learning Methods++ (01:31)
Preliminaries: What is Supervised Learning? (03:22)
Sad State of Affairs: Supervised Learning (03:46)
Sad State of Affairs: Supervised Learning (04:21)
Sad State of Affairs: Supervised Learning (04:31)
A Real Decision Tree (04:33)
Not ALL Decision Trees Are Intelligible (04:34)
Sad State of Affairs: Supervised Learning (04:36)
A Typical Neural Net (05:14)
Linear Regression (05:30)
Logistic Regression (05:56)
Sad State of Affairs: Supervised Learning (06:13)
Sad State of Affairs: Supervised Learning (06:25)
Questions (09:22)
Data Sets (11:25)
Binary Classification Performance Metrics (14:01)
Normalized Scores (18:05)
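Raw performance numbers are not comparable across problems and metrics, so the talk rescales each metric so that baseline performance maps to 0 and the best performance observed on that problem/metric maps to 1. A minimal sketch of that rescaling (function and argument names are mine, not from the talk):

```python
def normalized_score(raw, baseline, best):
    """Map a raw metric value onto [0, 1]:
    0 = baseline predictor, 1 = best model observed on that problem/metric."""
    return (raw - baseline) / (best - baseline)
```

For example, an accuracy of 0.90 on a problem where the baseline is 0.50 and the best model reaches 0.95 normalizes to about 0.89.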
Massive Empirical Comparison (19:44)
Look at Predicting Probabilities First (20:47)
Results on Test Sets (Normalized Scores) (21:47)
Bagged Decision Trees (26:50)
Bagging Results (27:56)
Random Forests (Bagged Trees++) (29:50)
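Bagging trains each tree on a bootstrap sample (drawn with replacement) of the training set and averages the resulting predictions; random forests additionally restrict each split to a random subset of features. A sketch of just the bootstrap-and-average scaffolding, with the base learner left abstract (all names are mine):

```python
import random

def bag(train, fit, predict, x, n_models=25, seed=0):
    """Train n_models copies of a base learner, each on a bootstrap
    sample of `train`, and average their predictions for input x."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        # bootstrap: sample len(train) examples with replacement
        sample = [train[rng.randrange(len(train))] for _ in train]
        model = fit(sample)
        preds.append(predict(model, x))
    return sum(preds) / n_models
```

Averaging many high-variance trees is what makes bagged trees and random forests so strong in the comparison; the base `fit`/`predict` pair would be a decision-tree learner in the talk's setting.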
Calibration & Reliability Diagrams (33:29)
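A reliability diagram bins the predicted probabilities and plots, for each bin, the mean predicted probability against the observed fraction of positives; a well-calibrated model lies on the diagonal. A small sketch of the binning step behind such a plot (names are mine):

```python
def reliability_bins(probs, labels, n_bins=10):
    """Group predictions into equal-width probability bins and return
    (mean predicted probability, observed positive rate) per non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[i].append((p, y))
    out = []
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            frac_pos = sum(y for _, y in b) / len(b)
            out.append((mean_p, frac_pos))
    return out
```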
Back to SVMs: Results on Test Sets (37:18)
SVM Reliability Plots (37:39)
Platt Scaling by Fitting a Sigmoid (38:31)
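Platt scaling passes the model's raw scores through a two-parameter sigmoid, p = 1 / (1 + exp(A·f + B)), fit on held-out data by minimizing log-loss. Platt's original method uses a Newton-style optimizer with smoothed targets; the sketch below substitutes plain gradient descent to keep it short (all names are mine):

```python
import math

def platt_scale(scores, labels, lr=0.1, steps=2000):
    """Fit p(y=1|f) = 1 / (1 + exp(A*f + B)) on held-out (score, label)
    pairs by gradient descent on log-loss, and return the calibrator."""
    A, B = 0.0, 0.0
    for _ in range(steps):
        gA = gB = 0.0
        for f, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(A * f + B))
            # note logit(p) = -(A*f + B), so d(logloss)/dA = (p - y) * (-f)
            gA += (p - y) * -f
            gB += (p - y) * -1.0
        A -= lr * gA / len(scores)
        B -= lr * gB / len(scores)
    return lambda f: 1.0 / (1.0 + math.exp(A * f + B))
```

Applied to SVM margin outputs, this maps uncalibrated scores into usable probabilities, which is exactly the repair the next slides evaluate.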
Results After Platt Scaling SVMs (39:21)
Results After Platt Scaling SVMs (40:22)
Results After Platt Scaling SVMs (43:16)
Summary of Model Performances (43:34)
Smart Model ≠ Good Probs (44:25)
Ada Boosting (44:55)
Why Boosting is Not Well Calibrated (47:04)
Consistent With Interpretations of Boosting (49:31)
Platt Scaling of Boosted Trees (7 problems) (50:48)
Results After Platt Scaling All Models (52:02)
Revenge of the Decision Tree! (53:50)
Methods for Achieving Calibration (56:52)
Boosting with Log-Loss (58:11)
Isotonic Regression (01:00:08)
Isotonic Regression (01:00:12)
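Isotonic regression replaces Platt's sigmoid with the best monotone non-decreasing step function, typically fit with the Pool Adjacent Violators (PAV) algorithm: walk through the examples in score order and merge any adjacent blocks whose means are out of order. A compact sketch of PAV (names are mine):

```python
def isotonic_fit(scores, labels):
    """Pool Adjacent Violators: given (score, label) pairs, return
    stepwise-constant calibrated probabilities in score order."""
    pairs = sorted(zip(scores, labels))
    merged = []  # each block is [sum_of_labels, count]
    for _, y in pairs:
        merged.append([y, 1])
        # pool while the last block's mean is below the previous block's mean
        while len(merged) > 1 and merged[-2][0] / merged[-2][1] > merged[-1][0] / merged[-1][1]:
            s, n = merged.pop()
            merged[-1][0] += s
            merged[-1][1] += n
    out = []
    for s, n in merged:
        out.extend([s / n] * n)
    return out
```

Being non-parametric, this step function can fix distortions a single sigmoid cannot, but it needs more calibration data to avoid overfitting, which matches the Platt-vs-isotonic comparison on the next slides.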
Platt Scaling vs. Isotonic Regression (01:01:37)
Platt Scaling vs. Isotonic Regression (01:02:17)
Summary: Before/After Calibration (01:04:46)
Where Does That Leave Us? (01:05:52)
Best of the Best of the Best (01:06:32)
If we need to train all models and pick best, can we do better than picking best? (01:10:00)
Normalized Scores of Ensembles (01:11:00)
Basic Ensemble Selection Algorithm (01:11:54)
Basic Ensemble Selection Algorithm (01:12:03)
Basic Ensemble Selection Algorithm (01:12:28)
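Ensemble selection starts from a large library of trained models and greedily adds, with replacement, whichever model most improves the chosen metric when its predictions are averaged into the ensemble, as measured on a held-out hillclimb set. A toy sketch of that forward-selection loop (function and variable names are mine):

```python
def ensemble_select(preds, labels, metric, rounds=10):
    """Greedy forward selection with replacement.
    preds: model name -> list of predicted probabilities on hillclimb set.
    metric: (predictions, labels) -> score, higher is better."""
    chosen = []
    ens_sum = [0.0] * len(labels)  # running sum of chosen models' predictions
    for _ in range(rounds):
        best_name, best_score = None, None
        for name, p in preds.items():
            # score the unweighted average if this model were added
            trial = [(s + pi) / (len(chosen) + 1) for s, pi in zip(ens_sum, p)]
            score = metric(trial, labels)
            if best_score is None or score > best_score:
                best_name, best_score = name, score
        chosen.append(best_name)
        ens_sum = [s + pi for s, pi in zip(ens_sum, preds[best_name])]
    return chosen
```

Because hillclimbing directly optimizes the target metric on held-out data, it can overfit that set — the "Big Problem: Overfitting" the next slide addresses.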
Big Problem: Overfitting (01:13:06)
Normalized Scores for ES (01:13:27)
Ensemble Selection vs Best: 3 NLP Problems (01:14:05)
Ensemble Selection Works, But Is It Worth It? (01:14:12)
Computational Cost (01:14:13)
Ensemble Selection (01:14:29)
Best Ensembles are Big and Ugly! (01:14:52)
Solution: Model Compression (01:15:22)