
A Robust Ranking Methodology based on Diverse Calibration of AdaBoost

Published on Oct 03, 2011 · 3145 Views

In subset ranking, the goal is to learn a ranking function that approximates a gold standard partial ordering of a set of objects (in our case, relevance labels of a set of documents retrieved for the …
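The chapter list below covers the Normalized Discounted Cumulative Gain (NDCG) measure used to evaluate such ranking functions. A minimal sketch of the standard exponential-gain formulation follows; the function names are my own, and this is an illustration of the common definition, not the talk's exact variant.

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: gain (2^rel - 1) at rank i is
    discounted by log2(i + 2), so early positions count more."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances))

def ndcg(relevances):
    """NDCG: DCG of the given ordering, normalized by the DCG of the
    ideal (descending-relevance) ordering, so a perfect ranking scores 1."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# A ranking that does not place the most relevant document first
# scores strictly below 1.0:
score = ndcg([1, 3, 0, 2])
```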

Chapter list

A Robust Ranking Methodology based on Diverse Calibration of AdaBoost (00:00)
Outline (00:28)
Introduction to learning to rank (00:36)
Learning to rank (02:56)
Definition of the Learning-To-Rank (LTR) task (03:44)
Normalized Discounted Cumulative Gain (NDCG) (05:33)
Basic approaches (09:33)
Bayes optimal permutation (10:51)
A good property of Bayes optimal permutation (12:33)
Upper bound for the excess of DCG (14:10)
Our approach - 1 (15:18)
Our approach - 2 (15:48)
Training cost-sensitive multi-class AdaBoost.MH (16:17)
Training AdaBoost.MH (16:23)
Calibration - 1 (17:13)
Class-probability-based calibration (CPC) (17:17)
Class-probability-based calibration: obtaining … (17:52)
Regression-based Calibration (RBC) (18:53)
Ensemble of ensembles: putting the calibrated models into a huge ensemble classifier (19:44)
Ensemble of ensembles: choosing … (19:49)
LETOR datasets (20:21)
Experiments (20:54)
Results (21:32)
NDCG values for various ranking algorithms (21:51)
Conclusions and further work (22:24)
Thanks for Your Attention! (23:44)
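Going by the chapter titles (AdaBoost.MH training, class-probability-based calibration, an "ensemble of ensembles"), the pipeline can be sketched roughly as: calibrate each base model's raw scores into relevance probabilities, pool the calibrated estimates, and rank documents by the pooled score. The sketch below is a loose illustration under those assumptions; the Platt-style sigmoid with fixed parameters and all function names are my own stand-ins, not the authors' implementation.

```python
import math

def platt(score, a=-1.5, b=0.0):
    """Hypothetical calibration step: map a raw model margin to an
    estimated probability with a Platt-style sigmoid. In practice the
    parameters a, b would be fit on held-out data per base model."""
    return 1.0 / (1.0 + math.exp(a * score + b))

def rank_documents(doc_scores):
    """doc_scores: one list per document, holding the raw score from
    each base model. Averages the calibrated probabilities across the
    ensemble, then returns document indices sorted by that pooled
    estimate, descending."""
    pooled = [sum(platt(s) for s in scores) / len(scores)
              for scores in doc_scores]
    order = sorted(range(len(pooled)), key=lambda i: pooled[i], reverse=True)
    return order, pooled

# Document 1 has the largest margins under every base model,
# so it ends up ranked first.
order, pooled = rank_documents([[0.2, 0.5], [1.4, 1.1], [-0.8, -0.3]])
```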