Feature Selection Stability Assessment based on the Jensen-Shannon Divergence
Published on Oct 03, 2011 · 3244 views
Feature selection and ranking techniques play an important role in the analysis of high-dimensional data. In particular, their stability becomes crucial when the feature importance is later studied in …
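The talk's exact stability formulation is not reproduced on this page; as a rough illustration of the underlying idea, the following sketch computes the Jensen-Shannon divergence between two probability distributions derived from feature rankings. The rank-to-weight scheme (`1/rank`, normalized) is a hypothetical choice for illustration, not necessarily the one used in the lecture.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence in bits; assumes q[i] > 0 wherever p[i] > 0
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    # Jensen-Shannon divergence: average KL of p and q against their mixture m
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def rank_to_distribution(ranking):
    # Hypothetical weighting: rank r (1 = most important) gets weight 1/r,
    # then weights are normalized into a probability distribution.
    weights = [1.0 / r for r in ranking]
    total = sum(weights)
    return [w / total for w in weights]

# Two rankings of the same four features (entry i = rank assigned to feature i)
p = rank_to_distribution([1, 2, 3, 4])
q = rank_to_distribution([2, 1, 3, 4])
print(js_divergence(p, q))  # 0 would indicate identical rankings
```

A stability score over a whole set of rankings can then be built from pairwise divergences, e.g. by averaging them; lower average divergence means the selector behaves more consistently across data perturbations.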
Chapter list
Feature Selection Stability Assessment based on the Jensen-Shannon Divergence (00:00)
Outline (00:14)
Outline: Introduction (00:45)
Introduction (1) (00:47)
Introduction (2) (00:59)
Outline: Feature Ranking/Selection (02:53)
Feature Ranking Outcomes (02:55)
Feature Selection Outcomes (03:55)
Feature Ranking: Full Ranked Lists (04:21)
Feature Selection: Top-k Lists (05:04)
Outline: Feature Selection/Ranking Stability Metrics (05:31)
Feature Selection/Ranking Robustness (05:32)
Similarity between Two Lists (1) (06:32)
Similarity between Two Lists (2) (07:38)
Stability for a Set of Lists (08:16)
Outline: Stability based on the Jensen-Shannon Divergence (08:50)
Stability based on the Jensen-Shannon Divergence (08:58)
Divergence Measures (10:10)
Stability based on the Jensen-Shannon Divergence (10:59)
Extension to Partial Ranked Lists (12:13)
Extension to Top-k Lists (12:42)
Outline: Empirical Study (13:19)
Empirical Study: Illustration on Artificial Outcomes (1) (13:21)
Empirical Study: Illustration on Artificial Outcomes (2) (14:58)
Empirical Study: Illustration on Artificial Outcomes (3) (15:23)
Empirical Study: Illustration on Artificial Outcomes (4) (17:03)
Empirical Study: Evaluation on Spectral Data (1) (17:53)
Empirical Study: Evaluation on Spectral Data (2) (18:37)
Empirical Study: Evaluation on Spectral Data (3) (19:20)
Outline: Conclusion (20:13)
Conclusion (20:16)