## Neyman-Pearson classification under a strict constraint

author: Philippe Rigollet, Department of Operations Research and Financial Engineering, Princeton University
published: Aug. 2, 2011,   recorded: July 2011,   views: 452

# Slides

- Neyman-Pearson classification
- Binary classification
- Neyman-Pearson paradigm
- Asymmetry in the errors
- Some examples of binary classification from FHT - 1
- Some examples of binary classification from FHT - 2
- Some examples of binary classification from FHT - 3
- Some examples of binary classification from FHT - 4
- Some examples of binary classification from FHT - 5
- Some examples of binary classification from FHT - 6
- Some examples of binary classification from FHT - 7
- Some examples of binary classification from FHT - 8
- Classification error may be dangerous
- Observations
- Previous work
- Idea of Cannon et al.: relaxed constraint - 1
- Idea of Cannon et al.: relaxed constraint - 2
- Comments
- Empirical risk
- Contribution of this talk
- Convexification
- Convexified NP problem
- Neyman-Pearson classifier
- Theorem - 1
- An assumption
- Theorem - 2
- Sketch of the proof
- Some intuition - 1
- Some intuition - 2
- Some intuition - 3
- Proposition
- Chance constrained optimization - 1
- Chance constrained optimization - 2
- Chance constrained optimization - 3


# Description

Motivated by problems of anomaly detection, this paper implements the Neyman-Pearson paradigm to deal with asymmetric errors in binary classification with a convex loss. Given a finite collection of classifiers, we combine them to obtain a new classifier that, with high probability, simultaneously satisfies the two following properties: (i) its probability of type I error is below a pre-specified level, and (ii) its probability of type II error is close to the minimum possible. The proposed classifier is obtained by minimizing an empirical objective subject to an empirical constraint. The novelty of the method is that the classifier output by this problem is shown to satisfy the original constraint on type I error. This strict enforcement of the constraint has interesting consequences for the control of the type II error, and we develop new techniques to handle this situation. Finally, connections with chance constrained optimization are investigated.
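To make the paradigm concrete, the following is a minimal sketch of empirical Neyman-Pearson selection over a finite collection of classifiers with the plain 0/1 losses (the discrete precursor in the spirit of Cannon et al., not the convexified procedure of the talk). The threshold classifiers, the Gaussian toy data, and the level `alpha` are all illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: empirical Neyman-Pearson selection over a finite
# dictionary of classifiers, with 0/1 losses. Illustrative only.
import random

random.seed(0)

# Toy 1-D data: class 0 (the "null") and class 1 (the "alternative").
neg = [random.gauss(0.0, 1.0) for _ in range(500)]
pos = [random.gauss(1.5, 1.0) for _ in range(500)]

# Finite dictionary of threshold classifiers h_t(x) = 1{x > t}.
thresholds = [i * 0.25 for i in range(-4, 13)]

alpha = 0.05  # pre-specified bound on the type I error

def type1(t):
    # Empirical type I error: fraction of class-0 points flagged.
    return sum(x > t for x in neg) / len(neg)

def type2(t):
    # Empirical type II error: fraction of class-1 points missed.
    return sum(x <= t for x in pos) / len(pos)

# Keep the classifiers that satisfy the empirical type I constraint,
# then minimize the empirical type II error among them.
feasible = [t for t in thresholds if type1(t) <= alpha]
t_hat = min(feasible, key=type2)
print("threshold:", t_hat, "type I:", type1(t_hat), "type II:", type2(t_hat))
```

The talk's contribution replaces both 0/1 losses by a convex surrogate and, crucially, shows how to enforce the *original* (non-relaxed) type I constraint with high probability; this sketch only conveys the constrained-minimization structure of the problem.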