Cost-sensitive Classification: Algorithms and Advances

author: Hsuan-Tien Lin, Department of Computer Science and Information Engineering, National Taiwan University
published: March 27, 2014;   recorded: November 2013


Videos:

Part 1 (50:21)
Part 2 (37:07)
Part 3 (45:48)


Classification is an important problem in machine learning. It can be used in a variety of applications, such as automatically distinguishing apples, oranges, and bananas. Traditionally, the regular classification problem aims at minimizing the number of future misprediction errors. Nevertheless, many real-world applications assign different costs to different types of mis-classification errors. For instance, a false-negative prediction in a spam classification system only costs the user an extra second to delete the email, while a false-positive prediction can mean a huge loss when the email actually carries important information; in bacteria classification, mis-classifying a Gram-positive species as a Gram-negative one leads to totally ineffective treatments and is hence more serious than mis-classifying a Gram-positive species as another Gram-positive one; when classifying a patient as healthy, cold-infected, or H1N1-infected, predicting an H1N1-infected patient as healthy is significantly more serious than predicting a healthy patient as H1N1-infected. Such a cost-sensitive classification problem can be very different from the regular classification one, and arises in applications like targeted marketing, information retrieval, medical decision making, object recognition, and intrusion detection. In fact, cost-sensitive classification can be used to express any finite-choice, bounded-loss supervised learning problem, and connects to popular machine learning problems including ranking, structured learning, and online decision making.
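The healthy/cold/H1N1 example can be made concrete with a cost matrix, where entry (y, k) is the cost of predicting class k when the true class is y. The sketch below (illustrative numbers, not from the lecture) contrasts the regular rule, which predicts the most probable class, with the cost-sensitive Bayes-optimal rule, which predicts the class with the smallest expected cost:

```python
# Illustrative cost matrix for the healthy / cold / H1N1 example.
# Rows = true class, columns = predicted class; numbers are assumptions
# chosen so that missing an H1N1 case is by far the most expensive error.
COST = [
    # pred: healthy  cold  H1N1
    [0,    10,   100],   # true: healthy
    [50,    0,    20],   # true: cold
    [1000, 500,    0],   # true: H1N1
]

def regular_prediction(probs):
    """Regular classification: predict the most probable class."""
    return max(range(len(probs)), key=lambda k: probs[k])

def cost_sensitive_prediction(probs, cost):
    """Cost-sensitive Bayes-optimal decision: predict the class k that
    minimizes the expected cost sum_y P(y) * cost[y][k]."""
    n = len(probs)
    expected = [sum(probs[y] * cost[y][k] for y in range(n))
                for k in range(n)]
    return min(range(n), key=lambda k: expected[k])

probs = [0.5, 0.3, 0.2]  # P(healthy), P(cold), P(H1N1)
print(regular_prediction(probs))            # -> 0 (healthy)
print(cost_sensitive_prediction(probs, COST))  # -> 2 (H1N1)
```

Even though "healthy" is the most probable class here, the expected-cost calculation flips the decision to "H1N1" because a missed infection dominates the cost.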

The binary cost-sensitive classification problem considers only two kinds of costs: mis-predicting the first class as the second, and mis-predicting the second class as the first. Studies of the problem trace back to the works of Elkan (2001) and Zadrozny (2003), which embed the costs into the learning procedure by re-weighting the importance of each example. The multiclass cost-sensitive classification problem, on the other hand, can be more difficult than the binary one. There are several families of approaches:
[1] by properly considering the costs during decision making rather than training, which tackles the problem from the Bayesian perspective (Domingos, 1999)
[2] by re-weighting, which extends from the approach in binary classification (Zhou, 2006)
[3] by reducing to regular classification, which generally needs re-weighting and re-labeling the examples (Abe, 2004; Lin, 2008)
[4] by reducing to binary classification based on different decomposition structures (Lin, 2008; Beygelzimer, 2005; Langford, 2005; Beygelzimer, 2007)
[5] by reducing to regression by embedding the costs in the labels (Tu and Lin, 2010)
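The re-weighting idea behind family [2] can be sketched with a toy base learner. The example below is an assumption-laden illustration, not code from the lecture: each example's weight is set to its misclassification cost, and a one-feature decision stump stands in for an arbitrary weight-aware learner.

```python
def train_stump(xs, ys, weights):
    """Return the threshold t minimizing the total weighted error of
    the rule 'predict class 1 iff x >= t' (a toy base learner)."""
    best_t, best_err = None, None
    for t in sorted(set(xs)):
        err = sum(w for x, y, w in zip(xs, ys, weights)
                  if (1 if x >= t else 0) != y)
        if best_err is None or err < best_err:
            best_t, best_err = t, err
    return best_t

# Toy data: class 1 is the costly class (e.g. important mail).
xs = [0.1, 0.2, 0.4, 0.5, 0.7, 0.8]
ys = [0,   1,   0,   0,   1,   1]

# Cost-sensitive re-weighting: missing a positive costs 5x a false alarm.
weights = [5 if y == 1 else 1 for y in ys]

print(train_stump(xs, ys, [1] * len(xs)))  # -> 0.7 (sacrifices the positive at 0.2)
print(train_stump(xs, ys, weights))        # -> 0.2 (catches every positive)
```

With uniform weights the stump gives up the isolated positive at 0.2 to keep the total error count low; once positives carry a 5x weight, the cheapest rule shifts to a lower threshold that catches them all at the price of two false alarms.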

In addition to the algorithmic developments discussed above, cost-sensitive classification has been used in more and more applications in recent years. Selected applications with which the speaker has first-hand experience include improving the performance of a real-world bacteria classification system (Jan, 2012), improving the ranking performance for information retrieval (Ruan, 2013), and constructing a useful model for recommender systems (Chen, 2011), which formed part of the winning solution of the National Taiwan University team in KDD Cup 2011. These applications stimulate new needs for cost-sensitive classification, such as using the costs to approximate the true evaluation criteria of interest (Ruan, 2013), or being both cost-sensitive and error-sensitive (Jan, 2012).

In this tutorial, we review existing approaches to cost-sensitive classification, for both binary and multiclass classification. The approaches range from the Bayesian perspective of including costs during decision making to the reduction-based approaches that transform the cost-sensitive classification task into regular classification or regression tasks. We discuss the theoretical guarantees behind the approaches as well as their practical uses. We also introduce some recent advances in cost-sensitive classification, including its success in applications. The tutorial assumes only a basic background in machine learning (classification and regression techniques), and nothing more.


Download slides: acml2013_lin_cost_sensitive_classification.pdf (6.5 MB)
