Capacity Control for Partially Ordered Feature Sets
published: Oct. 20, 2009, recorded: September 2009, views: 2638
Partially ordered feature sets appear naturally in classification settings with structured instances. For example, when the instances are graphs and the features represent subgraph-occurrence checks, the features can be partially ordered by an "is subgraph of" relation. We investigate how the redundancy in such datasets affects the capacity control behavior of linear classification methods. While the capacity does not decrease in general, we derive better capacity bounds for distributions that assign lower probabilities to instances in the lower levels of the feature hierarchy. For itemsets, subsequences, and subtrees, the capacity is finite even for data with an infinite number of features. We validate these results empirically and show that the limited capacity of linear classifiers makes underfitting, rather than overfitting, the more prominent capacity control problem. To avoid underfitting, we propose substructure classes with "elastic edges", and we demonstrate how such broad feature classes can be used with large datasets.
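The redundancy described above can be made concrete with itemset features: an occurrence check for an itemset fires only if the checks for all of its sub-itemsets fire, so the binary features are monotone along the subset partial order. The following sketch (all names and the small universe `{a, b, c}` are illustrative, not from the lecture) demonstrates this for substructure features over a toy instance:

```python
from itertools import combinations

def subsets(itemset):
    """All non-empty sub-itemsets of a frozenset (its downward closure)."""
    items = sorted(itemset)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

# Feature space: an occurrence check for every sub-itemset of a small
# universe; the features are partially ordered by the subset relation.
universe = frozenset({"a", "b", "c"})
features = subsets(universe)

def feature_vector(instance, features):
    """Binary features: does the instance contain each itemset?"""
    return {f: f <= instance for f in features}

# Redundancy: whenever a feature fires on an instance, every feature
# below it in the partial order fires too.
x = frozenset({"a", "b"})
fv = feature_vector(x, features)
for f, active in fv.items():
    if active:
        assert all(fv[g] for g in subsets(f))  # all smaller features also fire
```

This monotone structure is what a linear classifier cannot exploit for free: the feature values are highly correlated, which constrains the set of labelings the classifier can realize on low-level instances.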