Three Hours on Multiple Classifier Systems
published: Dec. 3, 2009, recorded: September 2009, views: 371
Motivations and basic concepts: Motivations of multiple classifier systems. The “worst” case and “best” case motivations. Practical and theoretical motivations. Basic concepts. Architectures for multiple classifier systems. Ensemble types, combiner types. The concept of classifier “diversity”. The design cycle of a multiple classifier system.
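The notion of classifier “diversity” mentioned above can be made concrete with the pairwise disagreement measure: the fraction of samples on which two classifiers give different labels, averaged over all pairs in the ensemble. A minimal sketch (the function names and the toy predictions are illustrative, not from the lecture):

```python
# Pairwise "disagreement" diversity measure for an ensemble.
# Toy predictions below are illustrative only.

def disagreement(preds_a, preds_b):
    """Fraction of samples on which two classifiers' predictions differ."""
    assert len(preds_a) == len(preds_b)
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def mean_pairwise_disagreement(all_preds):
    """Average disagreement over all classifier pairs in the ensemble."""
    n = len(all_preds)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(disagreement(all_preds[i], all_preds[j]) for i, j in pairs) / len(pairs)

# Three classifiers' label predictions on five samples:
preds = [
    [0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
]
print(mean_pairwise_disagreement(preds))  # 0.4
```

Higher disagreement generally indicates a more diverse ensemble, which is a precondition (though not a guarantee) for the combiner to outperform its members.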
Creating multiple classifiers: Systematic methods for creating classifier ensembles. Methods based on training data manipulation: data splitting methods, Bagging and Boosting. Methods based on input and output feature manipulation: feature selection, the Random Subspace method, noise injection, and error-correcting codes.
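Bagging, one of the training-data-manipulation methods listed above, trains each ensemble member on a bootstrap resample (sampling with replacement) of the training set and combines them by majority vote. A minimal sketch on a 1-D toy problem with decision stumps as base learners (the data and learner are illustrative assumptions, not from the lecture):

```python
import random
from collections import Counter

def train_stump(xs, ys):
    """Fit a decision stump: the threshold split on a 1-D feature
    that minimizes training error (labels are 0/1)."""
    best = None
    for t in sorted(set(xs)):
        for left in (0, 1):  # label assigned to the x <= t side
            err = sum(y != (left if x <= t else 1 - left) for x, y in zip(xs, ys))
            if best is None or err < best[0]:
                best = (err, t, left)
    _, t, left = best
    return lambda x, t=t, left=left: left if x <= t else 1 - left

def bagging(xs, ys, n_estimators=11, seed=0):
    """Train stumps on bootstrap resamples; predict by majority vote."""
    rng = random.Random(seed)
    n = len(xs)
    stumps = []
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap resample
        stumps.append(train_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    def predict(x):
        return Counter(s(x) for s in stumps).most_common(1)[0][0]
    return predict

# Toy separable 1-D data set:
xs = [0.1, 0.4, 0.35, 0.8, 0.9, 0.7]
ys = [0, 0, 0, 1, 1, 1]
model = bagging(xs, ys)
print([model(x) for x in xs])
```

Because each member sees a slightly different resample, the members disagree on borderline points, and the vote averages out some of their individual errors; this is the variance-reduction idea behind Bagging.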
Combining multiple classifiers: Methods for combining multiple classifiers at the “abstract” level (voting methods, the Behaviour Knowledge Space method, etc.). Methods for combining multiple classifiers at the “rank” level (the Borda count method, etc.). Methods for combining multiple classifiers at the “measurement” level (linear combiners, the product rule, etc.). Basic concepts on dynamic classifier selection methods.
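Two of the combiners named above can be sketched directly: majority voting at the abstract level (each classifier outputs only a label) and the Borda count at the rank level (each classifier ranks all classes, and a class at position p in a ranking of k classes scores k - 1 - p points). The inputs here are illustrative:

```python
from collections import Counter

def majority_vote(labels):
    """Abstract level: each classifier outputs one label; pick the most frequent."""
    return Counter(labels).most_common(1)[0][0]

def borda_count(rankings):
    """Rank level: each classifier ranks all classes, best first;
    a class at position p in a ranking of k classes scores k - 1 - p."""
    scores = Counter()
    for ranking in rankings:
        k = len(ranking)
        for p, cls in enumerate(ranking):
            scores[cls] += k - 1 - p
    return max(scores, key=scores.get)

print(majority_vote(["cat", "dog", "cat"]))  # "cat"

rankings = [["cat", "dog", "bird"],
            ["dog", "cat", "bird"],
            ["cat", "bird", "dog"]]
print(borda_count(rankings))  # "cat" (scores: cat 5, dog 3, bird 1)
```

Measurement-level combiners (the mean or product of per-class posterior estimates) follow the same pattern but operate on the classifiers' continuous scores rather than labels or ranks.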