A boosting approach to multiview classification with cooperation

author: Sokol Koço, Laboratoire d’Informatique Fondamentale de Marseille
published: Oct. 3, 2011; recorded: September 2011; views: 3356
Description

Nowadays, in numerous fields such as bioinformatics or multimedia, data may be described using many different sets of features (or views) which carry either global or local information. Many learning tasks make use of these competing views in order to improve the overall predictive power of classifiers through fusion-based methods. Usually, these approaches rely on a weighted combination of classifiers (or selected descriptions), where the classifiers are learnt independently of one another. One drawback of these methods is that the classifier learnt on one view does not communicate its weaknesses to the other views. In other words, the learning algorithms do not cooperate, although they are trained on the same objects. This paper presents a novel approach to integrating multiview information within an iterative learning scheme, where the classifier learnt on one view is allowed to communicate its performance to the other views. The proposed algorithm, named Mumbo, is based on boosting. Within the boosting scheme, Mumbo maintains one distribution of examples per view, and at each round it learns one weak classifier on each view. Within a view, the distribution of examples evolves both with the ability of the dedicated classifier to deal with examples in the corresponding feature space, and with the ability of the classifiers in other views to process the same examples within their own description spaces. Hence, the principle is to gradually down-weight the hard examples in the learning space of one view while their weights increase in the other views. This way, we expect each example to be pushed toward the views most apt to process it, whenever possible. At the end of the iterative learning process, a final classifier is computed as a weighted combination of selected weak classifiers. Such an approach is particularly useful when some examples detected as outliers in one view (for instance, because of noise) are statistically regular, and hence informative, in some other view.
This paper presents the Mumbo algorithm in a multiclass, multiview setting, based on recent advances in theoretical boosting. The boosting properties of Mumbo are proven, along with some results on its generalization capabilities. Several experimental results are reported, showing that complementary views can actually cooperate under some assumptions.
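The cooperative weight-update idea described above can be sketched in code. The sketch below is a simplified binary-label illustration, not the published Mumbo algorithm (which works in a multiclass setting with cost matrices from recent theoretical boosting work): the names `mumbo_sketch` and `stump_learn`, the AdaBoost-style exponential update, and the 0.5 "hand-off" factor that softens the up-weighting of an example this view got wrong but another view got right are all illustrative assumptions. What it does preserve is the core mechanism: one distribution per view, one weak classifier per view per round, and updates that let the views communicate.

```python
import numpy as np

def stump_learn(X, y, dist):
    """Weighted decision stump: threshold on one feature with a polarity."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for theta in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                pred = np.where(X[:, j] >= theta, s, -s)
                err = np.sum(dist * (pred != y))
                if err < best_err:
                    best_err, best = err, (j, theta, s)
    j, theta, s = best
    return lambda Xq: np.where(Xq[:, j] >= theta, s, -s)

def mumbo_sketch(views, y, weak_learn=stump_learn, T=5):
    """Cooperative multiview boosting sketch.

    views: list of (n, d_v) arrays, one per view (assumes >= 2 views)
    y:     labels in {-1, +1}
    """
    n, M = len(y), len(views)
    dists = [np.full(n, 1.0 / n) for _ in range(M)]  # one distribution per view
    ensemble = []                                    # (view, alpha, hypothesis)
    for t in range(T):
        hyps = [weak_learn(views[v], y, dists[v]) for v in range(M)]
        preds = [hyps[v](views[v]) for v in range(M)]
        alphas = []
        for v in range(M):
            eps = float(np.clip(np.sum(dists[v] * (preds[v] != y)),
                                1e-10, 0.5 - 1e-10))
            alpha = 0.5 * np.log((1.0 - eps) / eps)
            alphas.append(alpha)
            ensemble.append((v, alpha, hyps[v]))
        for v in range(M):
            # which examples were handled correctly by at least one other view?
            solved_elsewhere = np.any(
                [preds[w] == y for w in range(M) if w != v], axis=0)
            margin = np.where(preds[v] == y, 1.0, -1.0)
            new = dists[v] * np.exp(-alphas[v] * margin)  # usual boosting step
            # cooperative twist (simplified): soften the up-weighting of hard
            # examples that some other view got right -- hand them off
            new[(preds[v] != y) & solved_elsewhere] *= 0.5
            dists[v] = new / new.sum()

    def predict(view_list):
        score = np.zeros(len(view_list[0]))
        for v, a, h in ensemble:
            score += a * h(view_list[v])
        return np.sign(score)
    return predict
```

In this sketch an example misclassified everywhere keeps its full boosting weight in every view, while an example solved in some other view has its weight growth dampened in the views that fail on it, so over the rounds the hard examples migrate toward the views best able to handle them.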

Download slides: ecmlpkdd2011_koco_boosting_01.pdf (804.8 KB)

