Workshop on Modelling in Classification and Statistical Learning, Eindhoven 2004

17 Lectures · Oct 2, 2005

About

This workshop addresses the problem of predicting a binary label Y from a given feature X. A classification procedure is to be learned from a training set (X1, Y1), ..., (Xn, Yn). In the statistical literature on classification, the training set is traditionally seen as an i.i.d. sample from the distribution P of (X, Y), but otherwise no a priori knowledge of P is assumed. Theoretical results have been derived that hold no matter what P is, which typically means that such results concentrate on worst cases. There are various reasons to step away from this so-called black box approach. For example, the by now generally accepted rule "regression is harder than classification" has given certain "plug-in" methods a bad name, although under distributional assumptions the latter are at least competitive with "direct" methods. Moreover, theoretical results for a case where P is assumed to lie within a small class can serve as benchmarks for what one may hope for. Also, procedures that adapt to properties of P need further exploration. These procedures are designed to work well in case one is "lucky", and are as such also inspired by having certain distributional assumptions in the back of one's mind. Moreover, it is often quite reasonable to assume some knowledge of the marginal distribution of X.
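To make the "plug-in" idea above concrete: a plug-in classifier first estimates the regression function eta(x) = P(Y = 1 | X = x) and then classifies by thresholding the estimate at 1/2. The following is a minimal sketch on simulated data, not a method from any of the lectures; the sigmoid eta, the bin count, and the sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distribution P of (X, Y): X uniform on [0, 1] and
# Y ~ Bernoulli(eta(X)), where eta(x) = P(Y = 1 | X = x).
# The learner never sees eta; it only sees the sample.
def eta(x):
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.5)))

n = 2000
X = rng.uniform(0.0, 1.0, n)
Y = (rng.uniform(0.0, 1.0, n) < eta(X)).astype(int)

# Plug-in step 1: estimate eta by a regressogram,
# i.e. the bin-wise mean of Y over a fixed partition of [0, 1].
n_bins = 20
edges = np.linspace(0.0, 1.0, n_bins + 1)
idx = np.clip(np.digitize(X, edges) - 1, 0, n_bins - 1)
eta_hat = np.array([Y[idx == b].mean() if np.any(idx == b) else 0.5
                    for b in range(n_bins)])

# Plug-in step 2: classify by thresholding the estimate at 1/2.
def classify(x):
    b = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    return (eta_hat[b] >= 0.5).astype(int)

# Error on a fresh sample from the same P; the Bayes rule
# (threshold the true eta at 1/2) is the benchmark.
X_test = rng.uniform(0.0, 1.0, 5000)
Y_test = (rng.uniform(0.0, 1.0, 5000) < eta(X_test)).astype(int)
err = float(np.mean(classify(X_test) != Y_test))
bayes = float(np.mean((eta(X_test) >= 0.5).astype(int) != Y_test))
```

Under a distributional assumption such as smoothness of eta, the error of this plug-in rule approaches the Bayes error, which is the kind of benchmark result the abstract alludes to.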

Lectures

PERFORMANCE BOUNDS FOR KERNEL PCA
Gilles Blanchard
56:22 · Apr 12, 2007 · 5370 Views

Mistake bounds and risk bounds for on-line learning algorithms
Nicolò Cesa-Bianchi
52:15 · Feb 25, 2007 · 3133 Views

Robustness properties of support vector machines and related methods
Andreas Christmann
01:11:50 · Feb 25, 2007 · 4882 Views

Suboptimality of MDL and Bayes in Classification under Misspecification
Peter Grünwald
01:08:43 · Feb 25, 2007 · 3212 Views

On minimax estimation of infinite dimensional vector of binomial proportions
Eduard Belitser
58:21 · Feb 25, 2007 · 3501 Views

Universal Principles, Approximation and Model Choices
Lauri Davies
01:06:22 · Feb 25, 2007 · 2939 Views

Unified Loss Function and Estimating Function Based Learning
Mark van der Laan
26:12 · Feb 25, 2007 · 3601 Views

How classifiers can be used to solve any reasonable loss
John Langford
01:00:29 · Feb 25, 2007 · 3238 Views

Penalized empirical risk minimization in the estimation of thresholds
Leila Mohammadi
44:59 · Feb 25, 2007 · 2941 Views

Generalization Error under Covariate Shift: Input-Dependent Estimation of General...
Klaus-Robert Müller
01:10:07 · Feb 25, 2007 · 3654 Views

Faster Rates via Active Learning
Robert D. Nowak
59:51 · Feb 25, 2007 · 3739 Views

Nonparametric Tests between Distributions
Alex Smola
58:37 · Feb 25, 2007 · 7395 Views

The Limit of One-Class SVM
Regis Vert
54:33 · Feb 25, 2007 · 9858 Views

On-line learning competitive with reproducing kernel Hilbert spaces
Vladimir Vovk
01:02:32 · Feb 25, 2007 · 4075 Views

Impromptu Session

Agnostic Active Learning
John Langford
20:15 · Apr 12, 2007 · 3631 Views

Generalization to Unseen Cases: (No) Free Lunches and Good-Turing estimation
Teemu Roos
16:50 · Apr 12, 2007 · 3365 Views

Anti-Learning Signature in Biological Classification
Adam Kowalczyk
16:29 · Apr 12, 2007 · 3033 Views