Utilizing Unlabeled Data for Classification-Prediction Learning
Published on Nov 11, 2011 · 3738 Views
In many classification learning tasks, labeled data may be expensive or scarce. At the same time, unlabeled or "weakly labeled" samples may be available in abundance. We consider three algorithmic paradigms for utilizing such data: semi-supervised learning, domain adaptation, and learning from weak teachers.
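To make the setting concrete, below is a minimal, hypothetical sketch of a cluster-then-label paradigm of the kind the talk discusses under the cluster assumption: cluster a large unlabeled pool, query a label oracle on one representative per cluster, and propagate that label to the whole cluster. This is an illustration, not the speaker's exact algorithm; the names cluster_then_label and label_oracle are assumptions made for the example.

# Hypothetical sketch (not the speaker's algorithm): spend unlabeled data freely,
# spend label queries sparingly (one per cluster).
import numpy as np
from sklearn.cluster import KMeans

def cluster_then_label(X_unlabeled, label_oracle, n_clusters=3, seed=0):
    """Cluster the unlabeled pool, then use one label query per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    cluster_ids = km.fit_predict(X_unlabeled)

    # Query the oracle (e.g. a human annotator) only on one representative per cluster.
    cluster_labels = {}
    for c in range(n_clusters):
        members = np.flatnonzero(cluster_ids == c)
        cluster_labels[c] = label_oracle(X_unlabeled[members[0]])

    # Predict by giving every new point the queried label of its nearest cluster.
    def predict(X_new):
        return np.array([cluster_labels[c] for c in km.predict(X_new)])

    return predict

# Toy usage: three well-separated Gaussian blobs; the "oracle" returns the blob index.
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
X = np.vstack([c + rng.normal(scale=0.3, size=(200, 2)) for c in centers])
oracle = lambda x: int(np.argmin(np.linalg.norm(centers - x, axis=1)))

predict = cluster_then_label(X, oracle, n_clusters=3)
print(predict(np.array([[0.1, 0.2], [4.8, 5.1]])))  # expected: [0 1]

Under the cluster assumption, the three labels queried here stand in for a much larger labeled sample, which is the kind of saving in labeled data the talk quantifies.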
Chapter list
Utilizing Unlabeled Data for Classification-Prediction Learning (00:00)
In many applications unlabeled data is cheap and abundant (00:46)
The issues we wish to address (01:52)
What's in this talk? (02:27)
Using unlabeled data to improve prediction accuracy (04:10)
"Proper Learning" - Desirable classifiers (06:10)
Utility/Feasibility Trade-off (09:07)
Utilizing unlabeled data to help proper learning (09:18)
Our Formal Model: Proper SSL learning (09:41)
Our algorithmic paradigm (10:27)
Previous Work (11:52)
Overview (12:35)
Scenario 1: SSL with an approximation class (13:18)
Upper bound (14:08)
Savings in labeled data (15:03)
An example scenario where unlabeled data provably helps (15:44)
A lower bound on the sample complexity without unlabeled data (16:50)
Scenario 2: SSL with the cluster assumption (17:23)
A new formalization of the cluster assumption (18:18)
Example – Smoothly clustered data (19:18)
SSL with nearest neighbors (19:40)
Upper bound on the sample complexity for clusterable data (19:54)
Upper bound on the sample complexity for data satisfying the Probabilistic Lipschitzness (19:56)
Experiments – Setup (19:58)
Experiments – Results (20:56)
Experiments with NN (22:13)
Experiments – Results with NN (22:14)
How many unlabeled examples are needed? (22:51)
Our results (BD and Ben-David, ALT 2011) (24:14)
Use of unlabeled sample for Domain Adaptation (25:09)
Learning when Training and Test distributions differ (25:34)
Domain Adaptation (27:18)
Three aspects determining a DA framework (27:36)
Our Model (the input available to the learner) (28:37)
Some source-target relatedness assumption (29:40)
Conservative vs. Adaptive algorithms (30:58)
Previous work on conservative algorithms (31:42)
Are there better adaptive DA algorithms? (32:28)
The prior knowledge about the task that the learner has (33:29)
DA with learner's prior knowledge (34:31)
The algorithmic idea (35:01)
Results (36:03)
Learning from Weak Teachers (37:11)
Modeling weak teachers (41:02)
Overview (41:33)
Our requirements from weak teachers (42:14)
Our results - utilizing weak teachers' labels (42:35)
The algorithmic paradigm (43:14)
Result (44:29)
Conclusions (45:59)
Open questions and research challenges (46:52)