
Multitask learning: the Bayesian way

Published on Feb 25, 2007 · 6009 views

Multi-task learning lends itself particularly well to a Bayesian approach. Cross-inference between tasks can be implemented by sharing parameters in the likelihood model and in the prior for the task-specific parameters.
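
The lecture itself is not transcribed here, but the chapter titles (priors on the task-specific parameters, Empirical Bayes, Expectation Maximization) point to a hierarchical model in which each task's parameters are drawn from a shared prior that is fit to the data. The sketch below is a minimal illustration of that idea, assuming a Gaussian hierarchical linear-regression model with a fixed noise variance and fitting the shared prior by EM; the function name em_multitask and the toy data are hypothetical and not taken from the talk.

    import numpy as np

    def em_multitask(Xs, ys, n_iter=50, noise_var=1.0):
        """Empirical-Bayes EM for a hierarchical multi-task linear model.

        Assumed model (a sketch, not the lecture's exact specification):
        each task t has weights w_t ~ N(mu, Sigma), and observations
        y_t = X_t w_t + eps with eps ~ N(0, noise_var * I).
        The shared prior (mu, Sigma) couples the tasks and is re-estimated by EM.
        """
        d = Xs[0].shape[1]
        mu = np.zeros(d)
        Sigma = np.eye(d)
        for _ in range(n_iter):
            # E-step: Gaussian posterior over each task's weights given the current prior.
            post_means, post_covs = [], []
            Sigma_inv = np.linalg.inv(Sigma)
            for X, y in zip(Xs, ys):
                prec = Sigma_inv + X.T @ X / noise_var
                cov = np.linalg.inv(prec)
                mean = cov @ (Sigma_inv @ mu + X.T @ y / noise_var)
                post_means.append(mean)
                post_covs.append(cov)
            # M-step: re-estimate the shared prior mean and covariance from the task posteriors.
            mu = np.mean(post_means, axis=0)
            Sigma = sum(C + np.outer(m - mu, m - mu)
                        for m, C in zip(post_means, post_covs)) / len(Xs)
        return mu, Sigma, post_means

    # Toy usage: three tasks whose true weights are drawn from a common prior.
    rng = np.random.default_rng(0)
    true_mu = np.array([1.0, -2.0])
    Xs, ys = [], []
    for _ in range(3):
        w = true_mu + 0.3 * rng.standard_normal(2)
        X = rng.standard_normal((40, 2))
        Xs.append(X)
        ys.append(X @ w + 0.5 * rng.standard_normal(40))
    mu_hat, Sigma_hat, task_weights = em_multitask(Xs, ys, noise_var=0.25)
    print("estimated shared prior mean:", mu_hat)

In this sketch the shared prior is what couples the tasks: each task's posterior is shrunk toward the estimated prior mean, so data-poor tasks borrow strength from data-rich ones, which is one way the cross-inference described above shows up in practice.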


Chapter list

Multi-Task Learning: The Bayesian Way (00:01)
Contents (00:50)
Newspaper sales (02:03)
Data (04:14)
Explanatory variables (05:56)
Classical multi-task learning (09:17)
Does it help? (11:16)
Does it make sense (1)? (12:33)
Does it make sense (2)? (14:25)
The Bayesian way (15:37)
Summary of the model (18:07)
Priors on the task-specific parameters (22:07)
Empirical Bayes (25:51)
Summary of the model (26:00)
Empirical Bayes (28:05)
Expectation Maximization (28:50)
Does it help (1)? (30:46)
Does it help (2)? (32:21)
Does it make sense (1)? (35:23)
Does it make sense (2)? (36:20)
How about different priors? (38:16)
Outlook (42:34)
Comparison with kernel approaches (45:21)