Multi-task feature learning
Published on Feb 25, 2007 · 7167 views
We present a method for learning a low-dimensional representation which is shared across a set of multiple related tasks. The method builds upon the well-known 1-norm regularization problem, using a new regularizer which controls the number of learned features common across the tasks.
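To make the setup concrete, here is a rough sketch in our own notation (not copied from the slides): with T related tasks, the method learns an orthogonal feature matrix U and per-task coefficient vectors a_t by solving, roughly,

\[
\min_{U,\,A}\; \sum_{t=1}^{T}\sum_{i=1}^{m} L\!\left(y_{ti},\, a_t^{\top} U^{\top} x_{ti}\right) \;+\; \gamma\,\|A\|_{2,1}^{2},
\qquad
\|A\|_{2,1} \;=\; \sum_{j=1}^{d} \Big(\sum_{t=1}^{T} a_{jt}^{2}\Big)^{1/2}.
\]

The (2,1)-norm sums the Euclidean norms of the rows of A = [a_1, ..., a_T], so penalizing it drives entire rows, i.e. entire learned features, to zero at once; this row-wise sparsity is what couples the tasks.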
Chapter list
Multi-Task Feature Learning (00:02)
Learning Multiple Tasks Simultaneously (01:28)
Multi-Task Feature Learning (01:51)
Sharing Features Across Tasks (03:52)
Learning Paradigm (06:25)
Weighting Features (08:16)
Sharing Features Across Tasks (08:55)
Sharing Features Across Tasks (09:26)
(2,1)-Norm (12:05)
(2,1)-Norm (13:18)
Sharing Features Across Tasks (15:20)
(2,1)-Norm (16:22)
(2,1)-Norm (16:41)
(2,1)-Norm Regularization (16:52)
L1 Regularization (19:29)
Learning the Features (20:10)
Convex Reformulation (21:22)
Convex Reformulation (cont.) (22:34)
Alternating Algorithm (23:50)
Convex Reformulation (cont.) (27:20)
Alternating Algorithm (27:50)
Experiment 1 (toy data) (31:16)
Experiment 1 (toy data) (32:35)
Experiment 2 (real data) (35:56)
Experiment 2 (real data) (36:52)
Experiment 2 (real data) (38:16)
Summary (38:52)
Future Work (39:45)
Convex Reformulation (cont.) (41:30)
Alternating Algorithm (46:55)
Convex Reformulation (cont.) (47:45)
Regularization with the Trace Norm (48:20)
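The "Convex Reformulation" and "Alternating Algorithm" chapters describe an equivalent convex problem in which the shared features are encoded by a positive semidefinite matrix D with trace at most 1, while W = [w_1, ..., w_T] collects the task weight vectors; the algorithm then alternates between a generalized ridge problem per task (D fixed) and a closed-form update of D (W fixed). Below is a minimal NumPy sketch of that alternation, assuming a square loss; the function name, the eps smoothing, and the data layout are our own choices, not the speaker's.

import numpy as np

def multitask_feature_learning(Xs, ys, gamma=1.0, n_iter=50, eps=1e-6):
    """Hedged sketch of the alternating scheme described above.

    Xs, ys : lists with one (m_t x d) design matrix and (m_t,) target
             vector per task.
    Returns W (d x T task weights) and D (d x d shared feature metric).
    """
    d = Xs[0].shape[1]
    T = len(Xs)
    D = np.eye(d) / d                      # feasible start: trace(D) = 1
    W = np.zeros((d, T))

    for _ in range(n_iter):
        # Step 1: with D fixed, each task is a generalized ridge
        # regression with penalty gamma * w^T D^{-1} w.
        D_inv = np.linalg.inv(D + eps * np.eye(d))
        for t in range(T):
            X, y = Xs[t], ys[t]
            W[:, t] = np.linalg.solve(X.T @ X + gamma * D_inv, X.T @ y)

        # Step 2: with W fixed, the optimal D is the matrix square root
        # of W W^T, rescaled to unit trace.
        M = W @ W.T + eps * np.eye(d)      # eps keeps the square root well defined
        vals, vecs = np.linalg.eigh(M)
        sqrt_M = (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T
        D = sqrt_M / np.trace(sqrt_M)

    return W, D

Minimizing over D in closed form also shows that the coupled penalty equals the squared trace norm of W, which is the connection taken up in the final "Regularization with the Trace Norm" chapter.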