Sharp analysis of low-rank kernel matrix approximations
Published on Jan 16, 2013 · 4029 views
We consider supervised learning problems within the positive-definite kernel framework, such as kernel ridge regression, kernel logistic regression or the support vector machine. With kernels leading to infinite-dimensional feature spaces, a common practical difficulty is the need to compute the kernel matrix, which typically leads to algorithms whose running time is at least quadratic in the number of observations n. Low-rank approximations of the kernel matrix, obtained for instance by column sampling, reduce this complexity to O(p²n), where p is the rank of the approximation. The talk shows that, for kernel ridge regression, the rank p may be chosen proportional to the degrees of freedom of the problem while preserving predictive performance, yielding provably sub-quadratic algorithms.
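As a concrete illustration of the O(p²n) computation that column-sampling approximations enable, the sketch below runs kernel ridge regression restricted to the span of p randomly sampled columns of the kernel matrix (a Nyström-type approximation). It is a minimal illustrative example, not code from the talk or the paper; the function names (rbf_kernel, nystrom_krr), the hyperparameters (p, lam, bandwidth) and the synthetic data are assumptions made for the example.

```python
# Minimal sketch (not from the talk): kernel ridge regression with a
# column-sampling (Nystrom-type) approximation of the kernel matrix.
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * bandwidth**2))

def nystrom_krr(X, y, p=50, lam=1e-2, bandwidth=1.0, rng=None):
    # Sample p columns of the n x n kernel matrix uniformly at random, then
    # solve the regularized least-squares problem in the span of the sampled
    # columns: cost O(p^2 n) to form the system instead of O(n^2) or O(n^3).
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    idx = rng.choice(n, size=p, replace=False)
    K_np = rbf_kernel(X, X[idx], bandwidth)        # n x p block of columns
    K_pp = K_np[idx]                               # p x p block
    # Predictions are K(., X[idx]) @ alpha with alpha solving a p x p system.
    G = K_np.T @ K_np + lam * K_pp + 1e-10 * np.eye(p)
    alpha = np.linalg.solve(G, K_np.T @ y)
    return lambda Xtest: rbf_kernel(Xtest, X[idx], bandwidth) @ alpha

# Usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 1))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.standard_normal(2000)
predict = nystrom_krr(X, y, p=100, lam=1e-3, bandwidth=0.3, rng=0)
print(np.mean((predict(X) - y) ** 2))  # training mean squared error
```

Forming the p × p system costs O(np²) and solving it O(p³), so the overall cost stays sub-quadratic in n whenever p grows more slowly than n, which is the regime targeted by the degrees-of-freedom analysis discussed in the talk.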
Chapter list
Sharp analysis of low-rank kernel matrix approximations · 00:00
Don’t forget kernel methods! · 00:36
Don’t forget asymptotic analysis! · 00:46
Supervised machine learning with convex optimization (1) · 01:14
Supervised machine learning with convex optimization (2) · 02:09
Supervised machine learning with convex optimization (3) · 03:05
Outline · 03:12
Supervised machine learning (1) · 04:11
Supervised machine learning (2) · 05:07
Supervised machine learning (3) · 05:59
Supervised machine learning (4) · 07:06
Supervised machine learning (5) · 07:28
Supervised machine learning (6) · 08:35
Supervised machine learning (7) · 09:15
Why kernels? (1) · 10:01
Why kernels? (2) · 10:55
Why kernels? (3) · 11:48
Why kernels? (4) · 12:35
Supervised learning with kernels · 13:25
Efficient algorithms for kernel machines (1) · 14:42
Efficient algorithms for kernel machines (2) · 16:21
Efficient algorithms for kernel machines (3) · 18:31
Efficient algorithms for kernel machines (4) · 20:07
Column sampling for kernel matrix approximation (1) · 20:48
Column sampling for kernel matrix approximation (2) · 22:08
Column sampling for kernel matrix approximation (3) · 23:25
Outline · 26:34
Kernel ridge regression · 26:56
Fixed design analysis of kernel ridge regression (1) · 27:52
Fixed design analysis of kernel ridge regression (2) · 29:20
Degrees of freedom (1) · 29:51
Degrees of freedom (3) · 30:56
Degrees of freedom vs. rank of column sampling approximation (1) · 32:11
Degrees of freedom vs. rank of column sampling approximation (2) · 33:32
Generalization performance of column sampling (Bach, 2012) - 1 · 33:40
Generalization performance of column sampling (Bach, 2012) - 2 · 34:38
Generalization performance of column sampling · 35:54
Beyond least-square regression · 37:09
Optimal choice of the regularization parameter λ (1) · 38:50
Optimal choice of the regularization parameter λ (2) · 40:21
Optimization algorithms with column sampling · 42:21
Simulations on synthetic examples · 43:36
Simulations on pumadyn datasets · 47:12
Conclusions · 47:12
References (1) · 50:22
References (2) · 50:24
References (3) · 50:25