Minimum Error Entropy Principle for Learning
Published on Aug 26, 2013 · 4409 views
Information theoretical learning introduces ideas from information theory into machine learning. Minimum error entropy is a principle of information theoretical learning.
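As background for the description above: the minimum error entropy (MEE) principle fits a model by minimizing Renyi's quadratic entropy of the prediction errors, which is usually estimated with a Parzen window and minimized by maximizing the so-called information potential. The sketch below illustrates this idea for a linear model; the bandwidth, step size, and model class are illustrative choices, not the talk's specific setup.

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Parzen-window estimate of the information potential
    V(e) = (1/N^2) * sum_ij G(e_i - e_j), with a Gaussian kernel.
    Renyi's quadratic entropy of the errors is H2(e) = -log V(e),
    so minimizing error entropy means maximizing V."""
    e = np.asarray(errors, dtype=float)
    d = e[:, None] - e[None, :]
    # Gaussian kernel with effective bandwidth sigma*sqrt(2)
    # (the convolution of two Parzen kernels of width sigma)
    k = np.exp(-d**2 / (4 * sigma**2)) / (2 * sigma * np.sqrt(np.pi))
    return k.mean()

def mee_linear_fit(X, y, sigma=1.0, lr=0.1, steps=500):
    """Toy gradient ascent on the information potential of the
    residuals e = y - X @ w. Illustration only: MEE is shift-invariant
    in the errors, so in general a bias would be fixed afterwards
    (e.g. by centering the residuals)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        e = y - X @ w
        d = e[:, None] - e[None, :]                       # pairwise error differences
        k = np.exp(-d**2 / (4 * sigma**2)) / (2 * sigma * np.sqrt(np.pi))
        # dV/dw = mean_ij  k_ij * d_ij / (2 sigma^2) * (x_i - x_j)
        xdiff = X[:, None, :] - X[None, :, :]
        grad = ((k * d / (2 * sigma**2))[:, :, None] * xdiff).mean(axis=(0, 1))
        w += lr * grad                                    # ascent on V = descent on entropy
    return w
```

Constant errors have zero spread and hence minimal entropy, so `information_potential` is maximized there; the gradient ascent pushes the residuals toward such a concentrated distribution.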
Chapter list
Minimum error entropy principle for learning (00:00)
Outline of the Talk (00:06)
Least squares regression and ERM (00:25)
Least squares generalization error (01:42)
Error analysis (04:07)
Regularized least squares regression and kernel PCA (06:23)
Principal component analysis (PCA) (07:30)
Kernel principal components (08:49)
Regularized kernel PCA - 1 (12:57)
Regularized kernel PCA - 2 (13:25)
Properties induced by concave penalties - 1 (14:51)
Properties induced by concave penalties - 2 (16:52)
Simulation on MHC-peptide binding data - 1 (19:07)
Simulation on MHC-peptide binding data - 2 (22:11)
Simulation on MHC-peptide binding data - 3 (22:50)
Simulation on MHC-peptide binding data - 4 (24:33)
Simulation on MHC-peptide binding data - 5 (25:10)
Simulation on MHC-peptide binding data - 6 (25:46)
Simulation on MHC-peptide binding data - 7 (27:08)
Minimum error entropy (MEE) principle and kernel approximation - 1 (27:54)
Minimum error entropy (MEE) principle and kernel approximation - 2 (33:18)
Minimum error entropy (MEE) principle and kernel approximation - 3 (33:41)
MEE algorithm with large parameter - 1 (36:33)
MEE algorithm with large parameter - 2 (38:06)
Key feature for large parameter (38:53)
MEE algorithm with small parameter (39:17)
Positive result on entropy consistency (40:23)
Regression consistency for homoskedastic models (41:04)
Fourier analysis for homoskedastic models - 1 (41:30)
Fourier analysis for homoskedastic models - 2 (41:36)
Fourier analysis for heteroskedastic models - 1 (41:38)
Fourier analysis for heteroskedastic models - 2 (43:06)
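Several chapters above concern regularized kernel PCA. As background, here is a minimal sketch of the standard (unregularized) kernel PCA step those chapters build on: form an RBF kernel matrix, center it in feature space, and project onto its leading eigenvectors. The kernel choice and parameter values are illustrative assumptions, not the talk's.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Textbook kernel PCA with an RBF (Gaussian) kernel.
    Returns the projections of the training points onto the
    leading kernel principal components."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    one = np.ones((n, n)) / n
    # Center the kernel matrix (equivalent to centering in feature space)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)                 # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]     # take the largest
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors so feature-space components have unit norm
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas
```

The regularized variant discussed in the talk adds a penalty (e.g. a concave one, per the "Properties induced by concave penalties" chapters) to this baseline eigenproblem; the sketch shows only the unpenalized step.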