Theory of Optimal Learning Machines
Published on May 15, 2019 · 248 Views
Chapter list
Theory of Optimal Learning Machines (00:00)
I come from QLS@ICTP (00:37)
The behaviour of living systems relies on the efficiency of the representations they form of their environment* - 1 (01:19)
The behaviour of living systems relies on the efficiency of the representations they form of their environment* - 2 (02:54)
Main Results (05:27)
Here is the plan (07:01)
What is noise? The Asymptotic Equipartition Property - 1 (08:29)
What is noise? The Asymptotic Equipartition Property - 2 (10:00)
What is noise? The Asymptotic Equipartition Property - 3 (10:08)
What is noise? The Asymptotic Equipartition Property - 4 (11:48)
What is noise? The Asymptotic Equipartition Property - 5 (12:10)
What is noise? The Asymptotic Equipartition Property - 6 (13:37)
What is noise? The Asymptotic Equipartition Property - 7 (14:40)
What is noise? The Asymptotic Equipartition Property - 8 (15:10)
What is noise? The Asymptotic Equipartition Property - 9 (16:23)
What is NOT noise? Maximally Informative Representation (16:36)
Set of typical values of x̃ - 1 (17:29)
Set of typical values of x̃ - 2 (17:51)
Typical properties of Optimal Learning Machines (18:50)
Signal/noise trade-off - 1 (22:05)
Signal/noise trade-off - 2 (23:42)
Maximally Informative Samples (24:55)
Minimally sufficient representations - 1 (25:36)
Minimally sufficient representations - 2 (26:28)
H[s] = Resolution, H[k] = Relevance - 1 (27:08)
H[s] = Resolution, H[k] = Relevance - 2 (27:28)
Resolution - Relevance trade-off - 1 (27:43)
Resolution - Relevance trade-off - 2 (28:54)
Maximally informative samples look critical - 1 (29:33)
Maximally informative samples look critical - 2 (30:07)
Maximally informative samples look critical - 3 (30:59)
Exponential energy density is equivalent to Statistical Criticality (31:30)
How does an OLM differ from a glass of water? - 1 (32:02)
How does an OLM differ from a glass of water? - 2 (33:47)
A generic model of a complex system - 1 (35:11)
A generic model of a complex system - 2 (40:17)
y=1: Δc = Zipf's law (40:29)
Does this really work in practice? (41:15)
Zipf's law in efficient representations (41:19)
Deep Neural Networks as Optimal Learning Machines (42:45)
Maximally informative representations in deep layers (MNIST) (43:47)
Zipf = optimal generalisation (44:49)
Evolution of W(E) during learning in RBM (45:26)
Universal codes in Minimum Description Length (46:39)
Identifying relevant variables (47:22)
Searching for relevant neurons in the brain - 1 (47:38)
Searching for relevant neurons in the brain - 2 (50:48)
Multi-scale Relevance - 1 (51:05)
Multi-scale Relevance - 2 (52:13)
Multi-scale Relevance - 3 (53:34)
Multi-scale Relevance - 4 (54:47)
Identifying relevant positions in proteins: Critical Variable Selection (55:23)
Challenges in statistical learning - 1 (55:39)
Challenges in statistical learning - 2 (56:13)
Summary (57:51)