Theoretical Neuroscience and Deep Learning Theory
Published on Jul 27, 2017 · 6624 views
Chapter list
Theoretical Neuroscience and Deep Learning Theory (00:00)
Theoretical neuroscience in the disciplinary landscape (00:19)
Neural circuits and behavior: theory, computation and experiment (01:57)
Motivations for an alliance between theoretical neuroscience and theoretical machine learning (02:23)
Talk Outline (05:38)
The shape of things to come… on monkeys and models - 1 (07:03)
The shape of things to come… on monkeys and models - 2 (11:45)
Deep neural network models of the retinal response to natural scenes (13:54)
A brief tour of the retina (14:18)
Linear-Nonlinear models (14:54)
How well do linear-nonlinear models explain the retina in natural vision? (15:05)
Modeling ganglion cells with convolutional neural networks (CNNs) - 1 (15:31)
Modeling ganglion cells with convolutional neural networks (CNNs) - 2 (15:47)
Modeling ganglion cells with convolutional neural networks (CNNs) - 3 (16:01)
Modeling ganglion cells with convolutional neural networks (CNNs) - 4 (16:13)
Modeling ganglion cells with convolutional neural networks (CNNs) - 5 (16:19)
Convolutional neural network model (16:54)
CNNs approach retinal reliability (17:02)
CNNs trained on less data outperform simpler models on more data (17:36)
Features bear striking resemblance to internal structure in retina (17:51)
Most retinal neurons have sub-Poisson variability (while LNP models are Poisson) (18:51)
We can inject Gaussian noise into each hidden unit of our CNN model (18:58)
Model has lower variance than data (19:00)
However model uncertainty has same scaling relationship as the retina (19:01)
Capturing contrast adaptation from retinal responses to natural scenes (19:05)
Summary (19:42)
Talk Outline (20:00)
Some of the theoretical puzzles of deep learning (20:16)
A Mathematical Theory of Semantic Development (21:21)
What is “semantic cognition”? (21:40)
Psychophysical tasks that probe semantic cognition (22:32)
The project that really keeps me up at night (22:55)
Semantic Cognition Phenomena (24:17)
Evolution of internal representations (24:55)
A Network for Semantic Cognition (25:03)
Categorical representations in human and monkey (26:10)
Categorical representations in human and monkey (27:14)
Evolution of internal representations (27:20)
Theoretical questions (27:27)
Nontrivial learning dynamics (28:25)
Problem formulation (29:22)
Learning dynamics - 1 (29:47)
Learning dynamics - 3 (29:54)
Decomposing input-output correlations (30:37)
Analytical learning trajectory (31:09)
Origin of the rapid learning transition: saddle point dynamics in synaptic weight space (32:31)
Take home messages, so far (33:28)
Learning hierarchical structure (34:01)
A hierarchical branching diffusion process (35:31)
Object analyzer vectors (35:39)
Singular values (37:07)
Progressive differentiation - 1 (37:13)
Progressive differentiation - 2 (37:24)
Progressive differentiation - 3 (37:34)
Connecting hierarchical generative models and neural network learning (38:06)
Other work (41:05)
What is a category and what makes it “coherent?” - 1 (42:22)
What is a category and what makes it “coherent?” - 2 (44:32)
What is a category and what makes it “coherent?” - 3 (45:06)
What is a category and what makes it “coherent?” - 4 (45:47)
What is a category and what makes it “coherent?” - 5 (45:49)
What is a category and what makes it “coherent?” - 6 (46:25)
What is a category and what makes it “coherent?” - 7 (46:34)
What is a category and what makes it “coherent?” - 8 (50:51)
Towards a theory of deep learning dynamics (50:53)
Dynamic Isometry Initialization (50:59)
Some of the theoretical puzzles of deep learning (51:12)
High dimensional nonconvex optimization (51:40)
General properties of error landscapes in high dimensions (52:35)
Properties of Error Landscapes on the Synaptic Weight Space of a Deep Neural Net (56:09)
Performance of saddle free Newton in learning deep neural networks (59:10)
How to descend saddle points (01:02:18)
Some of the theoretical puzzles of deep learning (01:02:37)
A theory of deep neural expressivity through transient chaos (01:03:46)
Seminal works on the expressive power of depth - 1 (01:04:05)
Seminal works on the expressive power of depth - 2 (01:05:03)
Seminal works on the expressive power of depth - 3 (01:06:10)
Questions (01:06:47)
Limitations of prior work (01:07:30)
Another perspective on the advantage of depth: disentangling (01:08:28)
A maximum entropy ensemble of deep random networks (01:08:36)
Emergent, deterministic signal propagation in random neural networks (01:09:09)
Propagation of two points through a deep network - 1 (01:11:07)
Propagation of a manifold through a deep network - 2 (01:12:44)
Propagation of a manifold through a deep network - 3 (01:13:02)
Propagation of a manifold through a deep network - 4 (01:14:32)
Riemannian geometry I: Euclidean length (01:15:07)
Riemannian geometry II: Extrinsic Gaussian Curvature (01:15:19)
Curvature propagation: theory and experiment (01:15:26)
Exponential expressivity is not achievable by shallow nets (01:15:30)
Boundary disentangling: theory - 1 (01:16:35)
Boundary disentangling: theory - 2 (01:16:42)
Boundary disentangling: experiment (01:16:46)
Summary (01:16:48)
Some of the theoretical puzzles of deep learning (01:16:57)
Statistical mechanics of high dimensional data analysis (01:17:08)
Optimal inference in high dimensions - 1 (01:18:40)
Optimal inference in high dimensions - 2 (01:21:40)
More generally: upper bounds on generalization error (01:27:15)
Recent observations on generalization in deep nets (01:27:56)
Talk Outline (01:29:37)
There are more things in heaven and earth… - 1 (01:29:47)
There are more things in heaven and earth… - 2 (01:30:57)
There are more things in heaven and earth… - 3 (01:31:27)
Continual learning through synaptic intelligence - 1 (01:32:04)
Continual learning through synaptic intelligence - 2 (01:32:09)
Summary (01:32:19)
References (01:32:23)