Theoretical neuroscience and deep learning theory
published: Aug. 23, 2016, recorded: August 2016, views: 9417
Both neuroscience and machine learning are experiencing a renaissance in which fundamental technological changes are driving qualitatively new phases of conceptual progress. In neuroscience, new methods for probing and perturbing multi-neuronal dynamics during behavior have led to the ability to create complex neural network models of how behavior emerges from the brain. In machine learning, new methods and computing infrastructure for training neural networks have led to the creation of deep neural networks capable of solving complex computational problems. These advances are laying the groundwork for a new alliance between neuroscience and machine learning. A key dividend of this alliance would be the genesis of new unified theories underlying the learning dynamics, expressive power, generalization capability, and interpretability of both biological and artificial networks. Ideally such theories could both yield scientific insight into the operation of biological and artificial neural networks and provide engineering design principles for the creation of new artificial neural networks. Here we outline a roadmap for this new alliance, and discuss several vignettes from its beginnings, including how neural network learning dynamics can model infant semantic learning, how dynamically critical weight initializations can lead to rapid training, and how the expressive power of deep neural networks can have its origins in the theory of chaos. We also speculate on how several elements of neurobiological reality, as yet not extensively employed by neural network practitioners, could aid in the design of future artificial neural networks. Such elements include structured neural network architectures motivated by the canonical cortical microcircuit, nested neural loops with a diversity of time scales, and complex synapses with rich internal dynamics.
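One concrete instance of the "dynamically critical weight initialization" vignette is orthogonal initialization: unlike variance-scaled Gaussian weights, a product of orthogonal matrices preserves signal norm exactly through a deep linear network, keeping forward (and backward) signals from exploding or vanishing. The sketch below is illustrative, not the speaker's code; the network is linear, and the `orthogonal_init` helper and all parameter values are assumptions chosen for the demonstration.

```python
import numpy as np

def orthogonal_init(shape, rng):
    """Draw a random orthogonal matrix via QR decomposition (hypothetical helper)."""
    a = rng.standard_normal(shape)
    q, r = np.linalg.qr(a)
    # Fix column signs so the result is uniformly distributed over orthogonal matrices.
    q *= np.sign(np.diag(r))
    return q

rng = np.random.default_rng(0)
depth, width = 50, 128          # illustrative sizes, not from the lecture

# Unit-norm input signal.
x = rng.standard_normal(width)
x /= np.linalg.norm(x)

h_orth = x.copy()
h_gauss = x.copy()
for _ in range(depth):
    W_orth = orthogonal_init((width, width), rng)
    W_gauss = rng.standard_normal((width, width)) / np.sqrt(width)  # variance-scaled Gaussian
    h_orth = W_orth @ h_orth      # norm preserved exactly at every layer
    h_gauss = W_gauss @ h_gauss   # norm preserved only on average; it drifts layer by layer

print(np.linalg.norm(h_orth))   # 1.0 up to float rounding, at any depth
print(np.linalg.norm(h_gauss))  # fluctuates around 1; drift grows with depth
```

In a deep linear network the forward pass is a single matrix product, so the orthogonal case preserves the norm exactly; with nonlinearities one instead tunes the initialization so the input-output Jacobian sits at the edge between contraction and expansion, which is the "dynamically critical" regime the abstract refers to.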
Download slides: deeplearning2016_ganguli_theoretical_neuroscience_01.pdf (26.5 MB)