published: Aug. 23, 2016, recorded: August 2016, views: 32095
In this lecture, I will cover the basic concepts behind feedforward neural networks. The talk will be split into two parts. In the first part, I'll cover forward propagation and backpropagation in neural networks. Specifically, I'll discuss the parameterization of feedforward nets, the most common types of units, the capacity of neural networks, and how to compute the gradients of the training loss for classification with neural networks. In the second part, I'll discuss the final components necessary to train neural networks by gradient descent, and then cover more recent ideas that are now commonly used for training deep neural networks. I will thus present different variants of gradient descent algorithms, dropout, batch normalization, and unsupervised pretraining.
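To make the forward propagation and backpropagation ideas mentioned above concrete, here is a minimal NumPy sketch (not taken from the lecture slides) of a single-hidden-layer feedforward classifier with tanh hidden units and a softmax output. The dimensions, learning rate, and function names are illustrative assumptions only; it shows the forward pass, the gradients of the cross-entropy loss obtained by backpropagation, and one gradient descent update.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    a1 = W1 @ x + b1            # hidden pre-activation
    h1 = np.tanh(a1)            # hidden activation
    a2 = W2 @ h1 + b2           # output pre-activation
    p = np.exp(a2 - a2.max())   # softmax, shifted for numerical stability
    p /= p.sum()
    return h1, p

def backward(x, y, h1, p, W2):
    # Gradient of -log p[y] w.r.t. the output pre-activation is p - onehot(y).
    grad_a2 = p.copy()
    grad_a2[y] -= 1.0
    grad_W2 = np.outer(grad_a2, h1)
    grad_b2 = grad_a2
    grad_h1 = W2.T @ grad_a2
    grad_a1 = grad_h1 * (1.0 - h1 ** 2)   # tanh'(a) = 1 - tanh(a)^2
    grad_W1 = np.outer(grad_a1, x)
    grad_b1 = grad_a1
    return grad_W1, grad_b1, grad_W2, grad_b2

# Toy sizes assumed for illustration: 4 inputs, 5 hidden units, 3 classes.
W1, b1 = 0.1 * rng.standard_normal((5, 4)), np.zeros(5)
W2, b2 = 0.1 * rng.standard_normal((3, 5)), np.zeros(3)
x, y = rng.standard_normal(4), 2          # one example and its class label

h1, p = forward(x, W1, b1, W2, b2)
gW1, gb1, gW2, gb2 = backward(x, y, h1, p, W2)

# One gradient descent step on this single example.
lr = 0.1
W1 -= lr * gW1; b1 -= lr * gb1
W2 -= lr * gW2; b2 -= lr * gb2
```

In practice the same update would be applied over mini-batches, with the additional ingredients covered in the second part of the talk (learning-rate schedules, dropout, batch normalization, and so on).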
Download slides: deeplearning2016_larochelle_neural_networks.pdf (25.0 MB)