Dropout: A simple and effective way to improve neural networks
published: January 16, 2013; recorded: December 2012; views: 54,546
Description
In a large feedforward neural network, overfitting can be greatly reduced by randomly omitting half of the hidden units on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random “dropout” gives big improvements on many benchmark tasks and sets new records for object recognition and molecular activity prediction.

The Merck Molecular Activity Challenge was a contest hosted by Kaggle and sponsored by the pharmaceutical company Merck. The goal of the contest was to predict whether molecules were highly active towards a given target molecule. The competition data included a large number of numerical descriptors generated from the chemical structures of the input molecules, along with activity data for fifteen different biologically relevant targets. An accurate model has numerous applications in the drug discovery process. George will discuss his team's first-place solution, based on neural networks trained with dropout.
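To make the mechanism concrete, here is a minimal NumPy sketch of dropout on one hidden layer. The single layer, the ReLU non-linearity, and the parameter names are illustrative assumptions, not details from the lecture; only the drop probability of 0.5 and the train/test asymmetry come from the description above.

    import numpy as np

    rng = np.random.default_rng(0)

    def forward(x, W1, b1, W2, b2, train=True, p_drop=0.5):
        # One hidden layer with dropout applied to the hidden units.
        h = np.maximum(0.0, x @ W1 + b1)   # hidden activations (ReLU is an assumption)
        if train:
            # On each training case, omit each hidden unit with probability p_drop.
            mask = rng.random(h.shape) >= p_drop
            h = h * mask
        else:
            # At test time, keep every unit but scale its activation by (1 - p_drop),
            # approximating an average over the exponentially many "thinned" networks.
            h = h * (1.0 - p_drop)
        return h @ W2 + b2

Note that most modern frameworks implement the equivalent "inverted" variant, scaling by 1 / (1 - p_drop) at training time so that no rescaling is needed at test time.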