Beyond Seq2Seq with Augmented RNNs
published: Aug. 23, 2016, recorded: August 2016
Sequence-to-sequence models in their most basic form, following an encoder-decoder paradigm, compress the source sequence into a single vector representation, which the decoder then expands into a target sequence. This lecture will discuss the problems with this compressive approach, some solutions involving attention and external differentiable memory, and issues faced by these extensions. Motivating examples from the field of natural language understanding will be provided throughout.
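The contrast the abstract draws can be made concrete with a minimal sketch, not taken from the lecture: a vanilla seq2seq decoder sees only the encoder's final state, whereas an attention-based decoder re-reads all encoder states at every step. The dimensions, the dot-product scoring function, and all variable names below are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the lecturer's code): dot-product
# attention over encoder states, versus keeping only a single summary vector.
import numpy as np

rng = np.random.default_rng(0)
d_h = 8      # hidden size (assumed for illustration)
T_src = 5    # source sequence length (assumed)

# Encoder hidden states, one row per source token. A basic seq2seq model
# would discard all but H[-1], the single compressed summary vector.
H = rng.standard_normal((T_src, d_h))
summary = H[-1]  # the "compressive" representation the lecture critiques

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, H):
    """Dot-product attention: score each encoder state against the decoder
    query, normalize to a distribution, and return the weighted average."""
    scores = H @ query         # (T_src,) one score per source position
    weights = softmax(scores)  # attention distribution over the source
    context = weights @ H      # (d_h,) convex combination of encoder states
    return context, weights

# One decoder step: instead of reusing the fixed summary vector, the
# decoder recomputes a context from the whole source at each step.
query = rng.standard_normal(d_h)  # stand-in for the current decoder state
context, weights = attend(query, H)
print("attention weights:", np.round(weights, 3))
print("context shape:", context.shape)
```

The same read operation, applied to a learnable memory matrix rather than encoder states, is the basic mechanism behind the external differentiable memories the lecture goes on to cover.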
Download slides: deeplearning2016_grefenstette_augmented_rnn_01.pdf (3.3 MB)