
Natural Language Processing

Published on Jul 27, 2017 · 4,362 views


Chapter list

Structure and Grounding in Natural Language (00:00)
Challenges for language and Artificial Intelligence (00:34)
This Lecture - 1 (02:28)
This Lecture - 2 (03:28)
Inductive biases in Recurrent Neural Networks (03:30)
Why do we believe that language has syntax? - 1 (04:45)
Why do we believe that language has syntax? - 2 (07:38)
Why do we believe that language has syntax? - 3 (08:14)
Why do we believe that language has syntax? - 4 (08:51)
Do RNN language models learn recursive structure? - 1 (09:59)
Do RNN language models learn recursive structure? - 2 (12:52)
A Recurrent Network for Generating Trees (15:06)
Recurrent Neural Network Grammars (16:41)
Example Derivation - 1 (17:55)
Example Derivation - 2 (18:12)
Is this a coherent probabilistic model of trees? (21:34)
Modeling the action conditional probabilities (22:33)
Syntactic composition (25:16)
English PTB (Parsing) (28:19)
English Language Modeling (29:22)
Training (32:23)
RNNGs Summary (33:01)
This Lecture - 3 (34:22)
Inducing hierarchical structure from distant reward - 1 (34:55)
Inducing hierarchical structure from distant reward - 2 (36:07)
Recurrent Neural Network Encoder (36:49)
Learning language structure from distant rewards (37:15)
Recursive Neural Network Encoder (37:26)
Convolutional Encoder (38:15)
Shift-Reduce TreeLSTM: Example Derivation (38:43)
Shift-Reduce Parsing (Aho and Ullman, 1972) (40:54)
Reinforcement Learning - 1 (41:23)
Reinforcement Learning - 2 (41:50)
Sentiment Analysis Results (42:54)
Learned Example - 1 (47:02)
Learned Example - 2 (47:54)
This Lecture - 4 (48:54)
The Paradigm Problem - 1 (50:15)
The Paradigm Problem - 2 (54:27)
DeepMind Lab (01:00:36)
DeepMind Lab - 2 (01:01:10)
DeepMind Lab - 3 (01:01:59)
Language in DeepMind Lab - 1 (01:02:35)
Language in DeepMind Lab - 2 (01:05:14)
Language in DeepMind Lab: The Lexicon (01:05:33)
A trained agent following instructions (01:06:33)
A basic agent based on the A3C algorithm (01:08:54)
Additional (similar to UNREAL) auxiliary objectives - 1 (01:09:53)
Additional (similar to UNREAL) auxiliary objectives - 2 (01:10:27)
Unsupervised learning makes word learning possible (01:11:59)
DeepMind Lab - 4 (01:12:46)
And provides insight into agents' 'thoughts'... (01:13:08)
DeepMind Lab - 5 (01:16:41)
Curriculum Learning for Complex Tasks - 1 (01:17:02)
Curriculum Learning for Complex Tasks - 2 (01:17:47)
Agents learn to generalise word composition (01:18:30)
Decompose before re-compose (01:19:04)
Apply modifiers and predicates to novel objects (01:19:23)
But sample complexity remains an issue (01:19:54)
Knowing some words makes learning faster (01:20:32)
This mirrors the ‘Vocabulary Spurt’ observed in infants (01:21:05)
Summary (01:22:25)