Natural Language Understanding
Published on Jul 27, 2017 · 10411 Views
Chapter list
Natural Language Processing, Language Modelling and Machine Translation (00:00)
Natural Language Processing (00:40)
Language models (03:37)
History: cryptography (05:36)
Language models - 1 (06:13)
Language models - 2 (07:26)
Language models - 3 (08:26)
Evaluating a Language Model (10:08)
Language Modelling Data - 1 (10:59)
Language Modelling Data - 2 (11:48)
Language Modelling Overview (13:50)
N-Gram Models: The Markov Chain Assumption (14:51)
Outline (14:51)
N-Gram Models: Estimating Probabilities (15:37)
N-Gram Models: Back-Off (16:09)
N-Gram Models: Interpolated Back-Off (17:16)
Provisional Summary (18:48)
Outline - 1 (21:56)
Neural Language Models - 1 (22:02)
Neural Language Models - 2 (22:19)
Neural Language Models: Sampling - 1 (23:42)
Neural Language Models: Sampling - 2 (23:59)
Neural Language Models: Training - 1 (24:17)
Neural Language Models: Training - 2 (24:31)
Neural Language Models: Training - 3 (24:33)
Comparison with Count Based N-Gram LMs (25:57)
Outline - 2 (27:16)
Recurrent Neural Network Language Models - 1 (27:18)
Recurrent Neural Network Language Models - 2 (28:08)
Recurrent Neural Network Language Models - 3 (28:38)
Recurrent Neural Network Language Models - 4 (28:58)
Recurrent Neural Network Language Models - 5 (29:23)
Recurrent Neural Network Language Models - 6 (30:52)
Recurrent Neural Network Language Models - 7 (31:51)
Comparison with N-Gram LMs (31:53)
Language Modelling: Review (33:56)
Gated Units: LSTMs and GRUs (35:00)
Deep RNN LMs - 1 (35:36)
Deep RNN LMs - 2 (35:43)
Deep RNN LMs - 3 (35:45)
Deep RNN LMs - 4 (36:28)
Deep RNN LM - 1 (36:44)
Deep RNN LM - 2 (36:52)
Scaling: Large Vocabularies - 1 (37:37)
Scaling: Large Vocabularies - 2 (40:12)
Scaling: Large Vocabularies - 3 (40:52)
Scaling: Large Vocabularies - 4 (42:02)
Scaling: Large Vocabularies - 5 (42:31)
Scaling: Large Vocabularies - 6 (43:38)
Scaling: Large Vocabularies - 7 (45:02)
Sub-Word Level Language Modelling (46:22)
Regularisation: Dropout - 1 (50:25)
Regularisation: Dropout - 2 (50:47)
Regularisation: Bayesian Dropout (Gal) (51:53)
Evaluation: hyperparameters are a confounding factor (52:34)
Summary (01:02:47)
Outline - 3 (01:03:13)
Intro to MT (01:03:16)
Parallel Corpora (01:04:18)
MT History: Statistical MT at IBM - 1 (01:05:29)
MT History: Statistical MT at IBM - 2 (01:06:58)
Models of translation - 1 (01:08:08)
Models of translation - 2 (01:09:27)
IBM Model 1: The first translation attention model! (01:09:42)
Encoder-Decoders (01:10:50)
Recurrent Encoder-Decoders for MT - 1 (01:11:40)
Recurrent Encoder-Decoders for MT - 2 (01:12:25)
Recurrent Encoder-Decoders for MT - 3 (01:13:02)
Attention Models for MT - 1 (01:13:06)
Attention Models for MT - 2 (01:13:51)
Attention Models for MT - 3 (01:14:04)
Attention Models for MT - 4 (01:14:12)
Attention Models for MT - 5 (01:14:48)
Attention Models for MT - 6 (01:14:54)
Returning to the Noisy Channel - 1 (01:15:45)
Returning to the Noisy Channel - 2 (01:17:29)
Decoding (01:18:39)
Decoding: Direct vs. Noisy Channel - 1 (01:18:59)
Decoding: Direct vs. Noisy Channel - 2 (01:19:24)
Decoding: Noisy Channel Model (01:19:55)
Segment to Segment Neural Transduction (01:20:42)
Noisy Channel Decoding (01:22:25)
Relative Performance (01:23:13)
The End (01:24:06)