Learning Deep Generative Models

Published on Aug 23, 2016 · 14,465 views

In this tutorial I will discuss the mathematical basics of many popular deep generative models, including Restricted Boltzmann Machines (RBMs), Deep Boltzmann Machines (DBMs), Helmholtz Machines, and Variational Autoencoders (VAEs).

Chapter list

Learning Deep Generative Models 00:00
Mining for Structure/1 00:02
Mining for Structure/2 00:46
Example: Understanding Images 01:01
Talk Roadmap 02:33
Restricted Boltzmann Machines 03:38
Learning Features 05:15
Model Learning/1 05:44
Model Learning/2 07:42
RBMs for Real-valued Data/1 08:22
RBMs for Real-valued Data/2 09:04
RBMs for Real-valued Data/3 09:10
RBMs for Word Counts/1 09:26
RBMs for Word Counts/2 10:00
Different Data Modalities 10:36
Product of Experts/1 11:24
Product of Experts 12:52
Deep Boltzmann Machines/1 12:58
Deep Boltzmann Machines/2 13:09
Model Formulation 13:48
Mathematical Formulation 15:00
Approximate Learning/1 15:48
Approximate Learning/2 16:04
Approximate Learning 17:23
Stochastic Approximation 18:05
Learning Algorithm 19:10
Variational Inference/1 21:26
Variational Inference/2 26:09
Variational Inference/3 27:13
Variational Inference/4 28:47
Good Generative Model?/1 28:49
Good Generative Model?/2 29:15
Good Generative Model?/3 29:22
Good Generative Model?/4 29:30
Good Generative Model?/5 29:36
Good Generative Model?/6 30:11
Handwriting Recognition 30:40
Generative Model of 3-D Objects 30:47
3-D Object Recognition 31:18
Learning Hierarchical Representations 31:38
Talk Roadmap 31:51
Helmholtz Machines 31:54
Various Deep Generative Models 34:12
Motivating Example 35:04
Overall Model 35:55
Flipping Colors 36:34
Flipping Backgrounds 37:06
Flipping Objects 37:31
Qualitative Comparison 38:00
Variational Lower-Bound 38:15
Novel Scene Compositions 38:45
Overall Model 40:21
Variational Autoencoders (VAEs) 41:18
VAE: Example 42:43
Recognition Network 43:16
Variational Bound 44:37
Reparameterization Trick/1 46:43
Reparameterization Trick/2 47:48
Computing the Gradients/1 48:49
Computing the Gradients/2 50:27
VAE: Assumptions 51:46
Importance Weighted Autoencoders/1 52:25
Importance Weighted Autoencoders/2 54:10
Tighter Lower Bound 54:51
Computing the Gradients 56:12
IWAEs vs. VAEs/1 56:42
IWAE: Intuition/1 57:53
IWAE: Intuition/2 58:46
Computation with IWAEs 59:27
Two Architectures 01:00:08
MNIST Results/1 01:01:14
IWAEs vs. VAEs/2 01:01:23
IWAEs vs. VAEs/3 01:01:27
MNIST Results/2 01:02:02
Key Observation 01:02:39
Talk Roadmap 01:05:56
Caption Generation/1 01:06:17
Encode-Decode Framework 01:06:36
Caption Generation/2 01:06:48
Caption Generation/3 01:07:10
Caption Generation with Visual Attention 01:07:31
Visual Attention 01:07:57
Improving Action Recognition 01:08:18
Recurrent Attention Model/1 01:08:29
Recurrent Attention Model/2 01:09:46
Model Definition 01:10:25
Variational Learning/1 01:11:16
Variational Learning/2 01:11:59
Variational Learning/3 01:12:52
Sampling from the Prior 01:13:09
Key Observation 01:13:18
Maximizing Marginal Likelihood 01:14:20
Comparing the Two Estimators 01:14:56
Another Key Observation/1 01:15:51
Another Key Observation/2 01:16:29
Relationship To Helmholtz Machines/1 01:16:39
Relationship To Helmholtz Machines/2 01:16:59
The Wake-Sleep Recurrent Attention Model 01:18:18
Training Inference Network 01:18:33
MNIST Attention Demo 01:18:58
Hard vs. Soft Attention 01:19:25
Talk Roadmap 01:20:07
(Some) Open Problems/1 01:20:17
(Some) Open Problems/2 01:20:28
Sequence to Sequence Learning 01:21:14
Skip-Thought Model/1 01:21:26
Skip-Thought Model/2 01:21:40
Learning Objective 01:21:52
Semantic Relatedness 01:22:06
Semantic Relatedness Recurrent Neural Network 01:22:31
Neural Story Telling 01:24:51
Hierarchical RNNs/1 01:25:37
Hierarchical RNNs/2 01:25:53
Atari Games 01:26:20
Actor-Mimic Net in Action 01:26:39
Transfer Learning 01:26:42
Summary 01:26:46
Thank You 01:27:35