Visual features II
Published on Sep 13, 2015 · 4332 views
Chapter list
Visual features II (00:00)
What next? (01:28)
Vision beyond object recognition (02:33)
Random dot stereograms (03:39)
Some things are hard to infer from still images (04:20)
There are things images cannot teach you (06:08)
Learn relations by concatenating images? - 1 (07:26)
Learn relations by concatenating images? - 2 (11:01)
wᵀx? (12:38)
Families of manifolds (14:24)
Bi-linear models - 1 (16:24)
Bi-linear models - 2 (18:01)
Example: Gated Boltzmann machine (18:17)
Example: Gated autoencoder (18:49)
Multiplicative interactions (19:02)
Factored Gated Autoencoder (21:01) [see sketch below the chapter list]
Toy examples (24:51)
Learned filters wx - 1 (24:58)
Learned filters wx - 2 (25:10)
Rotation filters - 1 (25:54)
Rotation filters - 2 (26:10)
Filters learned from split-screen shifts - 2 (26:45)
Natural video filters - 1 (26:56)
Filters learned from split-screen shifts - 1 (27:49)
Natural video filters - 2 (27:54)
Understanding gating - 2 (32:46)
Understanding gating - 3 (34:27)
Orthogonal transformations decompose into rotations (35:25)
To detect the rotation angle, compute a 2-d inner product - 2 (36:36)
To detect the rotation angle, compute a 2-d inner product - 4 (38:26)
Understanding gating - 1 (41:34)
To detect the rotation angle, compute a 2-d inner product - 3 (41:57)
The aperture problem - 1 (42:15)
The aperture problem - 2 (42:42)
The aperture problem - 4 (43:27)
The aperture problem - 3 (44:35)
The aperture problem - 5 (45:13)
The aperture problem - 6 (45:15)
To detect the rotation angle, compute a 2-d inner product - 1 (46:42)
The aperture problem - 7 (47:17)
To detect the rotation angle, pool over 2-d inner products (50:25) [see derivation below the chapter list]
Action recognition 2011 (51:20)
Other applications (52:59)
Vanishing gradients (57:50)
Orthogonal weights create “dynamic memory” - 1 (01:00:36)
Orthogonal weights create “dynamic memory” - 2 (01:01:27)
Orthogonal weights create “dynamic memory” - 3 (01:01:44)
Why memory needs gating (01:03:23)
Predictive training (01:08:03)
sine waves - 1 (01:09:54)
sine waves - 2 (01:09:55)
sine waves - 3 (01:10:05)
sine waves - 4 (01:10:05)
The model learns rotational derivatives (01:10:11)
Learning higher-order derivatives (acceleration) - 1 (01:10:14)
Learning higher-order derivatives (acceleration) - 2 (01:10:15)
snap, crackle, pop (01:10:16)
Annealed teacher forcing (01:10:41)
chirps (01:10:42)
Harmonics (01:10:48)
NORB videos (01:10:48)
Multi-step prediction helps (01:10:51)
Recognizing accelerations (01:12:30)
bouncing balls (Mnih et al.; Sutskever et al.) (01:12:35)
Learned filters (01:12:35)
Gating units (01:15:21)
bouncing ball with occlusion (01:16:46)
Vanishing gradients - 1 (01:16:51)
A 2-d subspace - 1 (01:18:09)
Vanishing gradients - 2 (01:18:25)
Autoencoders learn negative biases (01:21:43)
Do autoencoders orthogonalize weights? (01:22:33)
A 2-d subspace - 2 (01:22:46)
Zero-bias ReLUs are hard to beat (01:22:52)
The energy function of a ReLU autoencoder - 1 (01:22:55)
The energy function of a ReLU autoencoder - 2 (01:22:57)
Truncated rectified unit (Trec) (01:22:57)
Truncated linear unit (TLin) (01:24:01) [see sketch below the chapter list]
ZAE features from tiny images (Torralba et al.) (01:24:03)
Perm-invariant CIFAR-10 (01:24:50)
Perm-invariant CIFAR-10 patches (01:25:42)
Rotation filters - 1 (01:25:43)
Rotation filters - 2 (01:25:44)
Deep fully-connected CIFAR-10 - 1 (01:25:50)
Deep fully-connected CIFAR-10 - 2 (01:25:51)
Deep fully-connected CIFAR-10 - 3 (01:26:09)
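
The chapters from "Example: Gated autoencoder" (18:49) through "Factored Gated Autoencoder" (21:01) cover the talk's central model: mapping units that relate two images through multiplicative interactions. Below is a minimal NumPy sketch of the forward pass; the variable names (U, V, W), the dimensions, and the tanh nonlinearity are illustrative assumptions, not taken from the slides.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes: D pixels per image, F factors, H mapping units.
    D, F, H = 64, 32, 16

    # Factored weight matrices (names are assumptions, not from the talk).
    U = rng.normal(0.0, 0.1, (D, F))  # filters applied to the first image x
    V = rng.normal(0.0, 0.1, (D, F))  # filters applied to the second image y
    W = rng.normal(0.0, 0.1, (F, H))  # pools factor products into mapping units

    def encode(x, y):
        # Multiplicative ("gated") interaction: mapping units see products of
        # filter responses, so they respond to the relation between x and y
        # rather than to the content of either image alone.
        return np.tanh(W.T @ ((U.T @ x) * (V.T @ y)))

    def reconstruct_y(h, x):
        # Decoding gates the factor responses of x by the decoded mapping,
        # i.e. it applies the inferred transformation to x.
        return V @ ((W @ h) * (U.T @ x))

    # Toy pair related by a circular shift.
    x = rng.normal(size=D)
    y = np.roll(x, 3)
    h = encode(x, y)
    loss = np.sum((reconstruct_y(h, x) - y) ** 2)  # training minimizes this

With random weights the reconstruction error is of course large; trained on shifted or rotated image pairs, the columns of U and V come out as phase-shifted filter pairs, which is presumably what the "Rotation filters" and "Natural video filters" chapters show.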
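The chapters from "Orthogonal transformations decompose into rotations" (35:25) through "To detect the rotation angle, pool over 2-d inner products" (50:25) rest on a standard piece of linear algebra. A sketch of the argument in my own notation (the talk's exact formulation may differ): every orthogonal transformation block-diagonalizes into 2-d rotations, and within each invariant 2-d subspace the rotation angle can be read off an inner product, independently of image content.

    \begin{align*}
      y &= Qx, \qquad Q = U\,\mathrm{blkdiag}\big(R(\theta_1),\dots,R(\theta_{D/2})\big)\,U^{\top} \\
      p_i &= U_i^{\top}x, \qquad q_i = U_i^{\top}y = R(\theta_i)\,p_i
        \quad \text{(2-d projections onto the $i$-th invariant subspace)} \\
      \cos\theta_i &= \frac{\langle p_i, q_i\rangle}{\lVert p_i\rVert\,\lVert q_i\rVert}
    \end{align*}

The angle estimate needs no knowledge of x itself, but it is undefined when ‖pᵢ‖ ≈ 0, and a single 2-d projection constrains the transformation only locally; that is the aperture problem of the surrounding chapters, and pooling the inner products over many subspaces is what resolves the ambiguity.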
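The chapters from "Zero-bias ReLUs are hard to beat" (01:22:52) through "Truncated linear unit (TLin)" (01:24:01) concern the activations of the zero-bias autoencoder (ZAE). A minimal sketch of the two units and a tied-weight zero-bias reconstruction, assuming a threshold of 1; the threshold value and the names W and theta are illustrative, not from the talk.

    import numpy as np

    def trec(z, theta=1.0):
        # Truncated rectified unit: pass z through only above the
        # threshold; note there is no additive bias anywhere.
        return z * (z > theta)

    def tlin(z, theta=1.0):
        # Truncated linear unit: the two-sided variant, keeping z
        # wherever its magnitude exceeds the threshold.
        return z * (np.abs(z) > theta)

    rng = np.random.default_rng(0)
    D, H = 64, 128
    W = rng.normal(0.0, 0.1, (H, D))   # tied encoder/decoder weights
    x = rng.normal(size=D)
    x_hat = W.T @ trec(W @ x)          # training minimizes ||x_hat - x||**2

As I read the surrounding chapter titles, the thresholded units play the role of the negative biases that ordinary autoencoders learn, while keeping responses linear wherever units are active.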