Discriminative Learning of Sum-Product Networks
Published on Jan 16, 2013 · 13,589 views
Sum-product networks (SPNs) are a new deep architecture that can perform fast, exact inference on high-treewidth models. Only generative methods for training SPNs have been proposed to date. In this paper …
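To make the abstract's claim concrete, here is a minimal sketch (not code from the talk) of how an SPN evaluates: leaves are indicator functions over variable values, product nodes combine SPNs over disjoint variables, and sum nodes take weighted mixtures of SPNs over the same variables. Marginalizing a variable amounts to setting all of its indicators to 1, so any marginal costs a single bottom-up pass, i.e. time linear in the network size. The network below is a hypothetical two-variable example, not the architecture from the paper.

```python
import math

# Leaf: indicator for var == val; evidence value None means "marginalized",
# in which case the indicator is fixed to 1.
def indicator(var, val):
    return lambda ev: 1.0 if ev[var] is None or ev[var] == val else 0.0

# Product node: children must cover disjoint sets of variables.
def product(*kids):
    return lambda ev: math.prod(k(ev) for k in kids)

# Sum node: weighted mixture of children over the same variables.
def weighted_sum(pairs):
    return lambda ev: sum(w * k(ev) for w, k in pairs)

# Hypothetical SPN encoding the mixture
#   P(X1, X2) = 0.6 * P1(X1) P1(X2) + 0.4 * P2(X1) P2(X2)
spn = weighted_sum([
    (0.6, product(
        weighted_sum([(0.9, indicator("x1", 1)), (0.1, indicator("x1", 0))]),
        weighted_sum([(0.7, indicator("x2", 1)), (0.3, indicator("x2", 0))]))),
    (0.4, product(
        weighted_sum([(0.2, indicator("x1", 1)), (0.8, indicator("x1", 0))]),
        weighted_sum([(0.5, indicator("x2", 1)), (0.5, indicator("x2", 0))]))),
])

joint = spn({"x1": 1, "x2": 1})        # P(X1=1, X2=1) = 0.6*0.9*0.7 + 0.4*0.2*0.5
marginal = spn({"x1": 1, "x2": None})  # P(X1=1): one linear-time pass, no summation loop
```

The same single upward pass answers any marginal query, which is the "fast, exact inference" property the abstract refers to.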
Chapter list
Discriminative Learning of Sum-Product Networks (00:00)
Sum-Product Networks - 1 (00:07)
Sum-Product Networks - 2 (00:13)
Sum-Product Networks - 3 (00:18)
Sum-Product Networks - 4 (00:26)
Sum-Product Networks - 5 (00:28)
Sum-Product Networks - 6 (00:31)
Sum-Product Networks - 7 (00:36)
Sum-Product Networks - 8 (00:40)
Sum-Product Networks - 9 (00:44)
Overview (01:00)
Motivation (01:17)
Graphical Models (01:24)
Deep Architectures (01:46)
Discriminative Learning (02:05)
SPN Review - 1 (02:26)
SPN Review - 2 (02:33)
SPN Review - 3 (02:45)
SPN Review - 4 (02:50)
A Univariate Distribution Is an SPN - 1 (02:52)
A Univariate Distribution Is an SPN - 2 (02:53)
A Univariate Distribution Is an SPN - 3 (02:57)
A Univariate Distribution Is an SPN - 4 (02:58)
A Product of SPNs over Disjoint Variables Is an SPN - 1 (03:02)
A Product of SPNs over Disjoint Variables Is an SPN - 2 (03:13)
A Product of SPNs over Disjoint Variables Is an SPN - 3 (03:21)
A Weighted Sum of SPNs over the Same Variables Is an SPN - 1 (03:24)
A Weighted Sum of SPNs over the Same Variables Is an SPN - 2 (03:35)
A Weighted Sum of SPNs over the Same Variables Is an SPN - 3 (03:59)
All Marginals Are Computable in Linear Time - 1 (04:10)
All Marginals Are Computable in Linear Time - 2 (04:17)
All Marginals Are Computable in Linear Time - 3 (04:26)
All Marginals Are Computable in Linear Time - 4 (04:32)
All Marginals Are Computable in Linear Time - 5 (04:46)
All Marginals Are Computable in Linear Time - 6 (04:53)
All Marginals Are Computable in Linear Time - 7 (04:58)
All Marginals Are Computable in Linear Time - 8 (05:01)
All Marginals Are Computable in Linear Time - 9 (05:03)
All Marginals Are Computable in Linear Time - 10 (05:11)
All Marginals Are Computable in Linear Time - 11 (05:16)
All MAP States Are Computable in Linear Time - 1 (05:19)
All MAP States Are Computable in Linear Time - 2 (05:24)
All MAP States Are Computable in Linear Time - 3 (05:26)
All MAP States Are Computable in Linear Time - 4 (05:35)
All MAP States Are Computable in Linear Time - 5 (05:44)
All MAP States Are Computable in Linear Time - 6 (05:49)
All MAP States Are Computable in Linear Time - 7 (05:53)
All MAP States Are Computable in Linear Time - 8 (05:56)
All MAP States Are Computable in Linear Time - 9 (05:59)
Special Cases of SPNs - 1 (06:03)
Special Cases of SPNs - 2 (06:14)
Compactly Representable Probability Distributions - 1 (06:25)
Compactly Representable Probability Distributions - 2 (06:29)
Compactly Representable Probability Distributions - 3 (06:31)
Compactly Representable Probability Distributions - 4 (06:32)
Learning SPNs (06:56)
Discriminative Training (07:21)
Discriminative SPNs - 1 (07:23)
Discriminative SPNs - 2 (07:28)
Discriminative SPNs - 3 (07:30)
Discriminative SPNs - 4 (07:33)
Discriminative SPNs - 5 (07:34)
Discriminative SPNs - 6 (07:39)
Discriminative SPNs - 7 (07:53)
Discriminative SPNs - 8 (07:57)
Discriminative SPNs - 9 (08:13)
Discriminative Training - 1 (08:21)
Discriminative Training - 2 (08:25)
Discriminative Training - 3 (08:31)
Discriminative Training - 4 (08:34)
Discriminative Training - 5 (08:37)
SPN Backpropagation - 1 (08:49)
SPN Backpropagation - 2 (08:55)
SPN Backpropagation - 3 (08:59)
SPN Backpropagation - 4 (09:03)
SPN Backpropagation - 5 (09:09)
SPN Backpropagation - 6 (09:11)
SPN Backpropagation - 7 (09:12)
SPN Backpropagation - 8 (09:15)
SPN Backpropagation - 9 (09:22)
SPN Backpropagation - 10 (09:26)
SPN Backpropagation - 11 (09:31)
SPN Backpropagation - 12 (09:35)
SPN Backpropagation - 13 (09:37)
SPN Backpropagation - 14 (09:40)
Problem with Backpropagation - 1 (09:45)
Problem with Backpropagation - 2 (09:55)
Hard Inference Overcomes Gradient Diffusion - 1 (10:01)
Hard Inference Overcomes Gradient Diffusion - 2 (10:04)
Reasons to Use Hard Inference - 1 (10:17)
Reasons to Use Hard Inference - 2 (10:18)
Reasons to Use Hard Inference - 3 (10:21)
Reasons to Use Hard Inference - 4 (10:24)
Hard Gradient - 1 (10:32)
Hard Gradient - 2 (10:38)
Hard Gradient - 3 (10:41)
Hard Gradient - 4 (10:44)
Hard Gradient - 5 (10:53)
Hard Gradient - 6 (10:57)
Hard Gradient - 7 (11:03)
Hard Gradient - 8 (11:14)
Hard Gradient - 9 (11:17)
Hard Gradient - 10 (11:19)
Hard Gradient - 11 (11:29)
Hard Gradient - 12 (11:39)
Learning SPNs: Summary - 1 (11:58)
Learning SPNs: Summary - 2 (12:04)
Learning SPNs: Summary - 3 (12:17)
Experiments (12:26)
Image Classification (12:29)
Feature Extraction - 1 (12:41)
Feature Extraction - 2 (12:48)
Feature Extraction - 3 (12:51)
Feature Extraction - 4 (12:55)
Feature Extraction - 5 (13:00)
Feature Extraction - 6 (13:10)
Feature Extraction - 7 (13:14)
SPN Architecture - 1 (13:20)
SPN Architecture - 2 (13:27)
SPN Architecture - 3 (13:30)
SPN Architecture - 4 (13:33)
SPN Architecture - 5 (13:37)
CIFAR-10 Results - 1 (14:05)
CIFAR-10 Results - 2 (14:06)
CIFAR-10 Results - 3 (14:14)
CIFAR-10 Results - 4 (14:39)
CIFAR-10 Results - 5 (14:45)
CIFAR-10 Results - 6 (14:51)
CIFAR-10 Results - 7 (14:53)
CIFAR-10 Results - 8 (14:59)
CIFAR-10 Results - 9 (15:04)
CIFAR-10 Results - 10 (15:14)
CIFAR-10 Results - 11 (15:15)
STL-10 Results (15:23)
Future Work - 1 (15:50)
Future Work - 2 (15:52)
Future Work - 3 (15:54)
Future Work - 4 (15:55)
Future Work - 5 (15:58)
Summary - 1 (16:12)
Summary - 2 (16:14)
Summary - 3 (16:17)
Summary - 4 (16:18)
Summary - 5 (16:19)
Summary - 6 (16:21)
Summary - 7 (16:25)