
Sample Complexity Bounds for Differentially Private Learning

Published on Aug 02, 2011 · 3823 views

We study the problem of privacy-preserving classification – namely, learning a classifier from sensitive data, while still preserving the privacy of individuals in the training set. In particular, we
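The abstract's central notion is differential privacy. For background (this definition is standard in the literature, not quoted from this page): a randomized algorithm $A$ is $\varepsilon$-differentially private if, for every pair of datasets $D, D'$ differing in one individual's record and every set of outputs $S$,

```latex
\Pr[A(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[A(D') \in S].
```

Intuitively, no single individual's presence in the training set can change the distribution of the learned classifier by more than a factor of $e^{\varepsilon}$.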


Chapter list

Sample complexity bounds for differentially private learning (00:00)
Part 1. Learning and privacy model (00:00)
Data analytics with sensitive information - 1 (00:01)
Data analytics with sensitive information - 2 (00:42)
Data analytics with sensitive information - 3 (00:59)
Data analytics with sensitive information - 4 (01:34)
Data analytics with sensitive information - 5 (01:35)
Example: genome-wide association studies (01:51)
Privacy-preserving machine learning (02:33)
Goal 1: Differential privacy - 1 (02:56)
Goal 1: Differential privacy - 2 (03:55)
Goal 2: Learning (05:43)
What was known - 1 (06:20)
What was known - 2 (08:15)
Part 2. Sample complexity bounds for differentially-private learning (08:21)
Our results (08:28)
No distribution-independent sample complexity upper bound - 1 (09:21)
No distribution-independent sample complexity upper bound - 2 (10:48)
No distribution-independent sample complexity upper bound - 3 (11:26)
Some hope for differentially-private learning - 1 (12:43)
Some hope for differentially-private learning - 2 (13:20)
Upper bounds based on prior knowledge of unlabeled data distribution - 1 (13:23)
Upper bounds based on prior knowledge of unlabeled data distribution - 2 (14:20)
Upper bounds based on prior knowledge of unlabeled data distribution - 3 (15:00)
Upper bounds based on prior knowledge of unlabeled data distribution - 4 (15:12)
Upper bounds based on prior knowledge of unlabeled data distribution - 5 (16:08)
Upper bounds based on prior knowledge of unlabeled data distribution - 6 (16:55)
Recap & future work (17:10)
Thank you (18:32)
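As a companion to the privacy model discussed in the talk, here is a minimal sketch of the classic Laplace mechanism, the standard way to release a numeric query under ε-differential privacy. This is background illustration, not the talk's algorithm; the function names are my own, and it uses only the Python standard library.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) via inverse transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, sensitivity, epsilon, rng=random):
    """Release a counting query with epsilon-differential privacy by
    adding Laplace noise of scale sensitivity / epsilon."""
    return true_count + laplace_noise(sensitivity / epsilon, rng)

# A count over individuals has sensitivity 1: adding or removing one
# person changes the true answer by at most 1.
noisy = private_count(true_count=1000, sensitivity=1.0, epsilon=0.5)
```

Smaller ε means stronger privacy but larger noise (scale 1/ε), which is exactly the accuracy-privacy tension behind the sample complexity bounds the talk studies.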