Differentially Private Feature Selection via Stability Arguments, and the Robustness of the Lasso
Published on Aug 09, 2013 · 3788 Views
We design differentially private algorithms for statistical model selection. Given a data set and a large, discrete collection of “models”, each of which is a family of probability distributions, the …
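As context for the talk, the sketch below illustrates one standard way to make a selection step differentially private in the spirit of a stability argument: pick the highest-scoring model, measure the gap to the runner-up, and release the choice only if a noise-perturbed version of that gap clears a threshold (a propose-test-release style test). This is a minimal illustrative sketch, not the algorithm presented in the talk; the score list, sensitivity assumption, `epsilon`, and `threshold` are placeholders.

```python
import numpy as np

def stable_private_selection(scores, epsilon, threshold, rng=None):
    """Release the argmax model index only if its score gap to the
    runner-up is large (i.e., the selection is stable), judged with
    Laplace noise so the test itself is differentially private.

    Assumes each score changes by at most 1 when a single record in
    the data set changes; the noise scale below is illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)[::-1]           # best-scoring model first
    best, runner_up = order[0], order[1]
    gap = scores[best] - scores[runner_up]     # proxy for distance to instability
    noisy_gap = gap + rng.laplace(scale=2.0 / epsilon)
    if noisy_gap > threshold:
        return int(best)                       # stable: safe to release the choice
    return None                                # unstable: refuse to answer

# Example: three candidate models with data-dependent scores.
print(stable_private_selection([10.0, 3.5, 2.1], epsilon=1.0, threshold=4.0))
```

The talk connects this kind of noisy stability test to the support selected by the Lasso (see the “Stability Test for LASSO” and “Making the Test Private” chapters below).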
Chapter list
Private Model Selection via Stability Arguments and the Robustness of Lasso (00:00)
What Can We Learn Privately? - 1 (00:06)
What Can We Learn Privately? - 2 (01:17)
Model Selection (01:42)
Running Example: Sparse Linear Regression - 1 (02:08)
Running Example: Sparse Linear Regression - 2 (03:08)
This Paper (03:39)
Why is Privacy a Concern in Model Selection? - 1 (04:01)
Why is Privacy a Concern in Model Selection? - 2 (04:18)
Why is Privacy a Concern in Model Selection? - 3 (04:45)
Differential Privacy (04:57)
Prior Works on Learning and Privacy (06:35)
Our Contributions - 1 (07:37)
Our Results for Sparse Linear Regression (09:13)
Our Contributions - 2 (11:10)
Perturbation Stability based Model Selection - 1 (11:14)
Perturbation Stability based Model Selection - 2 (12:32)
Perturbation Stability based Model Selection - 3 (13:08)
Perturbation Stability based Model Selection - 4 (13:35)
Our Contributions - 3 (14:19)
Perturbation Stability of LASSO - 1 (14:31)
Perturbation Stability of LASSO - 2 (15:02)
Stability Test for LASSO (15:35)
Geometry of LASSO - 1 (16:41)
Geometry of LASSO - 2 (16:56)
Making the Test Private (Simplified) (17:12)
Putting the Pieces Together (17:36)
Revisit: Our Contributions (18:13)
Future Work (18:36)
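For the “Running Example: Sparse Linear Regression” chapters, model selection amounts to choosing the support (the set of nonzero coefficients) returned by the Lasso. The snippet below, using scikit-learn as an assumed dependency and a synthetic placeholder data set, shows that non-private selection step; it is a sketch of the running example, not the talk's private algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic sparse-regression instance: only the first 3 of 20 features matter.
rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.standard_normal((n, d))
true_coef = np.zeros(d)
true_coef[:3] = [2.0, -1.5, 1.0]
y = X @ true_coef + 0.1 * rng.standard_normal(n)

# Non-private selection step: fit the Lasso and read off its support.
lam = 0.1                               # placeholder regularization parameter
model = Lasso(alpha=lam).fit(X, y)
support = np.flatnonzero(model.coef_)   # the "model" being selected
print("selected features:", support)
```

The privacy question the talk addresses is when this selected support is insensitive to changing a single row of the data, so that a noisy stability test like the one sketched above the chapter list can certify and release it.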