Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas

Published on Aug 02, 2011 · 2675 Views

Much work has been done on learning various classes of “simple” monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of …
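The chapter list below covers the Statistical Query (SQ) model, SQ dimension, and parity functions, which suggests the lower bounds are argued in the SQ framework. A standard fact behind such arguments is that distinct parity functions are pairwise uncorrelated under the uniform distribution, so the 2^n parities on n bits witness a maximal SQ dimension. The following is a minimal, self-contained Python check of that fact for small n; the helper names `parity` and `correlation` are illustrative and not taken from the talk.

```python
import itertools

def parity(S, x):
    """Parity (XOR) of the bits of x indexed by the set S."""
    return sum(x[i] for i in S) % 2

def correlation(S, T, n):
    """Correlation of two parity functions, viewed as +/-1-valued
    functions, under the uniform distribution on {0,1}^n."""
    total = 0
    for x in itertools.product([0, 1], repeat=n):
        a = 1 - 2 * parity(S, x)   # map {0,1} -> {+1,-1}
        b = 1 - 2 * parity(T, x)
        total += a * b
    return total / 2 ** n

n = 4
subsets = [frozenset(s) for r in range(n + 1)
           for s in itertools.combinations(range(n), r)]

# Each parity correlates perfectly with itself and not at all with any other parity.
for S in subsets:
    for T in subsets:
        c = correlation(S, T, n)
        assert c == (1.0 if S == T else 0.0)

print(f"All {len(subsets)} parities over {{0,1}}^{n} are pairwise uncorrelated.")
```

Since the parities are pairwise uncorrelated, any set of them is a valid witness for the SQ dimension, which is presumably the role the "Parity" and "SQ Dimension" chapters play in the lower-bound argument sketched in the talk.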

Chapter list

Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas (00:00)
A Brief History of AI (00:06)
Example: Machine Translation - 1 (00:45)
Example: Machine Translation - 2 (00:52)
Example: Machine Translation - 3 (00:55)
Example: Rule-based MT - 1 (01:21)
Example: Rule-based MT - 2 (01:30)
Example: Rule-based MT - 3 (01:40)
Example: Rule-based MT - 4 (02:02)
Example: Statistical MT (02:19)
A Brief History of MT - 1 (03:00)
A Brief History of MT - 2 (03:23)
Big Questions (03:40)
Our Results (03:57)
Constant-depth Formulas (04:55)
Statistical Query Model - 1 (05:29)
Statistical Query Model - 2 (06:44)
SQ Dimension (07:30)
Parity (08:31)
Known Results - 1 (09:02)
Known Results - 2 (09:29)
Known Results - 3 (09:49)
Known Results - 4 (10:08)
Strong SQ Dimension - 1 (10:18)
Strong SQ Dimension - 2 (10:36)
Strong SQ Dimension - 3 (10:50)
Strong SQ Dimension - 4 (11:10)
Strong SQ Dimension - 5 (11:19)
First Result (11:44)
Slice Functions - 1 (12:00)
Slice Functions - 2 (12:35)
Slice Functions - 3 (13:30)
Second Result (13:54)
XOR Lemma (14:13)
Hardness Amplification (14:34)
Hardness Amp for Learning (15:31)
Hardness Amp for SQ (16:22)
Picking a Combiner (16:39)
Monotone Combiner (16:54)
The Depth-4 Construction (17:15)
Summary (17:27)