Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas
published: Aug. 2, 2011, recorded: July 2011, views: 2669
Much work has been done on learning various classes of “simple” monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of this sort by showing that polynomial-time algorithms cannot learn shallow monotone Boolean formulas under the uniform distribution in the well-studied Statistical Query (SQ) model. We introduce a new approach to understanding the learnability of “simple” monotone functions that is based on a recent characterization of Strong SQ learnability by Simon (2007). Using the characterization, we first show that depth-3 monotone formulas of size n^{o(1)} cannot be learned by any polynomial-time SQ algorithm to accuracy 1 − 1/(log n)^{Ω(1)}. We then build on this result to show that depth-4 monotone formulas of size n^{o(1)} cannot be learned even to a certain 1/2 + o(1) accuracy in polynomial time. This improved hardness is achieved using a general technique that we introduce for amplifying the hardness of “mildly hard” learning problems in either the PAC or the SQ framework. This hardness amplification for learning builds on the ideas in the work of O’Donnell (2004) on hardness amplification for approximating functions using small circuits, and is applicable in a number of other contexts. Finally, we demonstrate that our approach can also be used to reduce the well-known open problem of learning juntas to learning of depth-3 monotone formulas.
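As a rough illustration of the SQ model the abstract refers to, the sketch below simulates a Statistical Query oracle: the learner never sees labeled examples, only tolerance-limited estimates of expectations E[g(x, f(x))] over the input distribution. This is a minimal sketch for intuition only; the target function, sampler, and query here are illustrative toys and are not taken from the paper.

```python
import random

def sq_oracle(target, dist_sampler, query, tolerance,
              rng=random.Random(0), n_samples=20000):
    """Simulate an SQ oracle for a query g(x, f(x)) with values in [0, 1].

    Returns an estimate of E[g(x, f(x))]; a real SQ oracle may answer with
    any value within +/- tolerance of the true expectation, which we model
    by adding a random perturbation to an empirical average.
    """
    total = 0.0
    for _ in range(n_samples):
        x = dist_sampler(rng)
        total += query(x, target(x))
    estimate = total / n_samples
    # Perturb within the tolerance to mimic worst-case oracle answers.
    return estimate + rng.uniform(-tolerance, tolerance)

# Illustrative target: a monotone AND of the first two bits of a 4-bit
# input, under the uniform distribution (names here are hypothetical).
def target(x):
    return x[0] & x[1]

def uniform_sampler(rng):
    return tuple(rng.randint(0, 1) for _ in range(4))

# Query: how often does f(x) agree with the first input bit?
# True value is P[x0*x1 = x0] = 0.75 under the uniform distribution.
corr = sq_oracle(target, uniform_sampler,
                 lambda x, fx: float(fx == x[0]), tolerance=0.01)
```

An SQ lower bound of the kind proved in the paper says that no polynomial-time learner can succeed when restricted to polynomially many such queries with inverse-polynomial tolerance.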