Machine Learning with Human Intelligence: Principled Corner Cutting (PC2)
published: Jan. 12, 2011, recorded: December 2010, views: 679
With the ever-increasing availability of quantitative information, especially data with complex spatial and/or temporal structures, two closely related fields are undergoing substantial evolution: machine learning and statistics. On a grand scale, both have the same goal: separating signal from noise. In terms of methodological choices, however, it is not uncommon to hear machine learners complain that statisticians worry so much over modeling and inferential principles that they are willing to produce nothing, and to hear statisticians express discomfort that machine learners let ease of practical implementation trump principled justification to the point of being willing to deliver anything.

To take advantage of the strengths of both fields, we need to train substantially more principled corner cutters: researchers who are at ease formulating a solution from the soundest principles available, and equally at ease cutting corners, guided by those principles, to retain as much statistical efficiency as feasible while maintaining algorithmic efficiency under time and resource constraints. This thinking process is demonstrated by applying the self-consistency principle (Efron, 1967; Lee, Li and Meng, 2010) to handling incomplete and/or irregularly spaced data with non-parametric and semi-parametric models, including signal processing via wavelets and sparsity estimation via the LASSO and related penalties.
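To make the self-consistency idea concrete, here is a minimal sketch (not the authors' algorithm; all names and settings are illustrative) of a self-consistent iteration for sparse denoising with missing observations: impute the missing entries from the current estimate, then apply the complete-data estimator, here soft-thresholding, the proximal step behind the LASSO penalty, and repeat until the estimate is a fixed point of its own imputation.

```python
import numpy as np

def soft_threshold(x, lam):
    # Complete-data estimator: soft-thresholding, the LASSO proximal step.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def self_consistent_denoise(y, observed, lam, n_iter=100):
    """Hypothetical illustration of a self-consistent (EM-flavored) iteration:
    (1) impute missing entries of y from the current estimate theta;
    (2) apply the complete-data soft-thresholding estimator.
    A fixed point satisfies theta = estimator(impute(y, theta))."""
    theta = np.where(observed, y, 0.0)  # start with missing values set to 0
    for _ in range(n_iter):
        filled = np.where(observed, y, theta)  # imputation step
        theta = soft_threshold(filled, lam)    # complete-data estimation step
    return theta

# Toy example: a 4-point signal with one unobserved entry.
y = np.array([3.0, 0.1, -2.5, 0.05])
observed = np.array([True, True, False, True])
estimate = self_consistent_denoise(y, observed, lam=0.5)
```

In a wavelet setting the same loop would threshold wavelet coefficients rather than the raw signal, but the self-consistency structure, alternating imputation with the complete-data estimator, is identical.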