Adaptive procedures for FDR control in multiple testing
published: Dec. 17, 2007, recorded: September 2007, views: 5119
Multiple testing is a classical statistical topic that has enjoyed a tremendous surge of interest over the past ten years, driven by a growing range of applications that demand powerful and reliable procedures. In bioinformatics, for example, multiple testing procedures are often needed to process very high-dimensional data for which only a small number of sample points is available. In their seminal 1995 work, Benjamini and Hochberg introduced the false discovery rate (FDR), a notion of type I error control that is particularly well suited to screening processes in which a very large number of hypotheses has to be tested; it has since become a de facto standard. We first review existing so-called "step-up" testing procedures with FDR control valid under several types of dependency assumptions on the joint test statistics, and show that we can recover (and extend) them by adopting a very simple set-output point of view together with what we call a "self-consistency condition", which is sufficient to ensure FDR control. We then consider adaptive procedures, in which estimating the overall proportion of true null hypotheses can lead to improved power. To this end we introduce an algorithm that is almost always more powerful than the adaptive procedure proposed by Benjamini, Krieger and Yekutieli (2006).
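To make the two ingredients of the abstract concrete, the following is a minimal sketch (not the speakers' code; function names and the interface are illustrative) of the classical Benjamini-Hochberg step-up procedure, together with the two-stage adaptive variant of Benjamini, Krieger and Yekutieli (2006), which first estimates the number of true null hypotheses and then reruns the step-up at a correspondingly higher level.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Classical Benjamini-Hochberg step-up procedure.

    Sorts the p-values and finds the largest k such that
    p_(k) <= k * alpha / m; hypotheses 1..k (in sorted order) are
    rejected.  Controls the FDR at level alpha under independence
    (and under PRDS-type positive dependence).
    Returns a boolean array, True where the null is rejected.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest qualifying index (0-based)
        reject[order[: k + 1]] = True
    return reject

def bky_two_stage(pvals, alpha=0.05):
    """Two-stage adaptive step-up (Benjamini, Krieger and Yekutieli, 2006).

    Stage 1 runs BH at level alpha' = alpha / (1 + alpha); the number of
    non-rejections estimates m0, the number of true nulls.  Stage 2 reruns
    BH at the adaptively inflated level alpha' * m / m0_hat, gaining power
    when many hypotheses are false.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    alpha_prime = alpha / (1.0 + alpha)
    stage1 = benjamini_hochberg(p, alpha_prime)
    r1 = int(stage1.sum())
    if r1 == 0:
        return np.zeros(m, dtype=bool)     # nothing rejected at stage 1
    if r1 == m:
        return np.ones(m, dtype=bool)      # everything rejected at stage 1
    m0_hat = m - r1                        # estimated number of true nulls
    return benjamini_hochberg(p, alpha_prime * m / m0_hat)
```

For example, with p-values `[0.01, 0.02, 0.03, 0.5]` and `alpha=0.05`, plain BH rejects the first three hypotheses, since the third-smallest p-value 0.03 lies below its threshold 3 * 0.05 / 4 = 0.0375, while 0.5 exceeds 0.05.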