Practical Guide to Controlled Experiments on the Web: Listen to Your Customers not to the HiPPO

author: Ron Kohavi, Microsoft Research
published: Aug. 14, 2007,   recorded: August 2007


The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments (single-factor or factorial designs), A/B tests (and their generalizations), split tests, Control/Treatment tests, and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person’s Opinion (HiPPO). We provide several examples of controlled experiments with surprising results.

We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed.

Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.
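The abstract highlights statistical power and sample size as critical to experimentation. As a rough illustration (not code from the talk), the standard two-sample power calculation for a conversion-rate metric can be sketched as follows; the baseline rate, lift, and significance levels in the example are hypothetical:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, relative_lift,
                            alpha=0.05, power=0.8):
    """Approximate users needed per variant to detect a relative
    lift in a conversion rate with a two-sided z-test."""
    delta = baseline_rate * relative_lift           # absolute difference to detect
    variance = baseline_rate * (1 - baseline_rate)  # Bernoulli variance at baseline
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    return int(2 * (z_alpha + z_beta) ** 2 * variance / delta ** 2)

# e.g. detecting a 5% relative lift on a 5% baseline conversion rate
# requires roughly 120,000 users in each variant
n = sample_size_per_variant(0.05, 0.05)
```

The example makes the talk's point concrete: small lifts on small baseline rates demand very large samples, which is why power analysis belongs at the start of an experiment, not the end.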
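The abstract also warns that randomization and hashing are not as simple in practice as often assumed. One common building block is deterministic hash-based bucketing, sketched below; the function name and salting scheme are illustrative assumptions, not details from the talk:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")):
    """Deterministically assign a user to a variant. The same user
    always sees the same variant within an experiment, and salting
    the hash with the experiment name keeps assignments independent
    across concurrent experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

The experiment-name salt matters: bucketing on the raw user id alone would put the same users in the treatment group of every experiment, correlating results across experiments — one example of the practical hashing pitfalls the abstract alludes to.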


Reviews and comments:

Comment 1: Jason Brownlee, November 10, 2008 at 1:40 a.m.:

great talk!

The paper is available here:

Comment 2: Justin Hunter, August 27, 2009 at 6:59 p.m.:

This is an absolutely fantastic talk. Thank you for putting together such a well-thought-out and clearly reasoned presentation with such interesting real-world examples.

I included a minute-by-minute summary about this presentation (as well as some of my thoughts about this topic) on my blog:

- Justin

Justin Hunter
Founder, Hexawise (a software test case design tool that applies similar principles, helping testers identify what to test more efficiently and effectively than manual "HiPPO-driven" test case selection)

Comment 3: Adriana Galue, February 21, 2012 at 10:39 p.m.:

Very powerful video. Thank you for sharing your expertise!
