Assessing Human Error Against a Benchmark of Perfection
published: Sept. 25, 2016, recorded: August 2016
An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of these decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors.
To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at a large scale, employing datasets with several million recorded games, and using chess tablebases to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging even for the best players in the world.
We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time.
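The setup described above can be sketched in a few lines: compare each human move against a solved oracle to label it an error or not, then group error rates by a feature bucket such as skill. This is a minimal toy illustration, not the paper's method; the oracle, positions, and records below are invented stand-ins for real tablebases and the millions of recorded games.

```python
# Hypothetical sketch: label human moves as errors against a solved oracle
# (standing in for a chess tablebase), then compare error rates across
# feature buckets. All data here is invented for illustration only.

from collections import defaultdict

# Toy oracle: position -> set of optimal moves (a real tablebase would
# provide this for completely solved endgame positions).
ORACLE = {
    "pos1": {"Ke2"},
    "pos2": {"Qh5", "Qf3"},
    "pos3": {"Rd8"},
}

# Each record: (position, move played, player rating, seconds remaining).
records = [
    ("pos1", "Ke2", 2400, 300),
    ("pos1", "Kd2", 1600, 30),
    ("pos2", "Qh5", 2000, 120),
    ("pos3", "Rd8", 2600, 600),
]

def is_error(position, move):
    """A move is an error if it is not among the oracle's optimal moves."""
    return move not in ORACLE[position]

def error_rate_by(records, bucket_of):
    """Group decisions by a feature bucket and compute each bucket's error rate."""
    errs, total = defaultdict(int), defaultdict(int)
    for pos, move, rating, secs in records:
        b = bucket_of(pos, rating, secs)
        total[b] += 1
        errs[b] += is_error(pos, move)
    return {b: errs[b] / total[b] for b in total}

# Skill buckets: rating at or above 2200 vs. below (an arbitrary cutoff).
by_skill = error_rate_by(records, lambda p, r, s: "high" if r >= 2200 else "low")
print(by_skill)
```

The same `error_rate_by` helper can be reused with a time-based or difficulty-based bucketing function, mirroring the three feature categories the abstract compares.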