Friends Don’t Let Friends Deploy Black-Box Models: The Importance of Intelligibility in Machine Learning
published: March 2, 2020, recorded: August 2019, views: 3
Every data set is flawed, often in ways that are unanticipated and difficult to detect. If you can't understand what your model has learned, then you are almost certainly shipping models that are less accurate than they could be, and which might even be risky. Historically there has been a tradeoff between accuracy and intelligibility: accurate models such as neural nets, boosted trees and random forests are not very intelligible, while intelligible models such as logistic regression, small trees and decision lists are usually less accurate. In mission-critical domains such as healthcare, where being able to understand, validate, edit and ultimately trust a model is important, one often had to choose the less accurate model. But this is changing. We have developed a learning method based on generalized additive models with pairwise interactions (GA2Ms) that is as accurate as full-complexity models yet even more interpretable than logistic regression. In this talk I'll highlight the kinds of problems that are lurking in all of our datasets, and how these interpretable, high-performance GAMs are making what was previously hidden, visible. I'll also show how we're using these models to uncover bias in models where fairness and transparency are important. (Code for the models has recently been released open-source.)
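To make the additive-model idea concrete, here is a minimal sketch of a generalized additive model fit by backfitting, where each feature's shape function is a binned-mean lookup that can be printed and inspected directly. This is an illustration of why GAMs are intelligible (the model *is* its per-feature curves), not the GA2M algorithm from the talk, which learns shape functions with boosted trees and adds pairwise interaction terms; all names here (`fit_gam`, `make_bins`, etc.) are hypothetical.

```python
# Toy GAM fit by backfitting: y ≈ intercept + f1(x1) + ... + fd(xd),
# with each fj a piecewise-constant (binned-mean) shape function.
# Not the GA2M method from the talk -- just the additive-model idea.

def make_bins(values, n_bins):
    """Equal-width binning parameters for one feature."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # avoid zero width on constant features
    return lo, width, n_bins

def bin_index(x, lo, width, n_bins):
    """Clamp x into a valid bin index."""
    return min(max(int((x - lo) / width), 0), n_bins - 1)

def fit_gam(X, y, n_bins=8, n_passes=20):
    n, d = len(X), len(X[0])
    intercept = sum(y) / n
    bins = [make_bins([row[j] for row in X], n_bins) for j in range(d)]
    shapes = [[0.0] * n_bins for _ in range(d)]
    for _ in range(n_passes):
        for j in range(d):
            # Residual of y after removing the intercept and all other components.
            resid = [
                y[i] - intercept - sum(
                    shapes[k][bin_index(row[k], *bins[k])]
                    for k in range(d) if k != j)
                for i, row in enumerate(X)]
            # Refit shape j as the per-bin mean of that residual.
            sums, counts = [0.0] * n_bins, [0] * n_bins
            for row, r in zip(X, resid):
                b = bin_index(row[j], *bins[j])
                sums[b] += r
                counts[b] += 1
            shapes[j] = [sums[b] / counts[b] if counts[b] else 0.0
                         for b in range(n_bins)]

    def predict(row):
        return intercept + sum(
            shapes[j][bin_index(row[j], *bins[j])] for j in range(d))

    return predict, shapes

# Toy additive data: y = 2*x1 + 3*x2 + 1.
X = [[i / 10.0, (i % 5) / 5.0] for i in range(50)]
y = [2.0 * x1 + 3.0 * x2 + 1.0 for x1, x2 in X]
predict, shapes = fit_gam(X, y)
```

Because the prediction is a sum of one-dimensional terms, plotting each list in `shapes` against its feature's bin centers shows exactly what the model learned for that feature, which is the intelligibility property the talk builds on.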