How do public sector values get into public sector machine learning systems, if at all?

author: Michael Veale, Department of Science, Technology, Engineering and Public Policy (STEaPP), University College London
published: July 24, 2017,   recorded: May 2017,   views: 824



More machine learning algorithm–powered decision-support systems are piloted and deployed in the public sector each day to help detect individual and corporate wrongdoing in areas such as taxation, child protection and policing. While some welcome this trend as the dawn of more evidence-based administrative decision-making, others worry that the opacity and perceived objectivity of such systems usher in unwanted biases through the back door just as they kick due process out. Studies of these systems have primarily attempted to look into or reverse-engineer them from the outside, missing the people who obtain, deploy and manage these technologies within diverse institutional contexts. To help fill this gap, 25 public servants and technologists from different sectors and countries involved in public sector machine learning projects were identified and interviewed. They were asked about their experiences with these technologies, focussing on how they understood and approached the operational barriers and ethical issues they encountered. Analysis of these interviews shows promising roles in this field for recent technological approaches to responsibility, such as ‘fairness-aware’ or interpretable machine learning systems. Yet the interviews also raise questions and issues that are currently underemphasised and unlikely to be resolved by technical solutions alone. This research suggests that governance mechanisms for applied machine learning must be more sensitive to on-the-ground pressures and contexts if they are to succeed in ensuring that new data-driven decision-support systems are societally beneficial.

Download slides: lawandethics2017_veale_public_sector_01.pdf (22.3 MB)