Big Data, Psychodiagnostics and Threats to Personal Autonomy
Experts and institutions have warned of the threat that big data analysis poses to our right to privacy, the challenge it raises to our traditional notions of criminal responsibility and justice, and the rising levels of invisible and unaccountable social control (EDPS 2015). And yet, surprisingly little has been written so far about its damaging potential for our personal autonomy as consumers and citizens. This is all the more surprising given the psychodiagnostic research conducted and developed in recent years (Kosinski et al. 2013; Youyou et al. 2015; Park et al. 2015; Musil et al. 2017), which is being aggressively marketed to, and used by, economic and political stakeholders, including political parties and organizations, as a powerful tool of non-rational persuasion. We first take a close critical look at the psychodiagnostic computing tools that are currently making headlines, trying to dispel the fog of self-promotion and spin to see their real, not merely imagined or hyped-up, diagnostic potential. How accurate are “psychograms” based on an analysis of people’s seemingly innocuous online activities? Is there any substance to the claim that computer algorithms can know us better than we know ourselves? Next, we assess the bold promise that knowledge of a variety of psycho-social facts about individual users of ICT enables us to match every particular message to a particular addressee’s emotions, needs, and preferences to an extent that was unimaginable a decade ago. Can we really, by means of this new technology, manipulate people’s minds, choices, and behavior much more efficiently than ever before? And what implications does this have for our self-understanding as rational and autonomous beings, not to mention the elevated moral standing that comes with it? Manipulation is one of the most familiar and increasingly common threats to personal autonomy, which in turn is widely considered as worthy of, and even commanding, (almost) unconditional respect.
Accordingly, every charge of manipulation needs to be taken seriously from the moral point of view. But does targeted, individualized commercial and political online advertising amount to vicious, morally problematic manipulation at all? In order to answer this, we provide a tentative definition of manipulative, as opposed to non-manipulative, attitudinal and behavioral influence. We then show that targeted advertising which exploits Internet users’ identified cognitive shortcomings and emotional vulnerabilities is indeed manipulative. We lament the commercialization of politics and offer an explanation of what renders the manipulation of citizens’ choices particularly problematic, even compared to the daily tampering with our consumerist choices. Finally, for the purpose of policy recommendations, we envision three future scenarios: (a) pessimistic, (b) optimistic, and (c) balanced, arguing that while self-regulation and an opt-out option for data sharing may be sufficient for the second and the third, the onset of the first would require the passing of restrictive data-protection legislation if we are to preserve our core democratic values and institutions.
Download slides: lawandethics2017_klampfer_musil_big_data_01.pdf (1.3 MB)