8 April 2019

Legitimate and Monitored

“Torn between your conscience and a sense of solidarity with the organization”

Tessel Renzenbrink

Photo: Nick Harris CC-BY-ND 2.0

Ethical data dilemmas in the workplace

Herman Ozinga is a data analyst at the Municipality of Amsterdam. He was asked to provide data, but the request clashed with his moral compass. He wrestled with the decision: should he prioritize his sense of solidarity with the organization, or follow the call of his inner voice? In the end, he decided that he would no longer take part in that project. His decision prompted a discussion in his department regarding responsible use of data. The department received coaching in this dialogue from the municipal Bureau for Integrity; Bureau Tada also got involved later on in this process. Herman is sharing his story in the hope that data analysts, colleagues and others who have to deal with ethical questions in the course of their work will benefit from it.

This article uses a fictitious case to serve as an example, since we haven’t asked all those involved in the actual situation for their consent to publish what happened. However, the ethical dilemmas presented here do correspond to the actual case.

The department where Herman works is tasked with handling requests to provide data sets for the purpose of research. When new research questions involve the use of personal data, it is mandatory to have the request assessed by an independent third party. The department was working on streamlining the procedure for this assessment. “If you want to use personal data in a new way, this new use has to be assessed by a third party that is independent of both the party that requests the data and the party that provides the data”, says Herman. “At the time, the procedure was that each new data processing action that involved the use of personal data had to be presented to the independent third party for approval. There had to be a better way to do that. The goal was to no longer submit each case separately, but to process them in batches. We all agreed that the procedure needed to be streamlined, but the question is how far you should take that. A streamlined procedure can also turn into a slippery slope, lumping everything together. That would largely eliminate the monitoring function of the independent third party. It would perhaps be more efficient, but the question of whether data is being processed responsibly would be pushed to the background.”

“I won’t do it”

“We met with the commissioning client and other parties to discuss the details of what streamlining should look like,” Herman continues. “A conversation like that develops in a certain way. First you need to identify which aspects you want to discuss. You can point out certain issues, but then people say: ‘let’s discuss that later’. Controversial topics can be postponed until a context has been reached in which the topics can hardly be called controversial anymore. There is no malicious intent involved in the postponement, but that’s how such conversations go.” Although streamlining the procedure is a form of rationalization, there certainly is also a dialogue about the risks involved in dealing with data, and solutions are presented to mitigate those risks. Concerns are addressed and resolved as much as possible, eventually leading to consensus – which is after all the goal.

“Meanwhile, there was an impending data request to repeat an experiment that I wasn’t comfortable with the first time around. The streamlined directives would mean that the experiment could be carried out a second time. I tried to substantiate my viewpoint with good reasons not to carry out the experiment again, but my efforts had no effect. At that stage, I thought: ‘I need the emergency brake now’. Because it would just get harder and harder to say anything. So I gathered up all of my courage and stated at the start of yet another working group meeting – no longer waiting for the topic to come up on the agenda – ‘if a data request is submitted for this project again, you will have to work on it without me; I’m out’. It was a tough decision and difficult to say out loud. Mind you, we had already gone through an entire process in the working group by that point. We had discussed the details of what we consider diligent data processing. Those discussions resulted in certain guidelines. And according to those guidelines, there would be no reason not to do it. And then you stand there all alone and you say: ‘I won’t do it’.”

Black-box technology

Herman’s ethical dilemma involves an experiment with machine learning in which a data set is used that contains personal data. Herman: “In principle, the law provides strict rules about what you can or cannot use that data set for. But once you say that it’s for research purposes, there are suddenly many more options. It should not be possible to trace the data to a specific person, but other than that, it is a legal gray area.” He sees two potential risks: a lack of government transparency towards citizens, and the possibility that the experiment could eventually lead to profiling.

Herman: “Machine Learning (ML) differs from traditional academic research. A researcher uses data to conduct research and then reaches a conclusion. The researcher can explain how the data leads to the conclusion. The research has discovered a regularity and can be repeated. The whole point of using Machine Learning is that the computer program itself responds to the data. The underlying principle here is not a regularity, but an automated mechanism that transcends human comprehension.”

Unverifiable results

ML searches for significant patterns in data. A familiar example is the automated filters that detect spam e-mails. A computer program is trained using a huge data set, consisting of millions of emails that people have labeled as either spam or not-spam. The program starts looking for patterns, and then those patterns are turned into hypotheses. An example of such a hypothesis is: ‘the presence of the word Viagra in an email indicates a high probability that it is spam’. That hypothesis is then tested on a different set of data that has not been used for training purposes. The mistakes made during this test (an email ends up in the spam folder when it should not have) are flagged and fed back into the computer program. The machine learns from its mistakes and adjusts the hypothesis accordingly.
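To make this train-and-test loop concrete, here is a minimal sketch in Python using scikit-learn. The handful of example emails, the labels and the choice of classifier are all invented for illustration; a real spam filter is trained on millions of labelled messages.

```python
# Minimal sketch of the spam-filter training loop described above.
# The emails, labels and classifier choice are illustrative placeholders only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Cheap Viagra, order now",                 # spam
    "Meeting moved to Monday at 10",           # not spam
    "You have won a prize, click this link",   # spam
    "Minutes of yesterday's project meeting",  # not spam
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# Hold back part of the data so the learned patterns are tested on
# emails the program never saw during training.
train_mails, test_mails, train_labels, test_labels = train_test_split(
    emails, labels, test_size=0.5, stratify=labels, random_state=0
)

# Turn each email into word counts: the 'parameters' the model weighs.
vectorizer = CountVectorizer()
classifier = MultinomialNB()
classifier.fit(vectorizer.fit_transform(train_mails), train_labels)

# Misclassifications on the test set (a legitimate email flagged as spam)
# are what a real system would feed back to adjust its hypotheses.
predictions = classifier.predict(vectorizer.transform(test_mails))
for mail, predicted, actual in zip(test_mails, predictions, test_labels):
    verdict = "spam" if predicted else "not spam"
    print(mail, "->", verdict, "(correct)" if predicted == actual else "(mistake)")
```

Even in this toy setup, the program’s behaviour is only as transparent as the handful of word counts it weighs; at the scale described below, that transparency disappears.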

Two characteristics of ML make it impossible for people to verify this technique. First of all, ML requires massive volumes of data. A program won’t learn anything from just three emails, so the training data consists of millions of data points. Second, ML looks at a lot of different parameters: the words in the body of the email, the reputation of the sender, the country it was sent from, and much more. Analyzing so much data and so many parameters leads to a very high level of complexity. That makes it untraceable and unverifiable by human observers. We can see what data is entered, and what results are generated… but how the program reaches the results is a black box. That is also exactly what ML promises: that the computer program will detect significant patterns that are too complex for people to perceive. But that also comes with its downsides.

“When the government starts using ML to manage processes, there will be problems,” Herman says. “The government must be able to explain its decisions. When a citizen requests a subsidy, the government applies the rules. If the government decides to reject a subsidy application, they can explain why easily enough: ‘You are not eligible due to rule x’. That is transparent. But when you use ML, you cannot explain why you do certain things. After all, you yourself don’t know how the computer program reached a certain outcome.”

Profiling

Another risk is profiling. Take the following fictitious example: An Amsterdam resident fails to pay the rent for his rent-controlled flat for the third consecutive month. That is a signal for the municipality to take action. A civil servant contacts the person and offers debt relief. As a government agency, the municipality can explain why these steps have been taken. Now compare that with the fictitious machine learning case: data is fed into a computer program, and the computer assigns risk assessment numbers to citizens. A 10 indicates a high risk of developing problematic debts, while a 1 indicates a low risk. The municipality then sends a civil servant to visit everyone who has been assigned a number of 8 or higher. If those citizens were to ask the municipality why they are being treated that way, the municipality would not have an answer. They do not have any solid leads, just an untraceable conclusion from a machine.
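Purely to make the mechanics of this fictitious scenario concrete, the selection step might look like the sketch below. The residents, scores and cut-off are invented, and in the scenario the scores themselves would come out of an opaque machine learning model.

```python
# Sketch of the fictitious risk-scoring scenario described above.
# The scores are made up; in the scenario they would be produced by a
# machine learning model whose reasoning nobody can reconstruct.
risk_scores = {
    "resident A": 9,
    "resident B": 3,
    "resident C": 8,
    "resident D": 6,
}

THRESHOLD = 8  # visit everyone with a score of 8 or higher

to_visit = [name for name, score in risk_scores.items() if score >= THRESHOLD]
print(to_visit)  # ['resident A', 'resident C']

# The number itself carries no explanation: asked "why am I on this list?",
# the municipality could only point to the score, not to an understandable rule.
```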

“When you apply ML conclusions to individual cases, it is no longer possible to explain to people why you treat them the way you do. At that point, you’re profiling. Profiling means that you divide people into profiles and then say: I will treat you like that. When the government does that, you go into hunting mode: you are looking for exactly the kind of things that you want to target.”

And that leads to the second problem, says Herman. “The difficulty with a number like that is that it does not say anything about the reasons why a citizen has been assigned that specific risk number. If you want to solve complex problems, you need something different than a simple number. You need to know the details of the situation. You start looking at the environment, pinpointing the differences and spotting opportunities. But if you’re using a Machine Learning method, you won’t be learning anything yourself. It is the algorithm that has learned something, not the municipality.”

Room to discuss data ethics

“When I said that I would no longer work on the project, I took a nose dive,” Herman says. “Once you make that move, you are down on the ground; you are out of the game. I could no longer provide input on what happened with the experiment from then on. But that didn’t bother me much. I felt much better after speaking out. And it also contributed to a discussion within my department about data ethics, a discussion that goes beyond merely what the law allows.

“I had to carry out a duty within the context of my job, but I objected to it as a matter of principle. As civil servants, we are supposed to carry out assignments – but at the same time, we are also asked to form our own opinions. So if you believe that the organization is heading in the wrong direction, you should be able to say that. And I could. That is high praise for the municipality of Amsterdam, and specifically for my department.”

“Ethics are not a formula”

Herman’s manager proactively took steps to shape the discussion about data ethics. That included getting Tada involved in the department. The Tada manifesto outlines six principles that are intended to lay the foundation for a responsible digital city. In the Agenda for the Digital City, Amsterdam expressed its commitment to implement the Tada values in its own organization. Right now, the municipality and Bureau Tada are working together to develop methods for putting these abstract principles into practice. Herman’s department took a workshop on ‘Responsible digitalization and ethical data use’, developed by Bureau Tada and the municipal Bureau for Integrity. In the workshop, the participants take a close look at how the Tada principles could be applied in the workplace.

“Ethics are not a formula,” Herman says. “Ethical issues involve an openness in which you have to determine your own attitude. You cannot reduce it to just a checklist or a protocol that tells you what you should do. It is a consideration, a judgment. That is one of the strong points of Tada; it consists of a number of principles, which provide guidance. You need to actively engage with those principles in order to put them into practice to the best of your ability. That is the openness of ethical deliberations, which leave you no room to hide.”

“The mindset in the municipality of Amsterdam is changing”

“Tada is a sign of a movement that is currently gaining momentum,” Herman continues. “It provides a counterweight by taking a critical look at all those lovely IT developments. The mindset in the municipality of Amsterdam is changing. Things used to move fast. At the time, the motto was: ‘If we want to be part of everything, we also need to start using these new technologies.’ There are of course still advocates for data experiments, but people are also paying far more attention to the downsides of these technologies. Caution has become much more important.

“That change is good. We need to have bigger discussions within the municipality about machine learning and big data techniques. People need a much better understanding of how these techniques work before deciding whether or not they should be used. That means that these experimental projects should not simply be launched. During experiments, there is suddenly a lot of leeway and hardly any critical reflection before diving in. That leads to an attitude of: ‘We are doing an experiment and we don’t know where it’s headed, but just indulge us for a bit.’ But those are games of hide and seek. That is not openness or transparency. The attitude should be: ‘I am intending to do this. Tell me all your doubts and criticisms. Let’s have a discussion.’”

Author: Tessel Renzenbrink

Translator: Joy Phillips
