Tailored to the People

Cathy O’Neil: Amsterdam is going to lead the way

Applying algorithms can make decision-making processes better and more honest than they have ever been before, says Cathy O’Neil. Such a positive view of data-driven decision-making is not necessarily what you’d expect from her. In her book Weapons of Math Destruction, the data analyst warns about the harmful effects that algorithms can have on individuals and society. She emphasizes that much still needs to be done: “We have a lot of work to do, but we are going to get there. And Amsterdam is going to lead the way.”

O’Neil was invited by KPMG and the municipality of Amsterdam to visit the city and advise them on the use of algorithms. One of those sessions was about the Tada principles. About twenty data professionals met with her at the De Bazel city archives in Amsterdam to discuss the concept. The central question was how to rework the ethical principles from the Tada manifesto into specific technical requirements that system designers and developers can apply at a practical level.

Translating policy, not deciding it

The meeting showed that data scientists have a great need for clearer requirements. In essence, everyone agrees on ethical principles. Of course everyone wants to have values like inclusiveness and transparency built into their systems. But once you start applying those values in practice, you are faced with difficult questions. For instance, when two values conflict, which should be prioritized? Many of those difficult decisions currently end up on the plate of data professionals.
O’Neil says: “Right now, data scientists are the de facto policy makers. That is not right. That is why we need Tada so desperately. Data scientists should not be in charge of making the decision, but they should be in charge of implementing it. Policy should be decided by public discussion. Data scientists should be translating values, not deciding them.”

How do you make Tada principles concrete?

O’Neil has some suggestions for how that can be achieved: “If you want to take a Tada principle and make it concrete, you need to have a specific scenario. Otherwise, you talk about everything at once.” In other words, it is not enough to take a case study and say: implement the Tada principles in an algorithm that detects housing fraud. You need to provide a specific context, for instance illegal holiday rentals via Airbnb. O’Neil: “You can’t just say: here is an algorithm. You need to include how it is going to be used. What will be the consequences for the people who are falsely accused? What is going to happen in twenty years because of this algorithm? You’re going to have to really think through the context before you can say how to apply values. That is the biggest gap in people’s understanding. They think they can evaluate an algorithm, but they can’t. They can only evaluate an algorithm within a context.”

A slow and frustrating process

Evaluating an algorithm within a context sounds like a lot of work. It would mean that every algorithm that is applied, or at least every one with social consequences, would have to be subjected to thorough scrutiny. After the work session, I asked O’Neil in an interview whether that was what she meant. “Yes, that is what needs to happen,” she says. “This is an awkward, slow, frustrating conversation that we absolutely must have. Tada is exactly what we need to do. We should make sure our values are being honored as they are incorporated into algorithms. And figuring out how to do that is hard. We have a lot of work to do, but we are going to get there. It is exciting to see. And Amsterdam is going to lead the way.”

Better decision-making processes

The use of algorithms could actually make decision-making processes fairer than ever before, O’Neil states. She illustrates her point with an example. “Take Amazon’s algorithm for hiring new employees. They figured out it was sexist, so they decided not to use it. I am glad they checked first and ended up not implementing it at all. OK, that’s progress. But you could go beyond that. What does it mean when you codify a human process and discover that it is sexist? It means that your human process is sexist. I would have liked to see Amazon say: ‘we have this algorithm, but it is sexist and we are going to improve it. And then we are going to use it, because it will be less sexist than the human process it is replacing.’ We haven’t gotten to that step yet, but ideally we will. Then we will actually clean up the mistakes that we make as humans instead of simply propagating them.”

“Once we start scrutinizing decision-making processes, it is going to be much more transparent, much more value-laden,” O’Neil concludes. “If we can see the fulfillment of what we have seen today [during the Tada session], if we can turn these values into rules that data scientists could rework into code, and if we could monitor to make sure that those values were being upheld (and I keep emphasizing that it is hard), it would be very exciting if we could actually discuss implementing those values into algorithms proactively. That is the best we can hope for: that the algorithms reflect our values.”

Follow Cathy O’Neil at Mathbabe.org
Photo: Cathy O’Neil

Author: Tessel Renzenbrink
Translators: Joy Phillips & Michael Blommaert
Photo: Hans Kleijn