5 June 2019 | Legitimate and Monitored

Algorithms for more fairness in the city

Tessel Renzenbrink

“By using algorithms, you can make the city more fair,” says Maarten Sukel. The AI specialist works for the CTO Innovation Team at the municipality of Amsterdam. The team uses new techniques to address various problems in the city. Their projects include route optimization for garbage trucks and ways to predict pedestrian and vehicle traffic intensity and spread it more evenly. But using new techniques also involves certain risks, and Sukel believes you need to be transparent about those risks as well.

Maarten Sukel, AI specialist, CTO Innovation Team

In an interview, Sukel talks enthusiastically about the positive contribution that algorithms can make to the city. Using algorithms can make the city more sustainable, more efficient and fairer. But he also notes the risks involved: automation can have the unintended side effect of actually increasing inequality in the city. That is why the municipality of Amsterdam is looking into various methods to ensure that algorithms are fair. This article covers several of those methods. For instance, Sukel addresses the importance of transparency, the function of the Tada manifesto and the process of developing an audit for algorithms.

An actual algorithm applied in the city

One of the projects the Innovation Team has worked on is Signalen Informatievoorziening Amsterdam (SIA), an online service where Amsterdam residents can report issues and nuisance in public space. SIA uses machine learning to automatically classify citizen service reports and route them to the right department. One of the goals is to facilitate easier, clearer communication between the municipality and its citizens. Sukel: “In the past, someone had to describe the problem using the exact same terms as the municipality. Say you’re submitting a report to request a trash pickup: is it household waste, street litter, bulky waste or business waste? But people do not know how the municipal authorities define those terms. For instance, they might think: if it came from a house, it’s household waste. And then a street sweeper might be sent out to pick up a couch. It may seem trivial, but lots and lots of these reports are submitted. 250,000 reports like this will be submitted this year alone.”

A second benefit is that all citizen service reports can now be submitted through a single interface. In the past, Amsterdam residents had to look up the right form to report a speeding boat or an incident of noise nuisance. “Now you can just type ‘boat was speeding’,” Sukel says. He shows a demo of how the application works behind the scenes: the algorithm automatically assigns a main category and subcategory to the report. Sukel: “That happens based on more than half a million previous reports. The experience from all of those previous reports is incorporated in this model. Of course that would be impossible to do from a human perspective.”
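
As an illustration of the kind of model Sukel describes, here is a minimal sketch of a text classifier that assigns a category to a free-text report. The training examples and category names below are invented, and the real SIA model is trained on more than half a million reports; this is only meant to show the general approach, not the municipality’s actual implementation.

```python
# Minimal sketch of a report classifier (hypothetical data, not the SIA code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented dataset of (report text, main category) pairs.
reports = [
    ("there is a couch on the sidewalk", "bulky waste"),
    ("trash bags piling up next to the container", "household waste"),
    ("a boat was speeding through the canal", "nuisance on the water"),
    ("loud music from the neighbours at night", "noise nuisance"),
]
texts, labels = zip(*reports)

# TF-IDF features with a linear classifier: simple, fast and easy to inspect.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# A new free-text report is assigned the most likely category.
print(model.predict(["someone dumped an old couch on the street"]))
```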

New techniques reveal old prejudices

Applying new techniques reveals new insights. The concentration of outspoken citizens can vary significantly from one neighborhood to another, which means that more reports are submitted from neighborhoods with more outspoken citizens. “In neighborhoods where more reports are submitted, more things get resolved; in other neighborhoods, they don’t,” Sukel says. “We already knew that some neighborhoods submit more citizen service reports. But that phenomenon becomes clearer once you start documenting the data. The algorithm isn’t the problem here; it’s just learning to recognize what a report is about. But that learning process also reveals other factors.”

Exposing these inequalities makes it possible for algorithms to contribute to a fairer city. Processes carried out by people are not always fair. When an algorithm is designed for an existing process, it opens up a whole new perspective on how things work, which can expose flaws that were already inherent in the system. Those errors can then be corrected to make the process fairer. Sukel: “If an algorithm turns out to contain a sexist or racist bias, the blame is usually placed on the technology. But when a model has that bias based on old data, then those problems were probably already there. That inherent bias is exposed when you create an algorithm for the process. It would be a waste to decide not to use the technology on that basis. That would mean throwing out a potential resource. It provides a clear signal, telling us: we need to do things differently.”

Sukel continues: “Let’s look at a hypothetical case. Suppose you are looking for housing fraud and you make certain assumptions to define your search. Say you assume that housing fraud mainly takes place in neighborhoods with lots of rent-controlled apartments. You then check those neighborhoods more thoroughly, which will lead to discovering more incidents of fraud there. You’ve just designed a self-fulfilling prophecy. You can solve that by modifying your algorithm or your processes, for instance by using random sampling to verify whether your assumptions are correct. Those checks are commonplace in statistical analysis, but I’m guessing many processes at the municipality don’t apply that method. You can expose flaws in existing processes when you create an algorithm for them. This offers huge potential for adding more fairness to the city’s processes.”
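
The random-sampling check Sukel mentions can be illustrated with a small simulation. All numbers and labels below are hypothetical; the point is simply to compare the fraud rate found by targeted inspections with the rate in a uniform random sample across the whole city.

```python
# Hypothetical simulation of a random-sampling check (invented numbers).
import random

random.seed(0)

# Fake city: every address has a neighbourhood type and a hidden fraud flag.
# Here the true fraud rate is the same everywhere (5%).
addresses = [
    {"neighbourhood": random.choice(["rent-controlled", "other"]),
     "fraud": random.random() < 0.05}
    for _ in range(10_000)
]

# Targeted policy: only inspect addresses in rent-controlled neighbourhoods.
targeted = [a for a in addresses if a["neighbourhood"] == "rent-controlled"]
targeted_rate = sum(a["fraud"] for a in targeted) / len(targeted)

# Control: a uniform random sample across the whole city.
sample = random.sample(addresses, 500)
sample_rate = sum(a["fraud"] for a in sample) / len(sample)

print(f"fraud rate in targeted checks: {targeted_rate:.3f}")
print(f"fraud rate in random sample:   {sample_rate:.3f}")
# If the two rates are similar, the targeting assumption adds nothing and
# mainly concentrates inspections on one group of residents.
```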

Finding broken paving tiles based on big data

Algorithms do not just expose existing inequalities. Sukel also sees opportunities to remedy these inequalities with new applications. “We are now working on experiments that should make sure that areas where people submit fewer reports still get access to the same help. For instance, we are looking into the possibility of reporting incidents by sending a photo. That might appeal to a different target group.”

Even further into the future, Sukel hopes to completely resolve the inequality between people who submit citizen service reports and people who don’t: “Just imagine that citizens would no longer have to report anything, that we just solve everything before it even becomes a problem.” Sukel combines his work for the Innovation Team with a PhD research project at the University of Amsterdam, studying how to detect problems in the city using various data sources. “One project that we want to do now is scanning for waste. There are already scanning cars driving around checking for illegal parking. The same cars could also start scanning for trash bags or loose paving tiles. But that technique is still in its infancy. Trash scanners won’t be driving around the city next week or anything.”
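
To give a rough idea of what image-based scanning involves, the sketch below runs an off-the-shelf object detector from torchvision over a single street photo. This is not the municipality’s system: the pretrained COCO classes do not include trash bags or loose paving tiles, so a real application would need a model fine-tuned on street-level annotations, and the image path here is hypothetical.

```python
# Hedged sketch: generic object detection on one (hypothetical) street image.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("street_scene.jpg")  # hypothetical frame from a scanning car
with torch.no_grad():
    prediction = model([preprocess(img)])[0]

# Print confident detections; COCO labels only, so no 'trash bag' class yet.
categories = weights.meta["categories"]
for label, score in zip(prediction["labels"], prediction["scores"]):
    if score > 0.6:
        print(categories[int(label)], float(score))
```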

Ways to make algorithms fairer

Applying algorithms can reduce inequality, but there is also the risk that they actually create more of it. In the hypothetical housing fraud case, if the error is not corrected, an algorithm could well implement that error far more efficiently. That is why the municipality of Amsterdam is exploring various ways to make algorithms fairer. One of those methods is the implementation of the six Tada principles for a responsible digital city. Sukel notes: “Tada makes you more aware. You’re saying: these are the six values that all of us want to follow. That gives you a clear framework and a common goal.”

Algorithmic audit

Another method the municipality is working on is the development of an audit for algorithms. Business processes can be audited according to predefined criteria, and the same option should exist for algorithms. Amsterdam is developing an algorithmic audit in conjunction with KPMG; the SIA reporting system will be the first test run. “The reporting system offers a case that we know is not highly sensitive,” Sukel says. “We are using the audit to investigate exactly what could go wrong. The audit has already had its fair share of publicity. It is a good thing that so many people are focusing on it. That will expose everything. Just by saying ‘we are conducting an audit’, people are already talking about it and thinking about it. It generates attention, highlighting the effects of algorithms. The exact details of the framework are still unknown to us, since the project is not finished yet.”

Ability to explain algorithms

Another consideration in a fair digital city is what the Tada manifesto refers to as ‘the human factor’. Life in a city that uses algorithms should remain comprehensible to its citizens. The ability to explain algorithms is part of that. Sukel: “This is a whole area of expertise. There are entire research groups at the University of Amsterdam working on it. They ask questions such as: ‘How do I explain that a model makes certain choices?’ That is easier for some models than for others. The text model of the SIA reporting system is easy for me to explain. You could picture it as a decision tree: a list of conditions that need to be met in order to file something under a certain category. That means you can track precisely why a certain decision has been made. But image recognition algorithms are highly complicated, and they are also relatively new. We do not know exactly how they work, so we can’t really explain them.”
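
Sukel’s decision-tree picture can be made literal in a few lines. The toy data below is invented purely for illustration; the point is that every prediction from such a model can be traced back to explicit, human-readable conditions on words, which is exactly what makes it explainable.

```python
# Toy example: a small decision tree over word counts, printed as rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

texts = [
    "couch dumped on the sidewalk",
    "trash bags next to the container",
    "boat speeding through the canal",
    "loud music from the neighbours at night",
]
labels = ["bulky waste", "household waste", "nuisance on the water", "noise nuisance"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
tree = DecisionTreeClassifier(random_state=0).fit(X, labels)

# The printed rules show which words lead to which category, step by step.
print(export_text(tree, feature_names=vectorizer.get_feature_names_out().tolist()))
```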

“That difference in explainability matters for how you apply algorithms in your processes,” Sukel says. “I work on lots of projects in the public space. In those projects, you can take the application of new techniques just a bit further. But for processes that deal with people and have an impact on their lives, you need to be able to offer a far better explanation. In such cases, you could decide to limit algorithms to an advisory role and leave the actual decisions in human hands. You do not have to blindly follow an algorithm. You can also use a combination of man and machine.”
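
The advisory role Sukel describes could take a shape like the sketch below: the model only routes a report on its own when it is sufficiently confident, and defers everything else to a human reviewer. The threshold, function name and ‘manual review’ label are all hypothetical choices, to be tuned per process.

```python
# Hedged sketch of a human-in-the-loop routing policy.
AUTO_ROUTE_THRESHOLD = 0.90  # hypothetical cut-off


def route_report(model, text: str) -> str:
    """Return a category, or defer to a human reviewer when the model is unsure."""
    probabilities = model.predict_proba([text])[0]
    best = probabilities.argmax()
    if probabilities[best] >= AUTO_ROUTE_THRESHOLD:
        return model.classes_[best]  # confident enough: machine decides
    return "manual review"           # otherwise: a person decides

# Works with any scikit-learn classifier that exposes predict_proba,
# for example the pipeline from the earlier report-classifier sketch:
# print(route_report(model, "a boat was speeding past the houseboats"))
```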

“By being transparent, we can learn from each other”

There is no such thing yet as a uniform approach to ensure that algorithms are fair, Sukel states: “The cases are so diverse that there is no chance of designing a universal approach. It is especially important to keep on thinking: What’s happening here, and is it unfair? Is there any inherent bias? Are people being disadvantaged?” A key approach to encourage such questions is transparency. “In implementing the reporting system, we also try to be as transparent as possible: we share the source code, we organize meet-ups where people can ask questions, and I write articles about how it works. Basically, you want to show exactly what you are doing. That means that if there are any flaws in it, people can say so. It is good to collaborate with more people and to make choices together. It is impossible to achieve a comprehensive overview on your own. And if you are transparent, you are basically collaborating with everyone.”

Sukel explains what he means by that. “Sharing is the fastest way to learn. Right now, many municipalities and companies are working alone, discovering everything on their own. They all run into the same problems, and each and every one of them is looking for a solution all by themselves – but the technology is the same all over the world. By being transparent, we can learn from each other.”

Author: Tessel Renzenbrink

Translator: Joy Philips
