Tada wants to critically question society. It is important that we apply that same critical attitude to ourselves as well. We asked Dr. Merel Noorman and Dr. Linnet Taylor from the Tilburg Institute for Law, Technology, and Society (TILT) to take a critical look at Tada based on their scientific knowledge and years of expertise. We are grateful that they took the time and produced a thorough analysis of Tada’s blind spots. In a later blog post, we will give you an update on how we are using their ideas to improve Tada.
Tada’s blind spots
Dr. Merel Noorman and Dr. Linnet Taylor
Tada has become a campaign, as you can read on the website: a campaign to bring six public principles to the attention of, in particular, developers of smart city technologies and policy-makers. It started in 2017 as an initiative of the Amsterdam Economic Board, which invited a group of professionals and citizens to think through which principles should apply in responsible digital cities. This consultation process ultimately resulted in the Tada manifesto, which contains six core values: inclusive, control, tailored to the people, legitimate and monitored, open and transparent, and from everyone – for everyone. By now, the manifesto has been signed by 452 city residents and 89 organizations. Since 2018, it has also been part of the coalition agreement of the City of Amsterdam, further fleshing out what the city means by terms such as ‘responsible data use’ and ‘responsible digital city’.
Tada as an awareness-raising campaign is a good first step, and it joins a series of similar initiatives that have developed manifestos, principles and ethical guidelines to put the ethics of (big) data and AI systems on the agenda. The EU, companies such as Google and IBM, and many other cities have also presented their own lists of principles and values to encourage various parties to innovate and use data responsibly. Such initiatives have had an impact: the ethics of AI and data are now high on the agenda of many stakeholders. The general public, too, has become increasingly aware of the ethical questions raised by new smart technologies.
At the same time, these lists necessarily state fairly abstract, general values and principles that, at first glance, everyone would seem to agree with. Principles such as open, transparent, legitimate, and inclusive are ‘yay’ words: they evoke positive emotions, but at the same time they can mean everything and nothing at all. Because the principles are so abstract, they can be applied in many different contexts. But this abstraction also carries the risk of treating the list as a gratuitous checklist, on which each criterion can be interpreted however one likes. For what does transparency mean, exactly? Transparency for whom? Who needs to understand it and be able to act on it? And is open always a good thing, under any and all circumstances? Or does it also have problematic aspects? Open access to data can increase the power of large market parties – who have the manpower and money to do something with that data – relative to smaller organizations. It is worth asking whether that is always advisable.
This means that the time has now come to look at what all these principles, values and guidelines mean in practice. The organization behind Tada is working on its own implementation of these principles, organizing workshops to help projects give more specific meaning to the six principles. On the site, the Tada team states that “The purpose of the workshop is to teach participants to recognize ethical dilemmas in the digital domain and develop skills to deal with these dilemmas. During the workshop, people will work on actual cases encountered by the organization.” This is another good next step to raise awareness of the ethical aspects of smart technologies and to explore what the six principles mean in practice.
However, it cannot stop there, especially because the Tada approach has a number of blind spots. Firstly, a delimited list of principles risks making us blind to other principles and interests. For example, solidarity and sustainability are not included. And while openness and transparency are certainly important for something like a crowd management system, any such project will also have to consider other questions, such as freedom of movement, or how the benefits and burdens of the technology are distributed. A short list is certainly useful as a starting point – and it would not take much effort to position solidarity within the scope of ‘inclusive’ or ‘from everyone – for everyone’. But the list that was chosen also reflects a specific (political) perspective. The decision to prioritize these particular values is not ‘neutral’. The fact that this list was formulated by a select group of professionals and citizens raises the question of whether another select group might have come up with a different list.
In addition, principles can be context-dependent and open to multiple interpretations. Transparency in a digital system may mean something different for a civil servant than for the citizen affected by the system. A list of core values and guiding principles can even be used to justify oppressive technology. The history of digital technology teaches us that it has often been used in ways that disproportionately disadvantage certain population groups in cities – within the frameworks of the law and in accordance with values endorsed by a majority of the population. Just consider the surveillance technologies currently being deployed on a large scale in Hong Kong. Principles can also conflict with one another, as in the all-too-familiar trade-off between security and privacy. So who will ultimately decide which principles are prioritized, or which interpretation takes precedence? The idea of responsible digital cities should therefore focus not only on a predefined set of principles, but also on the politics of principles in general.
Which brings us to the second blind spot. The Tada manifesto is fairly technocentric and datacentric: it focuses on the technology and its development. The blind spot here is that other perspectives on a problem or phenomenon may be overlooked. Tada concentrates on the design process and the principles built into the technology by system developers, but pays less attention to the unintended effects of using the technology, such as how citizens perceive it, the potential manipulation of the technology by third parties, or the possibility of function creep (in which technology is used for purposes other than those it was originally designed for).
Finally, a third possible blind spot follows from this: how public principles are embedded in the institutional structures that smart technologies are part of. A development team can, with the best of intentions, develop a system entirely according to the Tada principles (and more). But if we do not examine how these principles form part of the checks and balances within the system, and if institutions are not receptive to a public that may or may not embrace these principles, then drawing up lists of principles is little more than public relations to make you look good.
Designing around core values, as Tada advocates, is absolutely necessary. But embedding such technology in existing practices and institutions in ways grounded in those values and principles – and dealing with unexpected and unintended effects – must also be a central component of the governance structures surrounding the technology.