Mona de Boer is a systems auditor at PwC. She opens and dissects systems and makes them understandable and reliable for their users. “As a child, I wanted to be a brain surgeon. By peeking inside people’s skulls, I thought I would be able to understand people. Now I’m looking inside the engines of complex digital systems: neural, self-learning networks. You could say that, in a roundabout way, I’ve ended up right where I wanted to be.”
The essence of what all accountants do is reducing the information asymmetry between a company’s managing board and its shareholders. That reduction is what lets shareholders trust that their money is being spent well: as an external party, can I invest in this company? Can I trust them?
But an aspiring brain surgeon will not be satisfied for long with a simple balance sheet of assets and liabilities. It is the rapidly progressing digitalization of the information on which decisions are based and the automation of these decisions that makes this role challenging. Sometimes so challenging that De Boer has to develop new methods to continue meeting the demand for reliability.
“In recent years, we have seen a convergence of three essential factors: datafication, in which more and more ‘things’ are being measured and stored as data; digitalization, in which all that data is being stored primarily (and often exclusively) in digital formats; and calculating capacity: the technology to take those massive volumes of data, now digital, and actually do something with it.”
This has enabled us to develop wonderful self-learning systems that are more and more able to analyze patterns, make predictions and help us maintain control of complexity.
But the challenge is explaining it. The fact that something works isn’t enough. We want to know how it works.
For example, primary schools are increasingly using algorithms to offer advice about the level of secondary school that the child should progress to. If your son or daughter gets a recommendation to go to a lower level than expected, we want to know what that decision was based on.
Relatively simple algorithms make this easy to explain: Henry is going to a less academically oriented secondary school because he frequently fails his classes, misses 15% of the lessons, and has never borrowed a book from the library. But as complexity increases, as it does in neural networks, our ability to explain what’s happening rapidly drops to near zero: Susanna is not going to the pre-university track because of her pupil score. That number was derived by a self-learning AI trained on a database of 1.5 million Dutch primary school pupils with 1,500 data points each, including such factors as the number of siblings, travel patterns, social media posts, and poor credit ratings.
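The contrast can be made concrete. A rule-based recommendation like Henry’s can report its own reasons, because every rule is legible. The sketch below is a minimal illustration; the rules, thresholds, and field names are invented for this example, not taken from any real school system.

```python
# Hypothetical illustration: a transparent, rule-based track recommendation
# that can list the reasons behind its own decision. Rules and thresholds
# are invented for this sketch.

def recommend_track(pupil):
    """Return (recommendation, reasons) for a secondary-school track."""
    reasons = []
    if pupil["failed_classes"] >= 3:
        reasons.append("frequently fails classes")
    if pupil["absence_rate"] >= 0.15:
        reasons.append("misses 15% or more of lessons")
    if pupil["library_loans"] == 0:
        reasons.append("has never borrowed a library book")
    # Two or more red flags push the advice toward a less academic track.
    track = "vocational" if len(reasons) >= 2 else "academic"
    return track, reasons

henry = {"failed_classes": 4, "absence_rate": 0.15, "library_loans": 0}
track, why = recommend_track(henry)
print(track, why)
```

Every factor in the output can be traced back to a single, human-readable rule; a neural network trained on 1,500 data points per pupil offers no such trace.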
Can these types of complex, self-learning, self-correcting algorithms still be explained? Is it still possible to generate confidence if no one knows how the black box works anymore?
“Yes, it is, and it has to be done,” De Boer believes. “In theory, everything can be explained. Neural networks, too, but whether it is always feasible in practice… That’s a different question. Explaining a highly complex system takes time and effort. And we aren’t eager to incur those costs.”
The best option for explaining the systems may be leaving it up to the systems themselves. These possibilities will be the focus of De Boer’s PhD research in the next few years: “I think that a neural network will be able to explain itself with real certainty within 10 years. This will not make economic sense for all algorithms yet. We will have to make choices about which ones we want to have explained, and which ones it is economically feasible to explain.”
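Techniques for coaxing explanations out of black boxes already exist, even if they fall short of a network truly explaining itself. One widely used, model-agnostic idea is permutation importance: permute one input across cases and measure how much the model’s output moves. A minimal sketch, with a hypothetical scorer (invented weights) standing in for an opaque pupil-score model:

```python
# Model-agnostic sketch of permutation importance. The scorer below is a
# hypothetical stand-in for an opaque, trained model; its weights are
# invented for illustration.

def pupil_score(pupil):
    # Black box from the auditor's point of view: only inputs and
    # outputs are observable.
    return 0.6 * pupil["grades"] - 0.3 * pupil["absence"] + 0.1 * pupil["siblings"]

def permutation_importance(model, rows, feature):
    """Average shift in the model's output when one feature's values are
    permuted across pupils. For reproducibility this sketch uses a single
    deterministic permutation (reversal); in practice values are shuffled
    randomly and the result averaged over repeats."""
    baseline = [model(r) for r in rows]
    permuted_values = [r[feature] for r in rows][::-1]
    perturbed = [model({**r, feature: v}) for r, v in zip(rows, permuted_values)]
    return sum(abs(b - p) for b, p in zip(baseline, perturbed)) / len(rows)

pupils = [{"grades": g, "absence": 0.1, "siblings": 1} for g in (1, 2, 3, 4)]
print(permutation_importance(pupil_score, pupils, "grades"))    # large shift
print(permutation_importance(pupil_score, pupils, "siblings"))  # no shift: feature is constant here
```

Such post-hoc probes reveal which inputs drive a score, but they approximate the model from the outside; a system that explains itself, as De Boer envisions, would report its reasoning directly.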
Even in the longer term, we cannot expect every system to be explained. We will have to make choices. Wherever we are required to provide that explanation – by law or by ethics – both companies and government authorities will have to invest in doing so.